Rapid advancements in satellite technology have led to a significant increase in high-resolution remote sensing (RS) images, necessitating advanced processing methods. In addition, a patent analysis revealed a substantial rise in deep learning and machine learning applications in remote sensing, highlighting the growing importance of these technologies. This paper therefore introduces the Kolmogorov-Arnold Network (KAN) model to remote sensing to enhance efficiency and performance in RS applications. We conducted several experiments to validate KAN’s applicability, starting with the EuroSAT dataset, where we combined a KAN layer with multiple pre-trained CNN models. Optimal performance was achieved with ConvNeXt, leading to the development of the KonvNeXt model. KonvNeXt was evaluated on the Optimal-31, AID, and Merced datasets for validation and achieved accuracies of 90.59%, 94.1%, and 98.1%, respectively. The model also processed data quickly: the Optimal-31 and Merced datasets each completed in 107.63 s, while the larger and more complex AID dataset took 545.91 s. These results are meaningful because KonvNeXt achieved faster speeds and comparable accuracy relative to an existing study that used a Vision Transformer (ViT), demonstrating KonvNeXt’s applicability to remote sensing classification tasks. Furthermore, we investigated the model’s interpretability using occlusion sensitivity; by displaying the influential regions, we validated its potential use in a variety of domains, including medical imaging and weather forecasting. This paper is meaningful in that it is the first to apply KAN to remote sensing classification, demonstrating its adaptability and efficiency.
Minjong Cheon, Changbae Mun. "Combining KAN with CNN: KonvNeXt’s Performance in Remote Sensing and Patent Insights." Remote Sensing, 14 September 2024. https://doi.org/10.3390/rs16183417
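The occlusion-sensitivity analysis mentioned above has a compact core idea: slide an occluding patch over the image and record how much the class score drops. Below is a minimal NumPy sketch with a toy scorer; the scorer, patch size, and baseline value are illustrative, not the paper's actual setup.

```python
import numpy as np

def occlusion_sensitivity(image, score_fn, patch=4, stride=4, baseline=0.0):
    """Slide an occluding patch over the image and record the score drop.

    score_fn maps an (H, W) image to a scalar class score; regions whose
    occlusion causes the largest drop are the most influential.
    """
    h, w = image.shape
    base_score = score_fn(image)
    heatmap = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            heatmap[i, j] = base_score - score_fn(occluded)  # positive = influential
    return heatmap

# Toy scorer: total intensity of the top-left quadrant of an 8x8 image,
# so only occluding that quadrant should lower the score.
score = lambda img: float(img[:4, :4].sum())
img = np.ones((8, 8))
hm = occlusion_sensitivity(img, score, patch=4, stride=4)
print(hm)  # only the top-left cell shows a score drop
```

In practice the scorer would be the trained network's softmax output for the predicted class, and the heatmap is upsampled back to image resolution for display.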
The timely updating of the spatial distribution of buildings is essential to understanding a city’s development, and deep learning methods offer remarkable benefits for quickly and accurately recognizing these changes. Current semi-supervised change detection (SSCD) methods have effectively reduced reliance on labeled data. However, these methods focus primarily on exploiting unlabeled data through various training strategies while neglecting the impact of pseudo-changes and learning bias in models. When labeled data are limited, the abundant low-quality pseudo-labels generated by poorly performing models can hinder performance improvement, leading to incomplete recognition of building changes. To address this issue, we propose a multi-scale feature information interaction and complementation semi-supervised method based on consistency regularization (MSFG-SemiCD), which comprises a multi-scale feature fusion-guided change detection network (MSFGNet) and a semi-supervised update method. The network generates multi-scale change features, integrates them, and captures multi-scale change targets through a temporal difference guidance module, a full-scale feature fusion module, and a depth feature guidance fusion module; this enables the fusion and complementation of information between features, yielding more complete change features. The semi-supervised update method employs a weak-to-strong consistency framework to update model parameters while maintaining perturbation invariance of unlabeled data at both the input and the encoder output features. Experimental results on the WHU-CD and LEVIR-CD datasets confirm the efficacy of the proposed method, with notable improvements at both the 1% and 5% labeled-data levels: the IoU increased by 5.72% and 6.84% on WHU-CD, and by 18.44% and 5.52% on LEVIR-CD, respectively.
Zhanlong Chen, Rui Wang, Yongyang Xu. "Semi-Supervised Remote Sensing Building Change Detection with Joint Perturbation and Feature Complementation." Remote Sensing, 14 September 2024. https://doi.org/10.3390/rs16183424
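The weak-to-strong consistency framework described above follows the familiar FixMatch pattern: confident predictions on a weakly perturbed view become pseudo-labels that supervise the strongly perturbed view. A minimal sketch under that assumption (the threshold, shapes, and loss form are illustrative, not the paper's exact formulation):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerically stable
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def consistency_loss(weak_logits, strong_logits, tau=0.95):
    """Weak-to-strong consistency on unlabeled samples.

    Pseudo-labels come from the weakly perturbed view; only confident
    samples (max prob >= tau) contribute a cross-entropy term against
    the strongly perturbed view's prediction.
    """
    weak_prob = softmax(weak_logits)
    conf = weak_prob.max(axis=-1)
    pseudo = weak_prob.argmax(axis=-1)
    mask = conf >= tau                      # confidence gate on pseudo-labels
    strong_logp = np.log(softmax(strong_logits) + 1e-12)
    ce = -np.take_along_axis(strong_logp, pseudo[..., None], axis=-1).squeeze(-1)
    return float((ce * mask).sum() / max(mask.sum(), 1))

# Two unlabeled samples, two classes: the first is confident, the second is not.
weak = np.array([[8.0, 0.0], [0.3, 0.0]])
strong = np.array([[8.0, 0.0], [0.0, 0.3]])
loss = consistency_loss(weak, strong, tau=0.95)
```

Lowering `tau` admits the low-confidence sample and raises the loss, which illustrates why the confidence gate matters when pseudo-label quality is poor.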
Monitoring aircraft with synthetic aperture radar (SAR) images is an important task. Because of SAR’s coherent imaging mechanism, the images contain substantial speckle interference, which masks the scattering information of aircraft targets and makes them easy to confuse with background scattering points. Automatic detection of aircraft targets in SAR images therefore remains challenging. For this task, this paper proposes a framework that first preprocesses SAR images to reduce speckle and then applies an improved deep learning method to detect aircraft. First, to address the artifacts and excessive smoothing introduced by total variation (TV) despeckling, we propose a new nonconvex total variation (NTV) method that suppresses speckle effectively while preserving the original scattering information as much as possible. Next, we present an aircraft detection framework for SAR images based on You Only Look Once v8 (YOLOv8); the complete framework is thus called SAR-NTV-YOLOv8. A high-resolution small-target feature head is proposed to mitigate the impact of scale changes and the loss of deep feature details on detection accuracy. An efficient multi-scale attention module is then proposed to effectively establish short- and long-term dependencies between feature grouping and multi-scale structures. In addition, a progressive feature pyramid network is adopted to avoid information loss or degradation during multi-level transmission in the backbone’s bottom-up feature extraction. Comprehensive comparison, despeckling, and ablation experiments on the SAR-Aircraft-1.0 and SADD datasets demonstrate the effectiveness of SAR-NTV-YOLOv8, which achieves state-of-the-art performance compared with other mainstream algorithms.
Xiaomeng Guo, Baoyi Xu. "SAR-NTV-YOLOv8: A Neural Network Aircraft Detection Method in SAR Images Based on Despeckling Preprocessing." Remote Sensing, 14 September 2024. https://doi.org/10.3390/rs16183420
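The NTV idea, penalizing image gradients with a saturating nonconvex function so that strong edges are penalized less than in convex TV, can be illustrated with a Perona-Malik-style diffusion whose edge-stopping weight 1/(1+(d/kappa)^2) is the derivative profile of the nonconvex penalty log(1+(d/kappa)^2). This is a stand-in sketch of the edge-preserving principle, not the paper's NTV optimization; all parameters are illustrative.

```python
import numpy as np

def edge_preserving_despeckle(f, iters=20, kappa=0.2, step=0.2):
    """Edge-preserving smoothing: noise-scale differences diffuse almost
    freely (weight near 1), while strong edges diffuse very little
    (weight near 0), so edges survive while speckle-like noise is removed."""
    u = f.astype(float).copy()
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)   # edge-stopping weight
    for _ in range(iters):
        dn = np.roll(u, 1, axis=0) - u; dn[0, :] = 0.0    # north neighbor diff
        ds = np.roll(u, -1, axis=0) - u; ds[-1, :] = 0.0  # south
        de = np.roll(u, -1, axis=1) - u; de[:, -1] = 0.0  # east
        dw = np.roll(u, 1, axis=1) - u; dw[:, 0] = 0.0    # west
        u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(0)
clean = np.zeros((16, 16)); clean[:, 8:] = 1.0            # a sharp edge
noisy = clean + 0.1 * rng.normal(size=(16, 16))
u = edge_preserving_despeckle(noisy)
```

After smoothing, the flat regions have much lower variance than the noisy input while the step edge keeps nearly its full contrast, which is exactly the behavior the NTV preprocessing aims for before detection.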
Yingping Chen, Bo Jiang, Jianghai Peng, Xiuwan Yin, Yu Zhao
Surface downward longwave radiation (SDLR) is crucial for maintaining the global radiative budget balance. Because of their practicality, SDLR parameterization models are widely used, making their objective evaluation essential. In this study, four SDLR parameterization models (three popular existing ones and a newly proposed model) were evaluated against comprehensive ground measurements collected from more than 300 globally distributed sites, under clear- and cloudy-sky conditions at hourly (daytime and nighttime) and daily scales. The validation results indicated that the new model, the Peng model, originally proposed for SDLR estimation at the sea surface and applied here to the land surface for the first time, outperformed all three existing models in nearly all cases, especially under cloudy-sky conditions. Moreover, the Peng model was robust across land cover types, elevation zones, and seasons. All four SDLR models outperformed the Global Land Surface Satellite product from Advanced Very High Resolution Radiometer data (GLASS-AVHRR), ERA5, and CERES_SYN1deg_Ed4A products. The Peng model achieved the highest accuracy, with validated RMSE values of 13.552 and 14.055 W/m² and biases of −0.25 and −0.025 W/m² under clear- and cloudy-sky conditions at the daily scale, respectively. Its superior performance can be attributed to the inclusion of two cloud parameters, total-column cloud liquid water and ice water, in addition to the cloud fraction, although the optimal combination of these three parameters may vary from case to case. All SDLR models still require improvement over wetlands, bare soil, ice-covered surfaces, and high-elevation regions. Overall, the Peng model demonstrates significant potential for widespread use in SDLR estimation over both land and sea surfaces.
Yingping Chen, Bo Jiang, Jianghai Peng, Xiuwan Yin, Yu Zhao. "Evaluation of the Surface Downward Longwave Radiation Estimation Models over Land Surface." Remote Sensing, 14 September 2024. https://doi.org/10.3390/rs16183422
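The RMSE and bias values quoted above are the standard validation metrics for scoring model estimates against ground measurements. A small sketch for completeness (the numbers below are made-up illustrations, not values from the study):

```python
import numpy as np

def rmse_bias(estimates, measurements):
    """Validation metrics: root-mean-square error and mean bias of
    model estimates against ground-truth measurements (same units, W/m²)."""
    d = np.asarray(estimates, float) - np.asarray(measurements, float)
    return float(np.sqrt(np.mean(d ** 2))), float(np.mean(d))

# Hypothetical daily SDLR estimates vs. site measurements, in W/m².
est = np.array([310.0, 295.0, 330.0])
obs = np.array([312.0, 296.0, 327.0])
rmse, bias = rmse_bias(est, obs)
```

A near-zero bias with a nonzero RMSE, as in this toy case, is the typical signature of random rather than systematic error.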
Integrating coastal topographic and bathymetric data to create regional seamless topobathymetric digital elevation models of the land/water interface presents a complex challenge due to the spatial and temporal gaps in data acquisitions. The Coastal National Elevation Database (CoNED) Applications Project develops topographic (land elevation) and bathymetric (water depth) regional-scale digital elevation models by integrating multiple disparate topographic and bathymetric data sources. These integrated regional models are broadly used in coastal and climate science applications, such as sediment transport, storm impact, and sea-level rise modeling. However, CoNED’s current integration method does not address the measurable vertical discrepancies between adjacent near-shore topographic and bathymetric data sources, which often create artificial barriers and sinks along their intersections. To tackle this issue, the CoNED project has developed an additional step in its integration process that collectively assesses the input data to define how to transition between these disparate datasets. This new step defines two zones: a micro blending zone for near-shore transitions and a macro blending zone for the transition between high-resolution (3 m or finer) and moderate-resolution (between 3 m and 10 m) bathymetric datasets. These zones and input data sources are reduced to a multidimensional array of zeros and ones, which is compiled into a 16-bit integer representing a vertical assessment for each pixel. This assessed value enables dynamic pixel-level blending between disparate datasets by leveraging the 16-bit binary notation. Sample-site RMSE assessments demonstrate improved accuracy, with values decreasing from 0.203–0.241 using the previous method to 0.126–0.147 using the new method. This paper introduces CoNED’s unique approach of using binary code to improve the integration of coastal topobathymetric data.
William M. Cushing, Dean J. Tyler. "Mitigating Disparate Elevation Differences between Adjacent Topobathymetric Data Models Using Binary Code." Remote Sensing, 14 September 2024. https://doi.org/10.3390/rs16183418
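The bit-packing scheme described above can be sketched directly: each per-pixel binary mask occupies one bit of a 16-bit code, so a single integer records every assessment for that pixel. The flag names and bit layout below are hypothetical illustrations, not CoNED's actual encoding.

```python
import numpy as np

# Hypothetical per-pixel flags; each boolean mask occupies one bit.
FLAGS = {
    "topo_present":  0,  # bit 0: a topographic source covers this pixel
    "bathy_present": 1,  # bit 1: a bathymetric source covers this pixel
    "micro_blend":   2,  # bit 2: pixel lies in the near-shore micro zone
    "macro_blend":   3,  # bit 3: pixel lies in the macro transition zone
}

def pack(masks):
    """Collapse a dict of boolean masks into one uint16 code per pixel."""
    code = np.zeros(next(iter(masks.values())).shape, dtype=np.uint16)
    for name, bit in FLAGS.items():
        code |= masks[name].astype(np.uint16) << bit
    return code

def has_flag(code, name):
    """Recover one boolean mask from the packed code."""
    return (code >> FLAGS[name]) & 1 == 1

topo = np.array([[1, 1], [0, 0]], bool)
bathy = np.array([[0, 1], [1, 1]], bool)
micro = topo & bathy                     # overlap -> micro blending zone
macro = np.zeros_like(topo)
code = pack({"topo_present": topo, "bathy_present": bathy,
             "micro_blend": micro, "macro_blend": macro})
```

Because each pixel's code is a plain integer, downstream blending logic can branch on any combination of sources with a single bitwise test instead of carrying many separate mask rasters.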
Sumin Kim, Seung-Goo Kang, Yeonjin Choi, Jong-Kuk Hong, Joonyoung Kwak
High-resolution seismic imaging allows better interpretation of subsurface geological structures. In this study, we employ least-squares reverse time migration (LSRTM) to delineate subsurface geological structures from a field dataset, with the goal of understanding the status of Arctic subsea permafrost, a topic pertinent to global warming. The subsea permafrost structures on the Arctic continental shelf, located just below the seafloor at shallow water depths, exhibit anomalously high P-wave velocities. These structural conditions generate internal multiples and noise in the seismic data, making it challenging to perform seismic imaging and to construct a seismic P-wave velocity model with conventional methods. LSRTM offers a promising alternative: it addresses these challenges through a linearized inverse problem, achieving high-resolution subsurface imaging by optimizing the misfit between the predicted and the observed seismic data. Synthetic experiments encompassing various subsea permafrost structures and seismic survey configurations were conducted to investigate the feasibility of LSRTM for imaging Arctic subsea permafrost from the acquired field dataset, and these experiments confirmed that such imaging is possible. Furthermore, we applied the LSRTM method to seismic data acquired in the Canadian Beaufort Sea (CBS) and generated a seismic image depicting the subsea permafrost structures of the Arctic region.
Sumin Kim, Seung-Goo Kang, Yeonjin Choi, Jong-Kuk Hong, Joonyoung Kwak. "Seismic Imaging of the Arctic Subsea Permafrost Using a Least-Squares Reverse Time Migration Method." Remote Sensing, 14 September 2024. https://doi.org/10.3390/rs16183425
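At its core, least-squares migration poses imaging as the linear inverse problem min_m ||Lm - d||^2 and refines the model iteratively with migrated residuals (L^T r is the RTM image of the residual). The toy sketch below uses a random matrix as a stand-in for the Born modeling operator L, which in the actual method is a wave-equation operator; it only illustrates the inversion loop.

```python
import numpy as np

# Toy LSRTM-style inversion: steepest descent with exact line search on
# min_m ||L m - d||^2, where L stands in for linearized Born modeling
# and L.T applied to a residual plays the role of migrating it.
rng = np.random.default_rng(0)
L = rng.normal(size=(40, 20))        # stand-in modeling operator
m_true = rng.normal(size=20)         # "true" reflectivity model
d = L @ m_true                       # observed (noise-free) data

m = np.zeros(20)                     # start from a zero reflectivity model
for _ in range(2000):
    r = L @ m - d                    # data residual
    g = L.T @ r                      # gradient = migrated residual
    alpha = (g @ g) / ((L @ g) @ (L @ g) + 1e-30)  # exact line search
    m -= alpha * g
```

The data residual shrinks toward zero and the model converges to `m_true`; in the seismic setting this iterative refinement is what sharpens the migrated image relative to a single RTM pass.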
Joan-Cristian Padró, Valerio Della Sala, Marc Castelló-Bueno, Rafael Vicente-Salar
The Olympic Games are not only a sporting event but also a catalyst for urban development in the host city. In this study, we used remote sensing and GIS techniques to examine the impact of Olympic infrastructure on urban land surface temperature. Using Landsat Collection 2 Tier 1 Level 2 data and the cloud computing provided by Google Earth Engine (GEE), the study examines the effects of different forms of Olympic facility planning across historical periods and location typologies: monocentric, polycentric, peripheral, and clustered Olympic ring. The GEE code covers every Olympic Games from Montreal 1976 to Paris 2024; this paper focuses on the representative cases of Paris 2024, Tokyo 2020, Rio 2016, Beijing 2008, Sydney 2000, Barcelona 1992, Seoul 1988, and Montreal 1976. The study is concerned not so much with absolute land surface temperatures (LST) as with the relative influence of mega-event infrastructure on mitigating or increasing urban heat; a locally normalized land surface temperature (NLST) was used for this purpose. In some cities (Paris, Tokyo, Beijing, and Barcelona), Olympic planning led to the development of green spaces, creating “green spots” with lower-than-average temperatures. However, temperatures vary considerably within intensely built-up areas, such as Olympic villages and the surroundings of the Olympic stadium, which can become “hotspots.” Different planning typologies of Olympic infrastructure can therefore have varying impacts on urban heat islands, with the polycentric and clustered Olympic-ring typologies showing a mitigating effect. This research contributes a cloud computing method that can be updated for future Olympic Games or adapted to other mega-events, and it uses a widely available remote sensing data source to study a specific urban planning context.
Joan-Cristian Padró, Valerio Della Sala, Marc Castelló-Bueno, Rafael Vicente-Salar. "Mapping the Influence of Olympic Games’ Urban Planning on the Land Surface Temperatures: An Estimation Using Landsat Series and Google Earth Engine." Remote Sensing, 13 September 2024. https://doi.org/10.3390/rs16183405
Linsheng Bu, Tuo Fu, Defeng Chen, Huawei Cao, Shuo Zhang, Jialiang Han
The Doppler-spread problem is commonly encountered in space target observation scenarios using ground-based radar when prolonged coherent integration techniques are utilized. Even when the translational motion is accurately compensated, the phase resulting from changes in the target observation attitude (TOA) still spreads the target’s echo energy across multiple Doppler cells. In particular, as the TOA change undergoes multiple cycles within a coherent processing interval (CPI), the Doppler spectrum spreads into equidistant sparse line spectra, posing a substantial challenge for target detection. To address this problem, we propose a generalized likelihood ratio test based on overlapping group shrinkage denoising and order statistics (OGSos-GLRT). First, the Doppler domain signal is denoised according to its equidistant sparse characteristics, allowing for the recovery of Doppler cells where line spectra may be situated. Then, several of the largest Doppler cells are integrated into the GLRT for detection. An analytical expression for the false alarm probability of the proposed detector is also derived. Additionally, a modified OGSos-GLRT method is proposed to make decisions based on an increasing estimated number of line spectra (ENLS), thus increasing the robustness of OGSos-GLRT when the ENLS mismatches the actual value. Finally, Monte Carlo simulations confirm the effectiveness of the proposed detector, even at low signal-to-noise ratios (SNRs).
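The order-statistics side of the detector can be illustrated with a toy test statistic that integrates the k largest Doppler cells. This is a hedged numpy sketch only: it omits the overlapping group shrinkage denoising stage and the analytical false-alarm threshold derivation, and the function names are illustrative:

```python
import numpy as np

def os_glrt_statistic(doppler_spectrum, k):
    """Order-statistics test statistic: integrate the k largest
    Doppler-cell powers, suited to a target whose energy is spread
    over sparse line spectra rather than one cell."""
    powers = np.abs(np.asarray(doppler_spectrum)) ** 2
    return np.sort(powers)[-k:].sum()

def detect(doppler_spectrum, k, threshold):
    """Declare a detection when the statistic exceeds a threshold
    (in the paper, set from the desired false-alarm probability)."""
    return os_glrt_statistic(doppler_spectrum, k) > threshold

# Noise-only spectrum vs. a spectrum with three spectral lines.
noise = np.full(64, 0.1)
target = noise.copy()
target[[5, 15, 25]] = 5.0  # equidistant line spectra
```

Summing the k largest cells rather than a single peak is what lets the detector accumulate energy from a Doppler-spread target.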
Bu, L.; Fu, T.; Chen, D.; Cao, H.; Zhang, S.; Han, J. “Doppler-Spread Space Target Detection Based on Overlapping Group Shrinkage and Order Statistics.” Remote Sensing, 13 September 2024. https://doi.org/10.3390/rs16183413
Xiaojun Liu, James A. Craven, Victoria Tschirhart, Stephen E. Grasby
In this study, we describe a deep learning (DL)-based workflow for the three-dimensional (3D) geophysical inversion of magnetotelluric (MT) data. We derived a mathematical connection between a 3D resistivity model and the surface-observed electric/magnetic field response by using a fully convolutional neural network framework (U-Net). Owing to computer hardware limitations, the resistivity models were generated by using a random walk technique to enlarge the generalization coverage of the neural network model, and 15,000 paired samples were utilized to train and validate it. Grid search was used to select the optimal configuration parameters. With the optimal model framework from the parameter tuning phase, the metrics showed stable convergence during model training/validation. In the test period, the trained model was applied to predict the resistivity distribution by using both simulated synthetic data and real MT data from the Mount Meager area, British Columbia. The reliability of the model prediction was verified with noised input data from the synthetic model. The calculated results can be used to reconstruct the position and shape trends of bodies with anomalous resistivity, which verifies the stability and performance of the DL-based 3D inversion algorithm and showcases its potential practical applications.
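The random-walk generation of training resistivity models can be sketched as follows. The function name, parameters, and resistivity values are illustrative assumptions, not the authors' code:

```python
import numpy as np

def random_walk_resistivity(shape, n_steps, background=100.0,
                            anomaly=10.0, seed=0):
    """Generate a synthetic 3D resistivity model by carving a
    random-walk path of anomalous (conductive) cells into a uniform
    background, giving irregular body shapes for training coverage."""
    rng = np.random.default_rng(seed)
    model = np.full(shape, background)        # ohm-m, uniform host
    pos = np.array([s // 2 for s in shape])   # start at grid center
    for _ in range(n_steps):
        model[tuple(pos)] = anomaly
        step = rng.integers(-1, 2, size=3)    # move -1/0/+1 per axis
        pos = np.clip(pos + step, 0, np.array(shape) - 1)
    return model

model = random_walk_resistivity((8, 8, 8), n_steps=50)
```

Each such model would then be forward-modeled to obtain its surface electric/magnetic response, forming one input–output training pair.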
Liu, X.; Craven, J. A.; Tschirhart, V.; Grasby, S. E. “Estimating Three-Dimensional Resistivity Distribution with Magnetotelluric Data and a Deep Learning Algorithm.” Remote Sensing, 13 September 2024. https://doi.org/10.3390/rs16183400
Darren Ghent, Jasdeep Singh Anand, Karen Veal, John Remedios
Land Surface Temperature (LST) is integral to our understanding of the radiative energy budget of the Earth’s surface since it provides the best approximation to the thermodynamic temperature that drives the outgoing longwave flux from surface to atmosphere. Since 5 July 2017, an operational LST product has been available from the Sentinel-3A mission, with the corresponding product being available from Sentinel-3B since 17 November 2018. Here, we present the first paper describing formal products, including algorithms, for the Sea and Land Surface Temperature Radiometer (SLSTR) instruments onboard Sentinel-3A and 3B (SLSTR-A and SLSTR-B, respectively). We evaluate the quality of both the Land Surface Temperature Climate Change Initiative (LST_cci) product and the Copernicus operational LST product (SL_2_LST) for the years 2018 to 2021. The evaluation takes the form of a validation against ground-based observations of LST across eleven well-established in situ stations. For the validation, the mean absolute daytime and night-time differences against the in situ measurements for the LST_cci product are 0.77 K and 0.50 K, respectively, for SLSTR-A, and 0.91 K and 0.54 K, respectively, for SLSTR-B. These are an improvement on the corresponding statistics for the SL_2_LST product, which are 1.45 K (daytime) and 0.76 K (night-time) for SLSTR-A, and 1.29 K (daytime) and 0.77 K (night-time) for SLSTR-B. The key influencing factors in this improvement include an upgraded database of reference states for the generation of retrieval coefficients, higher stratification of the auxiliary data for the biome and fractional vegetation, and enhanced cloud masking.
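The validation metric quoted above, the mean absolute difference between satellite-retrieved and in situ LST, can be sketched as a short numpy helper (a generic illustration, not the authors' matchup pipeline):

```python
import numpy as np

def mean_abs_diff(satellite_lst, in_situ_lst):
    """Mean absolute difference (kelvin) between satellite-retrieved
    and ground-based LST, skipping missing (NaN) matchups."""
    sat = np.asarray(satellite_lst, dtype=float)
    ref = np.asarray(in_situ_lst, dtype=float)
    mask = ~np.isnan(sat) & ~np.isnan(ref)
    return np.abs(sat[mask] - ref[mask]).mean()
```

In practice such statistics are computed separately for daytime and night-time matchups, as in the figures reported in the abstract.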
Ghent, D.; Anand, J. S.; Veal, K.; Remedios, J. “The Operational and Climate Land Surface Temperature Products from the Sea and Land Surface Temperature Radiometers on Sentinel-3A and 3B.” Remote Sensing, 13 September 2024. https://doi.org/10.3390/rs16183403