Tomáš Rusňák, T. Kasanický, Peter Malík, J. Mojžiš, J. Zelenka, M. Svicek, Dominik Abrahám, A. Halabuk
Multitemporal crop classification approaches have demonstrated high performance within a given season. However, cross-season and cross-region crop classification presents a unique transferability challenge. This study addresses this challenge by adopting a domain generalization approach, i.e., training models on multiple seasons to improve generalization to new, unseen target years. We utilize a comprehensive five-year Sentinel-2 dataset over different agricultural regions in Slovakia and a diverse crop scheme (eight crop classes). We evaluate the performance of different machine learning classification algorithms, including random forests, support vector machines, quadratic discriminant analysis, and neural networks. Our main findings reveal that the transferability of models across years differs between regions, with the Danubian lowlands demonstrating better performance (overall accuracies ranging from 91.5% in 2022 to 94.3% in 2020) than eastern Slovakia (overall accuracies ranging from 85% in 2022 to 91.9% in 2020). Quadratic discriminant analysis, support vector machines, and neural networks consistently demonstrated high performance across diverse transferability scenarios. The random forest algorithm was less reliable in generalizing across scenarios, particularly when the distribution of the unseen domain deviated significantly from the training data. This finding underscores the importance of employing a multi-classifier analysis. Rapeseed, grasslands, and sugar beet consistently show stable transferability across seasons. All observation periods contribute to the classification, with July being the most important and August the least. Acceptable performance can be achieved as early as June, with only slight improvements towards the end of the season.
Finally, employing a multi-classifier approach allows for parcel-level confidence determination, enhancing the reliability of crop distribution maps by assuming higher confidence when multiple classifiers yield similar results. To enhance spatiotemporal generalization, our study proposes a two-step approach: (1) determine the optimal spatial domain to accurately represent crop type distribution; and (2) apply interannual training to capture variability across years. This approach helps account for various factors, such as different crop rotation practices, diverse observational quality, and local climate-driven patterns, leading to more accurate and reliable crop classification models for nationwide agricultural monitoring.
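The parcel-level confidence idea can be sketched as classifier agreement: confidence is the fraction of classifiers that vote for the majority label. This is a minimal illustration of the principle, not the paper's exact procedure:

```python
from collections import Counter

def parcel_confidence(predictions):
    """Majority label and agreement ratio for one parcel.

    predictions: list of crop labels, one per classifier
    (e.g., RF, SVM, QDA, NN).
    """
    counts = Counter(predictions)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(predictions)

# Example: three of four classifiers agree on "rapeseed".
label, conf = parcel_confidence(["rapeseed", "rapeseed", "rapeseed", "maize"])
```

Parcels where all classifiers agree would be mapped with full confidence; split votes flag parcels for closer inspection.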
"Crop Mapping without Labels: Investigating Temporal and Spatial Transferability of Crop Classification Models Using a 5-Year Sentinel-2 Series and Machine Learning." Remote Sensing, 5 July 2023. doi:10.3390/rs15133414
Lei Xu, Hongchu Yu, Zeqiang Chen, Wenying Du, Nengcheng Chen, Min Huang
Surface soil moisture (SSM) and root-zone soil moisture (RZSM) are key hydrological variables for the agricultural water cycle and vegetation growth. Accurate SSM and RZSM forecasting at sub-seasonal scales would be valuable for agricultural water management and preparedness. Currently, weather model-based soil moisture predictions are subject to large uncertainties due to inaccurate initial conditions and empirical parameterization schemes, while data-driven machine learning methods are limited in modeling the long-term temporal dependencies of SSM and RZSM because they do not account for soil water processes. Here, we integrate model-based soil moisture predictions from a sub-seasonal-to-seasonal (S2S) model into a data-driven stacked deep learning model to construct a hybrid SSM and RZSM forecasting framework. The hybrid forecasting model is evaluated over the Yangtze River Basin and parts of Europe at 1- to 46-day lead times and is compared with four baseline methods: support vector regression (SVR), random forest (RF), convolutional long short-term memory (ConvLSTM), and the S2S model itself. The results indicate substantial spatiotemporal skill improvements of the hybrid model over the baselines in both study areas, in terms of the correlation coefficient, unbiased root mean square error (ubRMSE), and RMSE. The hybrid model benefits from the long-lead predictive skill of the S2S model and retains the advantages of data-driven soil moisture memory modeling at short leads, which accounts for its superiority. Overall, the developed hybrid model is promising for improved sub-seasonal SSM and RZSM forecasting over global and local areas.
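The hybrid idea of combining a physics-based S2S forecast with a data-driven component can be illustrated with a deliberately simple linear blend fitted by least squares. The paper uses a stacked deep learning model; the toy series, noise levels, and persistence forecast below are assumptions purely for illustration:

```python
import numpy as np

# Toy stand-ins (assumptions for illustration): a "true" soil moisture
# series, a noisy-but-skilful S2S forecast, and a simple data-driven
# forecast (persistence of the previous value).
rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 6, 200)) + 0.05 * rng.normal(size=200)
s2s = truth + 0.3 * rng.normal(size=200)
datadriven = np.roll(truth, 1)

# Fit blend weights on a training window by least squares:
# truth ~ w0*s2s + w1*datadriven + w2
X = np.column_stack([s2s[:150], datadriven[:150], np.ones(150)])
w, *_ = np.linalg.lstsq(X, truth[:150], rcond=None)

# Held-out evaluation: the blend should beat the raw S2S forecast.
Xte = np.column_stack([s2s[150:], datadriven[150:], np.ones(50)])
blend = Xte @ w
rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
blend_rmse = rmse(blend, truth[150:])
s2s_rmse = rmse(s2s[150:], truth[150:])
```

The same pattern scales up: at short leads the data-driven input dominates (soil moisture memory), while at long leads the S2S input carries the skill.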
"Hybrid Deep Learning and S2S Model for Improved Sub-Seasonal Surface and Root-Zone Soil Moisture Forecasting." Remote Sensing, 5 July 2023. doi:10.3390/rs15133410
Christoph Jörges, Hedwig Sophie Vidal, T. Hank, H. Bach
Solar photovoltaic (PV) panels offer great potential to reduce greenhouse gas emissions as a renewable energy technology. The number of PV installations has increased significantly in recent years and is expected to grow further. Accurate, global mapping and monitoring of PV modules with remote sensing methods is therefore important for predicting energy production potential, revealing socio-economic drivers, supporting urban planning, and estimating ecological impacts. Hyperspectral imagery provides crucial information for identifying PV modules based on their physical absorption and reflection properties. This study investigated, for the first time, spectral signatures of spaceborne PRISMA data at a low 30 m resolution, as well as airborne AVIRIS-NG data at a medium 5.3 m resolution, for the detection of solar PV. The study region is located around Irlbach in southern Germany. A physics-based approach using the spectral indices nHI, NSPI, aVNIR, PEP, and VPEP was used for the classification of the hyperspectral images. Validated against a solar PV ground truth dataset of the study area, the method achieved a user’s accuracy of 70.53% and a producer’s accuracy of 88.06% for the PRISMA data, and a user’s accuracy of 65.94% and a producer’s accuracy of 82.77% for AVIRIS-NG.
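Physics-based index classification of this kind boils down to thresholding band-combination indices. A minimal sketch with a generic normalized-difference index follows; the band wavelengths, reflectance values, and threshold are hypothetical and are not the paper's nHI/NSPI definitions:

```python
import numpy as np

def normalized_index(band_a, band_b):
    """Generic normalized-difference index: (a - b) / (a + b)."""
    band_a = np.asarray(band_a, dtype=float)
    band_b = np.asarray(band_b, dtype=float)
    return (band_a - band_b) / (band_a + band_b + 1e-12)

# Hypothetical reflectances at two wavelengths for 4 pixels. PV modules
# show characteristic absorption features, so thresholding such an
# index can flag candidate PV pixels.
r_band1 = np.array([0.30, 0.10, 0.25, 0.05])
r_band2 = np.array([0.10, 0.30, 0.20, 0.30])
index = normalized_index(r_band1, r_band2)
pv_mask = index > 0.2   # illustrative threshold, not from the paper
```

The small epsilon in the denominator guards against division by zero over dark pixels.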
"Detection of Solar Photovoltaic Power Plants Using Satellite and Airborne Hyperspectral Imaging." Remote Sensing, 5 July 2023. doi:10.3390/rs15133403
The fractal dimension (FD) is a classical nonlinear dynamic index that can effectively reflect the dynamic transformation of a signal. However, the FD can only reflect signal information at a single scale over the whole frequency band. To solve this problem, we combine refined composite multi-scale processing with the FD and propose the refined composite multi-scale FD (RCMFD), which reflects signal information at multiple scales. Furthermore, we propose the hierarchical RCMFD (HRCMFD) by introducing hierarchical analysis, which represents the multi-scale information of signals in each sub-frequency band. Moreover, two ship-radiated noise (SRN) multi-feature extraction methods based on RCMFD and HRCMFD are proposed. Simulation results indicate that RCMFD and HRCMFD can effectively discriminate different simulated signals. Experimental results show that the two proposed feature-extraction methods distinguish six types of SRN more effectively than other feature-extraction methods. The HRCMFD-based multi-feature extraction method performs best, reaching a recognition rate of 99.7% with a combination of five features.
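The refined composite multi-scale construction can be sketched as: for each scale, coarse-grain the signal at every possible offset and average the resulting FD values. Katz's estimator stands in for the FD here as an illustrative choice; the paper's exact FD definition may differ:

```python
import numpy as np

def katz_fd(x):
    """Katz fractal dimension of a 1-D signal (illustrative estimator)."""
    x = np.asarray(x, dtype=float)
    dists = np.abs(np.diff(x))
    L = dists.sum()                    # total "length" of the waveform
    d = np.max(np.abs(x - x[0]))       # maximal excursion from the first point
    n = len(dists)
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

def rcmfd(x, scale):
    """Refined composite multi-scale FD: average the FD over all
    coarse-graining offsets at the given scale."""
    x = np.asarray(x, dtype=float)
    fds = []
    for offset in range(scale):
        seg = x[offset:]
        m = len(seg) // scale
        coarse = seg[: m * scale].reshape(m, scale).mean(axis=1)
        fds.append(katz_fd(coarse))
    return float(np.mean(fds))

rng = np.random.default_rng(1)
noise = rng.normal(size=2000)              # irregular signal: high FD
tone = np.sin(0.05 * np.arange(2000))      # smooth signal: low FD
```

An irregular signal (noise) yields a higher multi-scale FD than a smooth tone, which is the discriminative property the feature extraction exploits.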
Yuxing Li, Lili Liang, Shuai-Shuai Zhang. "Hierarchical Refined Composite Multi-Scale Fractal Dimension and Its Application in Feature Extraction of Ship-Radiated Noise." Remote Sensing, 5 July 2023. doi:10.3390/rs15133406
The recent discovery of water ice in the lunar permanently shadowed regions (PSRs) has driven interest in robotic exploration, since the ice could be used to produce water, oxygen, and hydrogen that would enable sustainable human exploration in the future. However, the absence of direct sunlight in the PSRs makes it difficult for a robot to obtain clear images, impacting crucial tasks such as obstacle avoidance, pathfinding, and scientific investigation. This study therefore proposes a visual simultaneous localization and mapping (SLAM)-based robotic mapping approach that combines dense mapping with low-light image enhancement (LLIE) methods. The proposed approach was experimentally examined and validated in an environment that simulated the lighting conditions of the PSRs. The mapping results show that the LLIE method leverages scattered low light to enhance the quality and clarity of terrain images, improving the rover’s overall perception and mapping capabilities in low-light environments.
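As a rough illustration of what an LLIE step does, a simple gamma correction brightens dark regions while compressing highlights. The paper's actual LLIE method is not specified in this abstract and is certainly more sophisticated; this is only a baseline sketch:

```python
import numpy as np

def gamma_enhance(img, gamma=0.4):
    """Simple gamma-based low-light enhancement (illustrative baseline).

    img: float array in [0, 1]; gamma < 1 lifts dark pixel values
    much more than bright ones, revealing low-light detail.
    """
    img = np.clip(np.asarray(img, dtype=float), 0.0, 1.0)
    return img ** gamma

# A tiny "terrain image" with mostly very dark pixels.
dark = np.array([[0.01, 0.04],
                 [0.09, 0.50]])
bright = gamma_enhance(dark)
```

After enhancement, features buried near black become usable for SLAM feature matching, at the cost of amplifying sensor noise.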
Jae-Min Park, Sungchul Hong, H. Shin. "Pilot Study of Low-Light Enhanced Terrain Mapping for Robotic Exploration in Lunar PSRs." Remote Sensing, 5 July 2023. doi:10.3390/rs15133412
Land scene classification in satellite imagery has a wide range of applications in remote surveillance, environment monitoring, remote scene analysis, Earth observation, and urban planning. Given these applications, several methods have been proposed in recent years to automatically classify land scenes in remote sensing images. Most of this work focuses on designing deep networks to identify land scenes from high-resolution satellite images. However, these methods face challenges: complex texture, cluttered backgrounds, extremely small objects, and large variations in object scale commonly prevent models from achieving high performance. To tackle these challenges, we propose a multi-branch deep learning framework that efficiently combines global contextual features with multi-scale local features to identify complex land scenes. The framework consists of two branches: the first extracts global contextual information from different regions of the input image, and the second exploits a fully convolutional network (FCN) to extract multi-scale local features. The performance of the proposed framework is evaluated on three benchmark datasets: UC-Merced, SIRI-WHU, and EuroSAT. Experiments demonstrate that the framework achieves superior performance compared to similar models.
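The two-branch fusion pattern can be sketched in a few lines: compute a global descriptor and multi-scale local descriptors, then concatenate them into one vector for a classifier head. The pooling operations below are simple hand-crafted stand-ins for the paper's learned branches, shown only to make the fusion-by-concatenation idea concrete:

```python
import numpy as np

def global_branch(img):
    """Global context stand-in: mean over the whole image per channel."""
    return img.mean(axis=(0, 1))

def local_branch(img, scales=(1, 2, 4)):
    """Multi-scale stand-in: per-channel means over progressively
    downsampled versions of the image."""
    feats = [img[::s, ::s].mean(axis=(0, 1)) for s in scales]
    return np.concatenate(feats)

img = np.random.default_rng(0).random((64, 64, 3))   # toy RGB tile
fused = np.concatenate([global_branch(img), local_branch(img)])
# 'fused' (3 global + 3*3 local = 12 values here) would feed a classifier head
```

In the real framework both branches are deep networks, but the fusion step is still a concatenation of their feature vectors.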
Sultan Daud Khan, Saleh M. Basalamah. "Multi-Branch Deep Learning Framework for Land Scene Classification in Satellite Imagery." Remote Sensing, 5 July 2023. doi:10.3390/rs15133408
Bing Xu, Chunju Zhang, Wencong Liu, Jianwei Huang, Yujiao Su, Yucheng Yang, Weijie Jiang, Wenhao Sun
Currently, researchers commonly use convolutional neural network (CNN) models for landslide recognition in remote sensing images. However, as landslide monitoring data grow, the available multimodal landslide data contain rich feature information that existing landslide recognition models have difficulty utilizing. A knowledge graph is a linguistic network knowledge base capable of storing and describing various entities and their relationships. A landslide knowledge graph is used to manage multimodal landslide data, and by integrating this graph into a landslide image recognition model, the multimodal data can be fully utilized for landslide identification. In this paper, we combine knowledge and models, introduce landslide knowledge graphs into landslide identification, and propose a landslide identification method for remote sensing images that fuses knowledge graphs and ResNet (FKGRNet). We take the Loess Plateau of China as the study area and test the fusion model by comparing it against the baseline model and other deep learning models. The experimental results show that, first, with ResNet34 as the baseline, the FKGRNet model achieves 95.08% accuracy in landslide recognition, better than the baseline and other deep learning models. Second, FKGRNet models at different network depths achieve better landslide recognition accuracy than their corresponding baseline models. Third, the FKGRNet model based on feature splicing outperforms the fused-feature classifier in both accuracy and F1-score on the landslide recognition task. Therefore, the FKGRNet model can make fuller use of landslide knowledge to accurately recognize landslides in remote sensing images.
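Feature splicing in this context means concatenating the CNN's image features with a knowledge-graph embedding before the final classifier. A minimal sketch follows; the 512- and 64-dimensional sizes are assumptions (e.g., ResNet34's pooled output and a hypothetical KG embedding width):

```python
import numpy as np

def splice_features(cnn_feat, kg_feat):
    """Feature splicing: join the CNN image features and the
    knowledge-graph embedding into one vector for the classifier."""
    return np.concatenate([np.asarray(cnn_feat), np.asarray(kg_feat)])

cnn_feat = np.random.default_rng(0).random(512)  # stand-in for pooled ResNet34 features
kg_feat = np.random.default_rng(1).random(64)    # hypothetical KG entity embedding
joint = splice_features(cnn_feat, kg_feat)
```

The classifier head then sees both the visual evidence and the graph-encoded domain knowledge in a single input vector.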
"Landslide Identification Method Based on the FKGRNet Model for Remote Sensing Images." Remote Sensing, 5 July 2023. doi:10.3390/rs15133407
Dávid D. Kovács, P. Reyes-Muñoz, Matías Salinero-Delgado, Viktor Ixion Mészáros, K. Berger, J. Verrelst
Global mapping of essential vegetation traits (EVTs) through data acquired by Earth-observing satellites provides a spatially explicit way to analyze the current vegetation states and dynamics of our planet. Although significant efforts have been made, there is still a lack of global and consistently derived multi-temporal trait maps that are cloud-free. Here we present the processing chain for the spatiotemporally continuous production of four EVTs at a global scale: (1) fraction of absorbed photosynthetically active radiation (FAPAR), (2) leaf area index (LAI), (3) fractional vegetation cover (FVC), and (4) leaf chlorophyll content (LCC). The proposed workflow presents a scalable processing approach to the global cloud-free mapping of the EVTs. Hybrid retrieval models, named S3-TOA-GPR-1.0-WS, were implemented in Google Earth Engine (GEE) using Sentinel-3 Ocean and Land Color Instrument (OLCI) Level-1B data for the mapping of the four EVTs along with associated uncertainty estimates. We used the Whittaker smoother (WS) for the temporal reconstruction of the four EVTs, which led to continuous data streams, here applied to the year 2019. Cloud-free maps were produced at 5 km spatial resolution at 10-day time intervals. The consistency and plausibility of the EVT estimates for the resulting annual profiles were evaluated by per-pixel intra-annual correlation against corresponding vegetation products of both MODIS and Copernicus Global Land Service (CGLS). The most consistent results were obtained for LAI, which showed intra-annual correlations with an average Pearson correlation coefficient (R) of 0.57 against the CGLS LAI product. Globally, the EVT products showed consistent results, with correlations above R = 0.5 against reference products between 30 and 60° latitude in the Northern Hemisphere. Additionally, intra-annual goodness-of-fit statistics were also calculated locally against reference products over four distinct vegetated land covers.
As a general trend, vegetated land covers with pronounced phenological dynamics led to high correlations between the different products, whereas sparsely vegetated areas, as well as regions near the equator with weaker seasonality, led to lower correlations. We conclude that the global gap-free mapping of the four EVTs was overall consistent. Thanks to GEE, the entire OLCI L1B catalogue can be processed efficiently into the EVT products on a global scale and made cloud-free with the WS temporal reconstruction method. Additionally, GEE makes the workflow operationally applicable and easily accessible to the broader community.
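The Whittaker smoother itself is a well-known penalized least-squares filter (Eilers, 2003): it finds the series z minimizing |y - z|^2 + lam * |Dz|^2, where D is a difference matrix, by solving (I + lam * D'D) z = y. A minimal dense-matrix sketch on synthetic data follows; the production workflow reconstructs gappy time series (via per-observation weights), whereas this version assumes a complete series:

```python
import numpy as np

def whittaker_smooth(y, lam=10.0, d=2):
    """Whittaker smoother: solve (I + lam * D'D) z = y,
    with D the d-th order difference matrix."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.diff(np.eye(n), n=d, axis=0)     # (n-d, n) difference operator
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# Synthetic seasonal profile with noise, smoothed back toward the signal.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
noisy = np.sin(2 * np.pi * t) + 0.2 * rng.normal(size=100)
smooth = whittaker_smooth(noisy, lam=50.0)
```

Larger `lam` gives smoother reconstructions; for long series a sparse-matrix solve replaces the dense one used here.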
Dávid D. Kovács, P. Reyes-Muñoz, Matías Salinero-Delgado, Viktor Ixion Mészáros, K. Berger, J. Verrelst, "Cloud-Free Global Maps of Essential Vegetation Traits Processed from the TOA Sentinel-3 Catalogue in Google Earth Engine," Remote Sensing, 5 July 2023. https://doi.org/10.3390/rs15133404
There has been increased interest in recognizing the dynamic and flexible changes in shipborne multi-function radar (MFR) working modes. The working modes determine the distribution of pulse descriptor words (PDWs). However, building the mapping from PDWs to working modes in reconnaissance systems presents many challenges: the duration of the working modes is not fixed, temporal features in short PDW slices are incomplete, and reconnaissance feedback in long PDW slices is delayed. This paper proposes an MFR working mode recognition method based on a ShakeDrop-regularized dual-path attention temporal convolutional network (DP-ATCN) with prolonged temporal feature preservation. The method uses a temporal feature extraction network with the Convolutional Block Attention Module (CBAM) and ShakeDrop regularization to learn a high-dimensional mapping of the temporal features of PDWs in a short time slice. Additionally, as PDWs accumulate, an enhanced TCN is introduced to capture long-term temporal dependencies. In this way, a secondary correction of the MFR working mode recognition results is achieved with both promptness and accuracy. Experimental results and analysis confirm that, despite the presence of missing and spurious pulses, the proposed method performs effectively and consistently in shipborne MFR working mode recognition tasks.
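The TCN at the core of a DP-ATCN is built from dilated causal convolutions, which let the receptive field grow exponentially with depth while never attending to future pulses. A minimal numpy sketch of a single dilated causal convolution follows; the kernel values and function name are illustrative, not the paper's architecture.

```python
import numpy as np

def causal_dilated_conv(x, kernel, dilation=1):
    """1-D causal convolution with dilation: the output at step t depends
    only on inputs at t, t-d, t-2d, ... so no future information leaks in."""
    k = len(kernel)
    pad = (k - 1) * dilation                  # left-pad so output aligns with input
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([
        sum(kernel[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])
```

Stacking such layers with dilations 1, 2, 4, … and kernel size k yields a receptive field of (k − 1)(2^L − 1) + 1 time steps after L layers, which is what lets a TCN summarize long PDW slices without recurrence.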
Tian Tian, Qianrong Zhang, Zhizhong Zhang, Feng Niu, Xinyi Guo, Feng Zhou, "Shipborne Multi-Function Radar Working Mode Recognition Based on DP-ATCN," Remote Sensing, 5 July 2023. https://doi.org/10.3390/rs15133415
Ground subsidence is a significant safety concern in mining regions, making large-scale subsidence forecasting vital for mine site environmental management. This study proposes a deep learning-based prediction approach that addresses the limitations of existing prediction methods, such as complicated model parameters or large data requirements. Small baseline subset interferometric synthetic aperture radar (SBAS-InSAR) technology was used to collect spatiotemporal ground subsidence data for the Pingshuo mining area from 2019 to 2022, which were then analyzed with a long short-term memory (LSTM) neural network. An attention mechanism was introduced to capture temporal dependencies and improve prediction accuracy, yielding the AT-LSTM model. The results show that the Pingshuo mine area had subsidence rates ranging from −205.89 to −59.70 mm/yr from 2019 to 2022; the subsidence areas, mainly located around Jinggong-1 (JG-1) and the three open-pit mines, were strongly linked to mining activities and continued to expand. The spatial distribution of the AT-LSTM predictions is largely consistent with the observed situation, with correlation coefficients above 0.97. Compared with the plain LSTM, the AT-LSTM better captured fluctuations in the time series; the model was also more sensitive to the mining method, performing differently for open-pit and shaft mines. Furthermore, in comparison with existing time-series forecasting methods, the AT-LSTM is effective and practical.
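In AT-LSTM-style models, the attention mechanism typically weights the encoder's hidden states so that informative time steps contribute more to the forecast. The abstract does not specify the exact formulation, so the following is a hedged numpy sketch of one common additive-style temporal attention; the scoring vector `w` and the tanh parameterization are assumptions, not the paper's design.

```python
import numpy as np

def temporal_attention(H, w):
    """Temporal attention pooling over a sequence of hidden states.

    H : (T, d) hidden states from a recurrent encoder (e.g. an LSTM)
    w : (d,)   learned scoring vector (hypothetical parameterization)
    Returns the attention-weighted context vector and the weights.
    """
    scores = np.tanh(H) @ w                 # (T,) unnormalized relevance per step
    a = np.exp(scores - scores.max())
    a /= a.sum()                            # softmax over time steps
    context = a @ H                         # (d,) weighted sum of hidden states
    return context, a
```

The context vector then feeds the final prediction layer, letting the model emphasize, for example, time steps around abrupt subsidence accelerations rather than treating all epochs equally.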
Yahong Liu, Jin Zhang, "Integrating SBAS-InSAR and AT-LSTM for Time-Series Analysis and Prediction Method of Ground Subsidence in Mining Areas," Remote Sensing, 5 July 2023. https://doi.org/10.3390/rs15133409