Global air quality index prediction using integrated spatial observation data and geographics machine learning
Tania Septi Anggraini, Hitoshi Irie, Anjar Dimara Sakti, Ketut Wikantika
Science of Remote Sensing 11, Article 100197. Pub Date: 2025-02-12. DOI: 10.1016/j.srs.2025.100197
Air pollution occurs worldwide, and each region has unique driving factors that affect human health. However, effective mitigation of air pollution is often hindered by the uneven distribution of air quality monitoring stations, which tend to be concentrated in potential hotspots such as major cities. This study aims to detect and improve the accuracy of the global Air Quality Index from Remote Sensing (AQI-RS) by integrating AQI from ground-based stations with driving factors (meteorological, environmental, air pollution source, and air pollution magnitude parameters derived from satellite observations) as independent variables, using Geographics Machine Learning (GML). The study utilizes data from 425 air pollution stations and the corresponding driving factors globally from 2013 to 2024. GML incorporates geographical characteristics into the analysis by calculating an optimal bandwidth area within its algorithm. Nine scenarios are employed to identify which parameters contribute significantly to the model and to determine the best parameter combination. The best scenario is selected by considering the R² value, Root Mean Square Error (RMSE), and uncertainty of each scenario. The resulting AQI-RS model achieved an average R², RMSE, and uncertainty in the best scenario of 0.89, 5.58, and 5.69 (in AQI units), respectively. The results indicate that GML significantly improves the accuracy of global AQI-RS over previous studies. By accounting for geographical characteristics with GML, this research is expected to provide accurate global AQI predictions, especially in regions without ground-based air pollution stations, in support of worldwide mitigation.
{"title":"Global air quality index prediction using integrated spatial observation data and geographics machine learning","authors":"Tania Septi Anggraini , Hitoshi Irie , Anjar Dimara Sakti , Ketut Wikantika","doi":"10.1016/j.srs.2025.100197","DOIUrl":"10.1016/j.srs.2025.100197","url":null,"abstract":"<div><div>Air pollution can occur in the whole world, with each region having its unique driving factors that contribute to human's health. However, effective mitigation of air pollution is often hindered by the uneven distribution of air quality monitoring stations, which tend to be concentrated in potential hotspots like major cities. This study aims to detect and improve the accuracy of the Global Air Quality Index from Remote Sensing (AQI-RS) by integrating AQI from ground-based stations with driving factors such as meteorological, environmental, sources of air pollution, and air pollution magnitude from satellite observation parameters as independent variables using Geographics Machine Learning (GML). This study utilizes 425 air pollution stations and the driving factors data globally from 2013 to 2024. The GML considers geographical characteristics in the analysis by calculating the optimal bandwidth area in its algorithm. The study employs nine scenarios to identify which parameters significantly contribute to the model and determine the best parameter combinations. In determining the best scenario, this study considers the R<sup>2</sup> value, Root Mean Square Error (RMSE), and uncertainty in each of the scenarios. This study produced an AQI-RS model with an average R<sup>2</sup>, RMSE, and uncertainty in the best scenario of 0.89, 5.58, and 5.69 (AQI unit), respectively. The results indicate that GML significantly improves the accuracy of global AQI-RS over previous studies. By considering geographical characteristics using GML, this research is expected to gain an accurate prediction of AQI globally especially in regions without ground-based air pollution stations for the worldwide mitigation.</div></div>","PeriodicalId":101147,"journal":{"name":"Science of Remote Sensing","volume":"11 ","pages":"Article 100197"},"PeriodicalIF":5.7,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143421044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FsDAOD: Few-shot domain adaptation object detection for heterogeneous SAR image
Siyuan Zhao, Yong Kang, Hang Yuan, Guan Wang, Hui Wang, Shichao Xiong, Ying Luo
Science of Remote Sensing 11, Article 100202. Pub Date: 2025-02-10. DOI: 10.1016/j.srs.2025.100202
Object detection in heterogeneous Synthetic Aperture Radar (SAR) images with inconsistent joint probability distributions is arising more and more frequently in practical applications, where the scarcity of training samples has become an urgent problem. This paper therefore proposes a novel few-shot domain adaptation object detection (FsDAOD) method built on a Faster Region-based Convolutional Neural Network (Faster R-CNN) baseline. First, on top of the baseline architecture, a novel mutual information loss function is introduced that prompts the network to extract domain-specific knowledge, encouraging confident individual predictions while fostering diversity across predictions. Because performance is easily over-fitted to the restricted number of observed objects when feature alignment strictly follows conventional methods, the source instances are first categorized into two groups: a target-domain-easy set and a target-domain-hard set. Asynchronous alignment is then performed between the target-hard set of source instances and the extended set of target instances to achieve effective supervised learning. Such confidence-based sample separation can improve detection efficiency by steering the model toward more easily detected objects first, though it may lead to incorrect decisions on more challenging instances. Extensive experiments on heterogeneous satellite-borne SAR image datasets demonstrate that the detection rate of the proposed method exceeds existing state-of-the-art methods by 5%.
{"title":"FsDAOD: Few-shot domain adaptation object detection for heterogeneous SAR image","authors":"Siyuan Zhao , Yong Kang , Hang Yuan , Guan Wang , Hui Wang , Shichao Xiong , Ying Luo","doi":"10.1016/j.srs.2025.100202","DOIUrl":"10.1016/j.srs.2025.100202","url":null,"abstract":"<div><div>Heterogeneous Synthetic Aperture Radar (SAR) image object detection task with inconsistent joint probability distributions is occurring more and more frequently in practical applications. In which the small sample of data scarcity is becoming an urgent problem for researchers. Therefore, this paper proposes a novel few-shot domain adaptation object detection (FsDAOD) method based on Faster Region Convolutional Neural Network baseline to cope with the above problem. Firstly, employing the foundational structure of the existing baseline method, a novel mutual information loss function is introduced that prompts the neural network to extract domain-specific knowledge. This strategic approach encourages distinctive levels of confidence in individual predictions while fostering overall diversity. Given that performance can be easily over-fitted with a restricted number of observed objects if feature alignment strictly adheres to conventional methods, the set of source instances are initially categorized into two groups: target domain-easy set and target domain-hard set. Subsequently, asynchronous alignment is performed between the target-hard domain set of the source instances and the extended dataset of the target instances to achieve effective supervised learning. It is then asserted that confidence-based sample separation methods can improve detection efficiency by adjusting the model to prioritize the identification of more easily detected objects, but this may lead to incorrect decisions for more challenging instances. Extensive experiments on FsDAOD on heterogeneous satellite-borne SAR image datasets have been conducted, and the experimental results have demonstrated that the detection rate of the proposed method exceeds the existing state-of-the-art methods by 5%.</div></div>","PeriodicalId":101147,"journal":{"name":"Science of Remote Sensing","volume":"11 ","pages":"Article 100202"},"PeriodicalIF":5.7,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143394340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel lightweight 3D CNN for accurate deformation time series retrieval in MT-InSAR
Mahmoud Abdallah, Xiaoli Ding, Samaa Younis, Songbo Wu
Science of Remote Sensing 11, Article 100206. Pub Date: 2025-02-10. DOI: 10.1016/j.srs.2025.100206
Multi-temporal interferometric synthetic aperture radar (MT-InSAR) is a powerful geodetic technique for detecting and monitoring ground deformation over extensive areas. The accuracy of these measurements depends critically on effectively separating unwanted phase signals, such as atmospheric delay effects (APS) and decorrelation noise. Recent advances in data-driven deep learning (DL) methods have shown promise in phase separation by exploiting inherent phase relationships. However, the complex spatiotemporal relationships among InSAR phase components present challenges that traditional 1D or 2D DL models cannot effectively address, leading to potential biases in deformation measurements. To address this limitation, we propose UNet-3D, a novel three-dimensional encoder-decoder architecture that captures the spatiotemporal features of phase components through an enhanced 3D convolutional neural network (CNN) ensemble, enabling accurate separation of deformation time series. In addition, a spatiotemporal mask is designed to reconstruct time series data missing due to decorrelation effects. We also developed a separable convolution operator to reduce computational costs without compromising performance. The proposed model is trained on simulated datasets and benchmarked against existing DL models, achieving improvements of 25.0% in MSE, 1.8% in SSIM, and 0.2% in SNR. Notably, the computational cost is reduced by up to 80% through separable convolution, establishing the proposed model as both lightweight and efficient. Furthermore, a comprehensive analysis of performance factors was conducted to assess the robustness of UNet-3D, facilitating its open-source usability. To validate our approach in real-world scenarios, we conducted a comparative ground deformation monitoring study over Fernandina Volcano in the Galapagos Islands using Sentinel-1 SAR data and the Small Baseline Subset (SBAS) technique in MintPy software. The results show that the correlation between the deformation time series from UNet-3D and the SBAS method is as high as 0.91, and that UNet-3D shows advantages in mitigating topography-related APS effects. Overall, the UNet-3D model represents a significant advance in automating InSAR data processing and enhancing the accuracy of deformation time series retrieval.
{"title":"A novel lightweight 3D CNN for accurate deformation time series retrieval in MT-InSAR","authors":"Mahmoud Abdallah , Xiaoli Ding , Samaa Younis , Songbo Wu","doi":"10.1016/j.srs.2025.100206","DOIUrl":"10.1016/j.srs.2025.100206","url":null,"abstract":"<div><div>Multi-temporal interferometric synthetic aperture radar (MT-InSAR) is a powerful geodetic technique for detecting and monitoring ground deformation over extensive areas. The accuracy of these measurements is critically dependent on effectively separating unwanted phase signals, such as atmospheric delay effects (APS) and decorrelation noise. Recent advancements in data-driven deep learning (DL) methods have shown promise in phase separation by utilizing inherent phase relationships. However, the complex spatiotemporal relationship of InSAR phase components presents challenges that traditional 1D or 2D DL models cannot effectively address, leading to potential biases in deformation measurements. To address this limitation, we propose UNet-3D, a novel three-dimensional encoder-decoder architecture that captures the spatiotemporal features of phase components through an enhanced 3D convolutional neural network (CNN) ensemble, enabling accurate separation of deformation time series. In addition, a spatiotemporal mask is designed to reconstruct missing time series data caused by decorrelation effects. We also developed a separable convolution operator to reduce the computational costs without compromising performance. The proposed model is trained on simulated datasets and benchmarked against existing DL models, achieving an improvement of 25.0% in MSE, 1.8% in SSIM, and 0.2% in SNR. Notably, the computation cost is reduced by up to 80% through separable convolution, establishing the proposed model as both lightweight and efficient. Furthermore, a comprehensive analysis of performance factors was conducted to assess the robustness of UNet-3D, facilitating its open-source usability. To validate our approach in real-world scenarios, we conducted a comparative ground deformation monitoring study over Fernandina Volcano in the Galapagos Islands using Sentinel-1 SAR data and the Small Baseline Subset (SBAS) technique in MintPy software. The results show that the correlation between the deformation time series of UNet-3D and the SBAS method is as high as 0.91 and shows the advantages in mitigating the topography-related APS effects. Overall, the UNet-3D model represents a significant advancement in automating InSAR data processing and enhancing the accuracy of deformation time series retrieval.</div></div>","PeriodicalId":101147,"journal":{"name":"Science of Remote Sensing","volume":"11 ","pages":"Article 100206"},"PeriodicalIF":5.7,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143403678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DeepSARFlood: Rapid and automated SAR-based flood inundation mapping using vision transformer-based deep ensembles with uncertainty estimates
Nirdesh Kumar Sharma, Manabendra Saharia
Science of Remote Sensing 11, Article 100203. Pub Date: 2025-02-07. DOI: 10.1016/j.srs.2025.100203
Rapid and automated flood inundation mapping is critical for disaster management. While optical satellites provide valuable data on flood extent and impact, cloud cover, limited vegetation penetration, and the inability to operate at night make real-time flood assessments difficult. Synthetic Aperture Radar (SAR) satellites can overcome these limitations, allowing for high-resolution flood mapping. However, SAR data remain underutilized due to the limited availability of training data and a reliance on labor-intensive manual or semi-automated change detection methods. This study introduces a novel end-to-end methodology for generating SAR-based flood inundation maps by training deep learning models on weak flood labels generated from concurrent optical imagery. These labels are used to train deep learning models based on Convolutional Neural Network (CNN) and Vision Transformer (ViT) architectures, optimized through multitask learning and model soups. Additionally, we develop a novel gain algorithm to identify diverse ensemble members and estimate uncertainty through deep ensembles. Our results show that ViT-based and CNN-ViT hybrid architectures significantly outperform traditional CNN models, achieving a state-of-the-art Intersection over Union (IoU) score of 0.72 on the Sen1Floods11 test dataset while also providing uncertainty quantification. These models have been integrated into an open-source, fully automated, Python-based tool called DeepSARFlood and demonstrated for the Pakistan floods of 2022 and the Assam (India) floods of 2020. With its high accuracy, processing speed, and ability to estimate uncertainty, DeepSARFlood is optimized for real-time deployment, processing a 1° × 1° (12,100 km²) area in under 40 s, and will complement upcoming SAR missions such as NISAR and Sentinel-1C for flood mapping.
{"title":"DeepSARFlood: Rapid and automated SAR-based flood inundation mapping using vision transformer-based deep ensembles with uncertainty estimates","authors":"Nirdesh Kumar Sharma , Manabendra Saharia","doi":"10.1016/j.srs.2025.100203","DOIUrl":"10.1016/j.srs.2025.100203","url":null,"abstract":"<div><div>Rapid and automated flood inundation mapping is critical for disaster management. While optical satellites provide valuable data on flood extent and impact, their real-time usage is limited by challenges such as cloud cover, limited vegetation penetration, and the inability to operate at night, making real-time flood assessments difficult. Synthetic Aperture Radar (SAR) satellites can overcome these limitations, allowing for high-resolution flood mapping. However, SAR data remains underutilized due to less availability of training data, and reliance on labor-intensive manual or semi-automated change detection methods. This study introduces a novel end-to-end methodology for generating SAR-based flood inundation maps, by training deep learning models on weak flood labels generated from concurrent optical imagery. These labels are used to train deep learning models based on Convolutional Neural Networks (CNN) and Vision Transformer (ViT) architectures, optimized through multitask learning and model soups. Additionally, we develop a novel gain algorithm to identify diverse ensemble members and estimate uncertainty through deep ensembles. Our results show that ViT-based and CNN-ViT hybrid architectures significantly outperform traditional CNN models, achieving a state-of-the-art Intersection over Union (IoU) score of 0.72 on the Sen1Floods11 test dataset, while also providing uncertainty quantification. These models have been integrated into an open-source and fully automated, Python-based tool called DeepSARFlood, and demonstrated for the Pakistan floods of 2022 and Assam (India) floods of 2020. With its high accuracy, processing speed, and ability to estimate uncertainty, DeepSARFlood is optimized for real-time deployment, processing a 1° × 1° (12,100 km<sup>2</sup>) area in under 40 s, and will complement upcoming SAR missions like NISAR and Sentinel 1-C for flood mapping.</div></div>","PeriodicalId":101147,"journal":{"name":"Science of Remote Sensing","volume":"11 ","pages":"Article 100203"},"PeriodicalIF":5.7,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143394429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving aboveground biomass density mapping of arid and semi-arid vegetation by combining GEDI LiDAR, Sentinel-1/2 imagery and field data
Luis A. Hernández-Martínez, Juan Manuel Dupuy-Rada, Alfonso Medel-Narváez, Carlos Portillo-Quintero, José Luis Hernández-Stefanoni
Science of Remote Sensing 11, Article 100204. Pub Date: 2025-02-06. DOI: 10.1016/j.srs.2025.100204
Accurate estimates of forest aboveground biomass density (AGBD) are essential to guide mitigation strategies for climate change. NASA's Global Ecosystem Dynamics Investigation (GEDI) project delivers full-waveform LiDAR data and provides a unique opportunity to improve AGBD estimates. However, global GEDI estimates (GEDI-L4A) have some constraints, such as lack of full coverage of AGBD maps and scarcity of training data for some biomes, particularly in arid areas. Moreover, uncertainties remain about the type of GEDI footprint that best penetrates the canopy and yields accurate vegetation structure metrics. This study estimates forest biomass of arid and semi-arid zones in two stages. First, a model was fitted to predict AGBD by relating GEDI and field data from different vegetation types, including xeric shrubland. Second, different footprint qualities were evaluated, and their AGBD was related to images from Sentinel-1 and -2 satellites to produce a wall-to-wall map of AGBD. The model fitted with field data and GEDI showed adequate performance (%RMSE = 45.0) and produced more accurate estimates than GEDI-L4A (%RMSE = 84.6). The wall-to-wall mapping model also performed well (%RMSE = 37.0) and substantially reduced the underestimation of AGBD for arid zones. This study highlights the advantages of fitting new models for AGBD estimation from GEDI and local field data, whose combination with satellite imagery yielded accurate wall-to-wall AGBD estimates with a 10 m resolution. The results of this study contribute new perspectives to improve the accuracy of AGBD estimates in arid zones, whose role in climate change mitigation may be markedly underestimated.
{"title":"Improving aboveground biomass density mapping of arid and semi-arid vegetation by combining GEDI LiDAR, Sentinel-1/2 imagery and field data","authors":"Luis A. Hernández-Martínez , Juan Manuel Dupuy-Rada , Alfonso Medel-Narváez , Carlos Portillo-Quintero , José Luis Hernández-Stefanoni","doi":"10.1016/j.srs.2025.100204","DOIUrl":"10.1016/j.srs.2025.100204","url":null,"abstract":"<div><div>Accurate estimates of forest aboveground biomass density (AGBD) are essential to guide mitigation strategies for climate change. NASA's Global Ecosystem Dynamics Investigation (GEDI) project delivers full-waveform LiDAR data and provides a unique opportunity to improve AGBD estimates. However, global GEDI estimates (GEDI-L4A) have some constraints, such as lack of full coverage of AGBD maps and scarcity of training data for some biomes, particularly in arid areas. Moreover, uncertainties remain about the type of GEDI footprint that best penetrates the canopy and yields accurate vegetation structure metrics. This study estimates forest biomass of arid and semi-arid zones in two stages. First, a model was fitted to predict AGBD by relating GEDI and field data from different vegetation types, including xeric shrubland. Second, different footprint qualities were evaluated, and their AGBD was related to images from Sentinel-1 and -2 satellites to produce a wall-to-wall map of AGBD. The model fitted with field data and GEDI showed adequate performance (%RMSE = 45.0) and produced more accurate estimates than GEDI-L4A (%RMSE = 84.6). The wall-to-wall mapping model also performed well (%RMSE = 37.0) and substantially reduced the underestimation of AGBD for arid zones. This study highlights the advantages of fitting new models for AGBD estimation from GEDI and local field data, whose combination with satellite imagery yielded accurate wall-to-wall AGBD estimates with a 10 m resolution. The results of this study contribute new perspectives to improve the accuracy of AGBD estimates in arid zones, whose role in climate change mitigation may be markedly underestimated.</div></div>","PeriodicalId":101147,"journal":{"name":"Science of Remote Sensing","volume":"11 ","pages":"Article 100204"},"PeriodicalIF":5.7,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating war-induced damage to agricultural land in the Gaza Strip since October 2023 using PlanetScope and SkySat imagery
He Yin, Lina Eklund, Dimah Habash, Mazin B. Qumsiyeh, Jamon Van Den Hoek
Science of Remote Sensing 11, Article 100199. Pub Date: 2025-02-01. DOI: 10.1016/j.srs.2025.100199
The ongoing 2023 Israel-Hamas War has severe and far-reaching consequences for the people, economy, food security, and environment. The immediate impacts of damage and destruction to cities and farms are apparent in widespread reporting and first-hand accounts from within the Gaza Strip. However, there is no comprehensive assessment of the war's impacts on key Gazan agricultural land, which is vital both for immediate humanitarian concerns during the ongoing war and for long-term recovery. In the Gaza Strip, agriculture is arguably the most important land use system, but remote detection of damage to Gazan agriculture is challenged by diverse agronomic landscapes and small farm sizes. This study uses multi-resolution satellite imagery to monitor damage to tree crops and greenhouses, the most important agricultural land uses in the Gaza Strip. Our methodology involved several key steps. First, we generated a pre-war cropland map distinguishing tree crops (e.g., olives) from greenhouses, using a random forest (RF) model and the Segment Anything Model (SAM) on nominally 3-m PlanetScope and 50-cm Planet SkySat imagery acquired from 2022 to 2023. Second, we assessed war damage to tree crop fields with a harmonic model-based time series analysis of PlanetScope imagery. Third, we assessed damage to greenhouses by classifying PlanetScope imagery with a random forest model. We assessed the accuracy of the resulting tree crop damage map using 1,200 randomly sampled 3 × 3-m areas and generated error-adjusted area estimates with a 95% confidence interval; the greenhouse damage map was validated with a random sampling-based analysis. We found that 64–70% of tree crop fields and 58% of greenhouses had been damaged by 27 September 2024, after almost one year of war in the Gaza Strip. Agricultural land in Gaza City and North Gaza was the most heavily damaged, with 90% and 73% of tree crop fields damaged in the two governorates, respectively. By the end of 2023, all greenhouses in North Gaza and Gaza City had been damaged. Our damage estimates broadly agree with those from UNOSAT but provide more detailed and accurate information, such as the timing of the damage and fine-scale changes. Our results attest to the severe impacts of the Israel-Hamas War on Gaza's agricultural sector, with direct relevance for food security and economic recovery needs. Due to the rapid progression of the war, we have made the latest damage maps and area estimates available on GitHub (https://github.com/hyinhe/Gaza).
{"title":"Evaluating war-induced damage to agricultural land in the Gaza Strip since October 2023 using PlanetScope and SkySat imagery","authors":"He Yin , Lina Eklund , Dimah Habash , Mazin B. Qumsiyeh , Jamon Van Den Hoek","doi":"10.1016/j.srs.2025.100199","DOIUrl":"10.1016/j.srs.2025.100199","url":null,"abstract":"<div><div>The ongoing 2023 Israel-Hamas War has severe and far-reaching consequences for the people, economy, food security, and environment. The immediate impacts of damage and destruction to cities and farms are apparent in widespread reporting and first-hand accounts from within the Gaza Strip. However, there is a lack of comprehensive assessment of the war's impacts on key Gazan agricultural land that are vital for immediate humanitarian concerns during the ongoing war and for long-term recovery. In the Gaza Strip, agriculture is arguably one of the most important land use systems. However, remote detection of damage to Gazan agriculture is challenged by the diverse agronomic landscapes and small farm sizes. This study uses multi-resolution satellite imagery to monitor damage to tree crops and greenhouses, the most important agricultural land in the Gaza Strip. Our methodology involved several key steps: First, we generated a pre-war cropland map, distinguishing between tree crops (e.g., olives) and greenhouses, using a random forest (RF) model and the Segment Anything Model (SAM) on nominally 3-m PlanetScope and 50-cm Planet SkySat imagery, obtained from 2022 to 2023. Second, we assessed damage to tree crop fields due to the war, employing a harmonic model-based time series analysis using PlanetScope imagery. Third, we assessed the damage to greenhouses by classifying PlanetScope imagery using a random forest model. We performed accuracy assessments on a generated tree crop fields damage map using 1,200 randomly sampled 3 × 3-m areas, and we generated error-adjusted area estimates with a 95% confidence interval. To validate the generated greenhouse damage map, we used a random sampling-based analysis. We found that 64–70% of tree crop fields and 58% of greenhouses had been damaged by 27 September 2024, after almost one year of war in the Gaza Strip. Agricultural land in Gaza City and North Gaza were the most heavily damaged with 90% and 73% of tree crop fields damaged in each governorate, respectively. By the end of 2023, all greenhouses in North Gaza and Gaza City had been damaged. Our damage estimate overall agrees with that from UNOSAT but provides more detailed and accurate information, such as the timing of the damage as well as fine-scale changes. Our results attest to the severe impacts of the Israel-Hamas War on Gaza's agricultural sector with direct relevance for food security and economic recovery needs. 
Due to the rapid progression of the war, we have made the latest damage maps and area estimates available on GitHub (<span><span>https://github.com/hyinhe/Gaza</span><svg><path></path></svg></span>).</div></div>","PeriodicalId":101147,"journal":{"name":"Science of Remote Sensing","volume":"11 ","pages":"Article 100199"},"PeriodicalIF":5.7,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143421043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
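The tree-crop damage step rests on a harmonic model-based time series analysis. The abstract does not give the decision rule, so the sketch below invents one for illustration: fit a first-order annual harmonic to pre-war NDVI and flag later observations that fall far below the seasonal expectation.

```python
import numpy as np

def harmonic_fit(t, y, period=365.25):
    """Least-squares fit of y(t) ~ a0 + a1*cos(wt) + a2*sin(wt)."""
    w = 2 * np.pi * t / period
    A = np.column_stack([np.ones_like(t), np.cos(w), np.sin(w)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, y - A @ coef                 # coefficients and residuals

def expected_value(t, coef, period=365.25):
    w = 2 * np.pi * t / period
    return coef @ np.array([1.0, np.cos(w), np.sin(w)])

rng = np.random.default_rng(0)
t_pre = np.arange(0.0, 730.0, 5.0)            # two pre-war years of observations
ndvi_pre = (0.5 + 0.2 * np.sin(2 * np.pi * t_pre / 365.25)
            + rng.normal(0.0, 0.02, t_pre.size))
coef, resid = harmonic_fit(t_pre, ndvi_pre)

# flag damage when a post-war observation drops well below the seasonal model
t_new, ndvi_new = 760.0, 0.18
damaged = ndvi_new < expected_value(t_new, coef) - 3 * resid.std()
print(damaged)
```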
Volatility characteristics and hyperspectral-based detection models of diesel in soils
Jihye Shin, Jaehyung Yu, Jihee Seo, Lei Wang, Hyun-Cheol Kim
Science of Remote Sensing 11, Article 100201. Pub Date: 2025-02-01. DOI: 10.1016/j.srs.2025.100201
This study developed an efficient spectral-index method for detecting diesel content in soils with a hyperspectral camera. Over the 70-day experiment, clean soils were saturated with diesel, and 186 measurements were taken to monitor the evaporation rate and spectral variation. Diesel evaporation followed a logarithmic pattern, with volatility decreasing from 1.57% per day in the initial period to 0.06% per day in the late period. Using the hull-quotient reflectance at 2236 nm, a diesel content prediction model derived from stepwise multiple linear regression (SMLR) achieved satisfactory accuracy with sufficient statistical significance (R² = 0.89, RPD = 2.52). This spectral band also visualizes diesel presence well in hyperspectral images, as it captures variations in two absorption features (CH/AlOH and CH) concurrently. Additionally, the study presents an age estimation model based on the diesel evaporation rate using the same spectral band. Given that this study is based on the largest number of samples and the longest observation period to date, and that the models were developed excluding atmospheric absorption bands, the simple form of the spectral index makes it applicable to large-scale diesel pollution detection with hyperspectral scanners or narrow-band multispectral cameras in real-world cases.
{"title":"Volatility characteristics and hyperspectral-based detection models of diesel in soils","authors":"Jihye Shin , Jaehyung Yu , Jihee Seo , Lei Wang , Hyun-Cheol Kim","doi":"10.1016/j.srs.2025.100201","DOIUrl":"10.1016/j.srs.2025.100201","url":null,"abstract":"<div><div>This study developed an efficient method using hyperspectral camera for detecting diesel content in soils with spectral indices. Over 70 days of the experiment, clean soils were saturated with diesel, and 186 measurements were taken to monitor the evaporation rate and spectral variation. The diesel evaporation followed a logarithmic pattern, where the diesel volatility decreased from 1.57% per day during the initial period to 0.06% per day during the late period. Using the hull-quotient reflectance at 2236 nm, the diesel content prediction model derived from a stepwise multiple linear regression (SMLR) achieved satisfactory accuracy with sufficient statistical significance (R<sup>2</sup> = 0.89, RPD = 2.52). This spectral band was well visualized for diesel presence in hyperspectral images as the band infers variations in two absorptions (CH/AlOH and CH) concurrently. Additionally, this study presented an age estimation model based on the diesel evaporation rate using the same spectral band. Given the fact that this study is based on the largest number of samples with the longest observation period and models were developed excluding atmospheric absorption bands, the simple form of the spectral index makes it applicable to large-scale diesel pollution detection with hyperspectral scanners or narrow-band multispectral cameras in real-world cases.</div></div>","PeriodicalId":101147,"journal":{"name":"Science of Remote Sensing","volume":"11 ","pages":"Article 100201"},"PeriodicalIF":5.7,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143327505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Combining machine learning algorithms for bridging gaps in GRACE and GRACE Follow-On missions using ERA5-Land reanalysis
Jaydeo K. Dharpure, Ian M. Howat, Saurabh Kaushik, Bryan G. Mark
Science of Remote Sensing 11, Article 100198. Pub Date: 2025-01-25. DOI: 10.1016/j.srs.2025.100198
The Gravity Recovery and Climate Experiment (GRACE) and GRACE Follow-On (GFO) missions have provided valuable data for monitoring global terrestrial water storage anomalies (TWSA) over the past two decades. However, the nearly one-year gap between these missions poses challenges for long-term TWSA measurements and various applications. Unlike previous studies, we combine several Machine Learning (ML) methods: Random Forest (RF), Support Vector Machine (SVM), eXtreme Gradient Boosting (XGB), Deep Neural Network (DNN), and Stacked Long Short-Term Memory (SLSTM). The best-performing ML model at each grid cell is used to estimate TWSA and efficiently bridge the gap between GRACE and GFO. The models were trained using six hydroclimatic variables (temperature, precipitation, runoff, evapotranspiration, ERA5-Land-derived TWSA, and cumulative water storage change), as well as a vegetation index and timing variables, to reconstruct global land TWSA at 0.5° grid resolution. We evaluated each model's performance using Nash-Sutcliffe Efficiency (NSE), Pearson's Correlation Coefficient (PCC), and Root Mean Square Error (RMSE). Our results demonstrate test accuracy with area-weighted average NSE, PCC, and RMSE of 0.51 ± 0.31, 0.71 ± 0.23, and 4.75 ± 3.63 cm, respectively. Model performance was further compared across five climatic zones, against two previously reconstructed products (the Li and Humphrey methods) over 26 major river basins, during flood/drought events, and for sea-level rise. Our results showcase the model's superior performance and its ability to accurately fill data gaps at both grid and basin scales globally.
{"title":"Combining machine learning algorithms for bridging gaps in GRACE and GRACE Follow-On missions using ERA5-Land reanalysis","authors":"Jaydeo K. Dharpure , Ian M. Howat , Saurabh Kaushik , Bryan G. Mark","doi":"10.1016/j.srs.2025.100198","DOIUrl":"10.1016/j.srs.2025.100198","url":null,"abstract":"<div><div>The Gravity Recovery and Climate Experiment (GRACE) and GRACE Follow-On (GFO) missions have provided valuable data for monitoring global terrestrial water storage anomalies (TWSA) over the past two decades. However, the nearly one-year gap between these missions pose challenges for long-term TWSA measurements and various applications. Unlike previous studies, we use a combination of Machine Learning (ML) methods—Random Forest (RF), Support Vector Machine (SVM), eXtreme Gradient Boosting (XGB), Deep Neural Network (DNN), and Stacked Long-Short Term Memory (SLSTM)—to identify and efficiently bridge the gap between GRACE and GFO by using the best-performing ML model to estimate TWSA at each grid cell. The models were trained using six hydroclimatic variables (temperature, precipitation, runoff, evapotranspiration, ERA5-Land derived TWSA, and cumulative water storage change), as well as a vegetation index and timing variables, to reconstruct global land TWSA at 0.5° grid resolution. We evaluated the performance of each model using Nash-Sutcliffe Efficiency (NSE), Pearson's Correlation Coefficient (PCC), and Root Mean Square Error (RMSE). Our results demonstrate test accuracy with area weighted average NSE, PCC, and RMSE of 0.51 ± 0.31, 0.71 ± 0.23, and 4.75 ± 3.63 cm, respectively. The model's performance was further compared across five climatic zones, with two previously reconstructed products (Li and Humphrey methods) at 26 major river basins, during flood/drought events, and for sea-level rise. Our results showcase the model's superior performance and its capability to accurately predict data gaps at both grid and basin scales globally.</div></div>","PeriodicalId":101147,"journal":{"name":"Science of Remote Sensing","volume":"11 ","pages":"Article 100198"},"PeriodicalIF":5.7,"publicationDate":"2025-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143327506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating the contribution of understory to radiative transfer simulations through reconstructing 3-D realistic temperate broadleaf forest scenes based on multi-platform laser scanning
Xiaohan Lin, Ainong Li, Jinhu Bian, Zhengjian Zhang, Xi Nan, Limin Chen, Yi Bai, Yi Deng, Siyuan Li
Science of Remote Sensing 11, Article 100196. Pub Date: 2025-01-23. DOI: 10.1016/j.srs.2025.100196
Forests are complex, multi-layered ecosystems mainly comprising an overstory, understory, and soil. Radiative transfer simulations of these forests underpin the theoretical framework for retrieving forest parameters; however, the understory has often been neglected due to limitations in data acquisition technology. In this study, we assessed the contribution of the understory to canopy reflectance in a temperate broadleaf forest by comparing simulated bidirectional reflectance factor (BRF) differences between forest scenes with and without the understory. These scenes were reconstructed through voxel-based, boundary-based, and ellipsoid-based approaches, respectively, from multi-layered point cloud data acquired by combining unmanned aerial vehicle (UAV) and backpack laser scanning. The results show that the understory influences the simulated BRF under all three reconstruction approaches, suggesting that canopy reflectance signals can be used to evaluate understory information and providing a theoretical foundation for the feasibility of retrieving understory parameters via remote sensing. The understory increases BRF by 80% in shaded regions beneath the overstory in the red and NIR bands, and can increase BRF by 40% in the NIR band for voxel-based and ellipsoid-based forest scenes. Conversely, it reduces the simulated BRF over sunlit soil areas in the red band. Among the three reconstruction methods, canopy reflectance simulated with the boundary-based model consistently conveys the most understory information. Notably, the findings also indicate that canopy reflectance captures less understory vegetation information as the simulation resolution decreases: as the simulated resolution decreased from 1 m to 30 m, the absolute difference in the red band between the multi-layered BRF and the L50 BRF decreased from 23.93% to 10.22% with the boundary-based approach. This implies that higher-resolution remote sensing observations are more advantageous for retrieving understory parameters. This study provides a successful case of modeling multi-layered forest structure in natural temperate broadleaf forests and offers a theoretical reference for retrieving biochemical and biophysical information from the understory by remote sensing.
{"title":"Investigating the contribution of understory to radiative transfer simulations through reconstructing 3-D realistic temperate broadleaf forest scenes based on multi-platform laser scanning","authors":"Xiaohan Lin , Ainong Li , Jinhu Bian , Zhengjian Zhang , Xi Nan , Limin Chen , Yi Bai , Yi Deng , Siyuan Li","doi":"10.1016/j.srs.2025.100196","DOIUrl":"10.1016/j.srs.2025.100196","url":null,"abstract":"<div><div>Forests are complex, multi-layered ecosystems mainly comprising an overstory, understory, and soil. Radiative transfer simulations of these forests underpin the theoretical framework for retrieving forest parameters; however, the understory has often been neglected due to limitations in data acquisition technology. In this study, we assessed the contribution of the understory to canopy reflectance in a temperate broadleaf forest by comparing simulated bidirectional reflectance factor (BRF) differences between forest scenes with and without the understory. These scenes were reconstructed through voxel-based, boundary-based, and ellipsoid-based approaches respectively based on the multi-layered point cloud data acquired via combining unmanned aerial vehicle (UAV) and backpack laser scanning. The results show that the understory influences the simulated BRF across all three forest scene reconstruction approaches, suggesting that canopy reflectance signals can be used to evaluate the understory information, which provides a theoretical foundation for the feasibility of retrieving understory parameters via remote sensing. The understory increases BRF by 80% in shaded regions beneath the overstory in the red and NIR bands, and can increase BRF by 40% in the NIR band for voxel-based and ellipsoid-based forest scenes. Conversely, it reduces the simulated BRF in sunlit soil areas in the red band. Among the three forest reconstruction methods, the canopy reflectance simulation using the boundary-based model can consistently project the most understory information. Notably, the findings also indicate that the reflectance of the forest canopy definitely capture less understory vegetation information as the simulation resolution decreases, for instance, as the simulated resolution decreased from 1 m to 30 m, the absolute difference in the red band between the multi-layered BRF and L50 BRF decreased from 23.93% to 10.22% when using the boundary-based approach. It implies that higher resolution remote sensing observations are more advantageous for the retrieval of understory parameters. This study provides a successful case for modeling the multi-layered forest structure in natural temperate broadleaf forests, and even offers a theoretical reference for facilitating the retrieval of biochemical and biophysical information from the understory by remote sensing.</div></div>","PeriodicalId":101147,"journal":{"name":"Science of Remote Sensing","volume":"11 ","pages":"Article 100196"},"PeriodicalIF":5.7,"publicationDate":"2025-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143099577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using airborne LiDAR and enhanced-geolocated GEDI metrics to map structural traits over a Mediterranean forest
Aaron Cardenas-Martinez, Adrian Pascual, Emilia Guisado-Pintado, Victor Rodriguez-Galiano
Science of Remote Sensing 11, Article 100195. Pub Date: 2025-01-12. DOI: 10.1016/j.srs.2025.100195
The estimation of three-dimensional (3D) vegetation metrics from space-borne LiDAR makes it possible to capture spatio-temporal trends in forest ecosystems. Structural traits from the NASA Global Ecosystem Dynamics Investigation (GEDI) are vital to support forest monitoring, restoration, and biodiversity protection. The Mediterranean Basin is home to relict forest species that face the consequences of intensified climate change effects and whose habitats have been progressively shrinking over time. We used two sources of 3D structural metrics, airborne LiDAR point clouds and full-waveform space-borne LiDAR from GEDI, to estimate forest structure in a protected area of southern Spain that is home to relict species in jeopardy from recent extreme water-stress conditions. We locally calibrated GEDI spaceborne measurements using discrete point clouds collected by Airborne Laser Scanner (ALS) to adjust the geolocation of GEDI waveform metrics and to predict GEDI structural traits such as canopy height, foliage height diversity, and leaf area index. Our results showed significant improvements in the retrieval of ecological indicators when collocating ALS point clouds with comparable GEDI metrics. The best canopy height retrieval after collocation yielded an RMSE of 2.6 m when limited to forest-classified areas and flat terrain, compared with an RMSE of 3.4 m without collocation. Trends for foliage height diversity (FHD; RMSE = 2.1) and leaf area index (LAI; RMSE = 1.6 m²/m²) were less consistent than those for canopy height but confirmed the improvement derived from collocation. Wall-to-wall mapping of GEDI traits framed over ALS surveys is now available to monitor sparse Mediterranean mountain forests adequately. Our results showed that combining different LiDAR platforms is particularly important for mapping areas where access to in-situ data is limited, especially in regions with abrupt changes in vegetation cover, such as Mediterranean mountain forests.
{"title":"Using airborne LiDAR and enhanced-geolocated GEDI metrics to map structural traits over a Mediterranean forest","authors":"Aaron Cardenas-Martinez , Adrian Pascual , Emilia Guisado-Pintado , Victor Rodriguez-Galiano","doi":"10.1016/j.srs.2025.100195","DOIUrl":"10.1016/j.srs.2025.100195","url":null,"abstract":"<div><div>The estimation of three-dimensional (3D) vegetation metrics from space-borne LiDAR allows to capture spatio-temporal trends in forest ecosystems. Structural traits from the <span>NASA</span> <span>Global</span> Ecosystem Dynamics Investigation (GEDI) are vital to support forest monitoring, restoration and biodiversity protection. The Mediterranean Basin is home of relict forest species facing the consequences of intensified climate change effects and whose habitats have been progressively shrinking over time. We used two sources of 3D-structural metrics, LiDAR point clouds and full-waveform space-borne LiDAR from GEDI to estimate forest structure in a protected area of Southern Spain, home of relict species in jeopardy due to recent extreme water-stress conditions. We locally calibrated GEDI spaceborne measurements using discrete point clouds collected by Airborne Laser Scanner (ALS) to adjust the geolocation of GEDI waveform metrics and to predict GEDI structural traits such as canopy height, foliage height diversity or leaf area index. Our results showed significant improvements in the retrieval of ecological indicators when using data collocation between ALS point clouds and comparable GEDI metrics. The best results for canopy height retrieval after collocation yielded an RMSE of 2.6 m, when limited to forest-classified areas and flat terrain, compared to an RMSE of 3.4 m without collocation. Trends for foliage height diversity (FHD; RMSE = 2.1) and leaf area index (LAI; RMSE = 1.6 m<sup>2</sup>/m<sup>2</sup>) were less consistent than those for canopy height but confirmed the enhancement derived from collocation. The wall-to-wall mapping of GEDI traits framed over ALS surveys is currently available to monitor Mediterranean sparse mountain forests with sufficiency. Our results showed that combining different LiDAR platforms is particularly important for mapping areas where access to in-situ data is limited and especially in regions with abrupt changes in vegetation cover, such as Mediterranean mountainous forests.</div></div>","PeriodicalId":101147,"journal":{"name":"Science of Remote Sensing","volume":"11 ","pages":"Article 100195"},"PeriodicalIF":5.7,"publicationDate":"2025-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143094977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}