{"title":"A novel framework for marine oil spill detection in SAR imagery fusing edge supervision enhancement and group attention mechanism","authors":"Xinrong Lyu , Haosha Su , Christos Grecos , Peng Ren","doi":"10.1016/j.rsase.2026.101901","DOIUrl":"10.1016/j.rsase.2026.101901","url":null,"abstract":"<div><div>Rapid and accurate detection of marine oil spills is crucial for environmental protection and emergency response. Synthetic Aperture Radar (SAR), a primary tool for sea surface oil spill monitoring, faces persistent challenges such as varying spill scales, blurred boundaries, and confusion with look-alike phenomena. To address these issues, this study proposes OilSeg-SARNet, a novel architecture tailored for SAR oil spill detection. The model incorporates a Group Convolutional Block Attention Module Enhancer to emphasize salient features and suppress background noise, an Atrous Spatial Pyramid Pooling module to capture multi-scale contextual information, and an improved Edge Supervision Enhancement Module to refine boundary representation and facilitate gradient propagation. These components work synergistically to enhance detection precision under complex marine conditions. Experimental results on the public SAR Oil Spill Detection Dataset demonstrate that OilSeg-SARNet achieves class-specific Intersection-over-Unions (IoUs) of 61.33%, 64.86%, and 45.10% for oil spill, look-alike, and ship categories, respectively, outperforming the best prior method by +0.85%, +3.73%, and +9.89%, respectively. The model attains an overall mean IoU (mIoU) of 72.22% and an F<span><math><msub><mrow></mrow><mrow><mn>1</mn></mrow></msub></math></span>-score of 79.33%. 
The proposed model surpasses existing methods with reduced complexity, offering a reliable and efficient framework for marine oil spill monitoring, thereby enhancing early detection and supporting timely environmental response.</div></div>","PeriodicalId":53227,"journal":{"name":"Remote Sensing Applications-Society and Environment","volume":"41 ","pages":"Article 101901"},"PeriodicalIF":4.5,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146077859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
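The class-specific IoU and mIoU figures reported above follow from a standard confusion-matrix computation, sketched below in Python. The 3×3 matrix is a made-up example for the oil spill / look-alike / ship classes, not the paper's experimental results.

```python
# Per-class Intersection-over-Union (IoU) and mean IoU from a confusion
# matrix, as commonly used to score semantic segmentation models.
# Illustrative sketch only; the counts below are invented.

def class_iou(conf, k):
    """IoU for class k: TP / (TP + FP + FN) from a square confusion matrix."""
    tp = conf[k][k]
    fp = sum(conf[i][k] for i in range(len(conf)) if i != k)
    fn = sum(conf[k][j] for j in range(len(conf)) if j != k)
    denom = tp + fp + fn
    return tp / denom if denom else 0.0

def mean_iou(conf):
    return sum(class_iou(conf, k) for k in range(len(conf))) / len(conf)

# Hypothetical pixel counts (rows = ground truth, cols = predicted)
# for oil spill / look-alike / ship.
conf = [
    [80, 15, 5],
    [10, 85, 5],
    [5, 10, 85],
]
print([round(class_iou(conf, k), 3) for k in range(3)])  # per-class IoU
print(round(mean_iou(conf), 3))                          # mIoU
```

The same arithmetic, applied to the network's full validation confusion matrix, yields the per-class IoUs and the 72.22% mIoU quoted in the abstract.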
Pub Date: 2026-01-01 | Epub Date: 2025-12-19 | DOI: 10.1016/j.rsase.2025.101839
Sri Priyanka Kommula, Bharat Lohani, Dongryeol Ryu, Stephan Winter
{"title":"Improving micro rainwater harvesting site selection with high-resolution LiDAR DEMs: A GIS-based multi-criteria approach","authors":"Sri Priyanka Kommula , Bharat Lohani , Dongryeol Ryu , Stephan Winter","doi":"10.1016/j.rsase.2025.101839","DOIUrl":"10.1016/j.rsase.2025.101839","url":null,"abstract":"<div><div>Accurate identification of micro-surface rainwater harvesting (RWH) sites depends on the quality of topographic data. Commonly used DEMs such as SRTM, ASTER, and CartoDEM are limited by their coarse resolution and often fail to capture the fine-scale geomorphic features required for identifying these structures. Their radar- and optical-based acquisition methods also struggle in hilly and densely vegetated terrains, further restricting their ability to represent terrain accurately. To address these limitations, this study develops a GIS-based decision-support framework using the Analytical Hierarchy Process (AHP) to compare satellite-derived CartoDEM (30-m) with LiDAR DEMs at 30-m, 10-m, 5-m, and 1-m resolutions. Seven parameters — runoff, slope, land use/land cover, soil, lithology, flow accumulation, and geomorphology — were integrated to generate suitability maps for gabions and loose stone check dams. Validation against 116 expert-verified sites demonstrates that the 1-m LiDAR DEM achieves the highest performance (OA = 0.87, Precision = 0.98, Recall = 0.98), substantially outperforming CartoDEM (OA = 0.62). While discrepancies existed between CartoDEM and LiDAR DEM at 30-m resolution, these differences were not reflected in OA values, likely due to the limited validation dataset. High-resolution LiDAR DEMs significantly improve the delineation of slope and flow accumulation, enabling more reliable micro-RWH site identification. 
The proposed framework provides a practical and transferable method for watershed managers designing micro RWH structures in complex terrains.</div></div>","PeriodicalId":53227,"journal":{"name":"Remote Sensing Applications-Society and Environment","volume":"41 ","pages":"Article 101839"},"PeriodicalIF":4.5,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145791460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
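The AHP weighting step behind the suitability maps can be sketched as follows: pairwise criterion comparisons are normalized into priority weights, and a consistency ratio checks the judgments. The 3×3 matrix below (slope vs. runoff vs. soil) is hypothetical, not the study's seven-criterion matrix.

```python
# Minimal Analytical Hierarchy Process (AHP) sketch: priority weights via
# column normalization and row averaging, plus Saaty's consistency ratio.
# Example comparisons are invented for illustration.

def ahp_weights(m):
    """Approximate priority vector: normalize columns, then average rows."""
    n = len(m)
    col_sums = [sum(m[i][j] for i in range(n)) for j in range(n)]
    norm = [[m[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(row) / n for row in norm]

def consistency_ratio(m, w, ri=0.58):
    """CR = CI / RI; RI = 0.58 is Saaty's random index for n = 3."""
    n = len(m)
    mw = [sum(m[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(mw[i] / w[i] for i in range(n)) / n  # estimate of lambda_max
    ci = (lam - n) / (n - 1)
    return ci / ri

# Hypothetical pairwise judgments: slope twice as important as runoff, etc.
m = [[1, 2, 4],
     [1 / 2, 1, 2],
     [1 / 4, 1 / 2, 1]]
w = ahp_weights(m)
print([round(x, 3) for x in w])           # priority weights, sum to 1
print(round(consistency_ratio(m, w), 3))  # CR < 0.1 -> acceptably consistent
```

In the study's framework these weights would multiply the seven reclassified criterion rasters to produce the suitability map.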
Pub Date: 2026-01-01 | Epub Date: 2025-12-29 | DOI: 10.1016/j.rsase.2025.101857
Pericles Vale Alves, Vandoir Bourscheidt, Damaris Kirsch Pinheiro, Rodrigo Martins Moreira, Marcos André Braz Vaz, Mônica Santos, Marcos Antônio Lima Moura, Carlos Alexandre Santos Querino, Paula Regina Humbelino de Melo, Maria Adriana Moreira
{"title":"Seasonal patterns and atmospheric modulators of erythemal UV radiation in a sensitive region of the Brazilian Amazon: Implications for environmental health risk assessment","authors":"Pericles Vale Alves , Vandoir Bourscheidt , Damaris Kirsch Pinheiro , Rodrigo Martins Moreira , Marcos André Braz Vaz , Mônica Santos , Marcos Antônio Lima Moura , Carlos Alexandre Santos Querino , Paula Regina Humbelino de Melo , Maria Adriana Moreira","doi":"10.1016/j.rsase.2025.101857","DOIUrl":"10.1016/j.rsase.2025.101857","url":null,"abstract":"<div><div>Ultraviolet (UV) radiation is a critical environmental driver influencing ecological and human health, with its variability shaped by atmospheric factors and climate dynamics. This study examined the seasonal patterns and temporal trends of the erythemal UV radiation and key atmospheric variables in the Brazilian Amazon, using satellite remote sensing data from OMI/Aura and climate reanalysis data from CAMS spanning 2005 to 2022. Temporal trends were assessed using robust statistical approaches, while the relative influence of atmospheric drivers on erythemal UV variability was quantified using SHAP (Shapley Additive Explanations). A Susceptibility Index (SI) for UV-related health risks was developed, integrating biological, behavioral, and socioeconomic dimensions. Results revealed a distinct seasonal erythemal UV cycle, with peaks from January to April and lows from June to August, maintaining predominantly “very high” to “extreme” levels year-round. Statistically significant trends were observed in cloud optical thickness (COT) and total ozone column (TOC), while SHAP analysis indicated that variables such as water vapor (through its association with cloud processes), aerosols, and TOC emerged as primary predictors of surface UV, followed by PM<sub>2</sub>.<sub>5</sub> and PM<sub>10</sub>, thereby reinforcing the model's potential as a tool for environmental health risk assessment. 
The SI indicated moderate to high susceptibility among most individuals, strongly modulated by social inequalities and sun exposure habits. The empirical validation of the SI through estimated UV dose and Minimal Erythemal Dose (MED) exceedance supports its potential as a tool for environmental health risk monitoring. These findings underscore the importance of integrated strategies that consider atmospheric and social factors to mitigate UV-related health risks in tropical regions under climate change scenarios.</div></div>","PeriodicalId":53227,"journal":{"name":"Remote Sensing Applications-Society and Environment","volume":"41 ","pages":"Article 101857"},"PeriodicalIF":4.5,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145926037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
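The Shapley decomposition behind SHAP can be computed exactly for a tiny model by enumerating every feature coalition, as sketched below. The additive "model" and its water vapor / aerosol / ozone contributions are hypothetical; real SHAP libraries approximate this sum for large predictors like the one above.

```python
# Exact Shapley values by coalition enumeration (the quantity SHAP
# approximates). Feature contributions below are invented for illustration.
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley value of each feature under coalition value function value_fn."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for r in range(n):
            for coal in combinations(others, r):
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += weight * (value_fn(set(coal) | {f}) - value_fn(set(coal)))
        phi[f] = total
    return phi

# Hypothetical additive contributions to a UV prediction
# (water vapor, aerosol optical depth, total ozone column).
contrib = {"wv": 2.0, "aod": 1.0, "toc": -0.5}

def value_fn(coalition):
    return sum(contrib[f] for f in coalition)

phi = shapley_values(list(contrib), value_fn)
print({k: round(v, 3) for k, v in phi.items()})
# For an additive model, each Shapley value equals that feature's contribution.
```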
Pub Date: 2026-01-01 | Epub Date: 2025-12-12 | DOI: 10.1016/j.rsase.2025.101827
Mingyu Ouyang, Bowei Zeng, Guoru Huang
{"title":"A deep learning method for identifying waterlogging depth on urban roadways from surveillance camera images","authors":"Mingyu Ouyang , Bowei Zeng , Guoru Huang","doi":"10.1016/j.rsase.2025.101827","DOIUrl":"10.1016/j.rsase.2025.101827","url":null,"abstract":"<div><div>Rapid and precise waterlogging depth measurements in the context of urban floods are key in guiding the management of such flooding events. Traditional urban flooding monitoring methods are labor-intensive, expensive, and ineffective for comprehensive and timely monitoring. To overcome these limitations, we propose a method to detect the waterlogging depth on urban roads. In particular, the method integrates deep-learning and ellipse detection algorithms for the detection and segmentation of wheels from various vehicle types using Cascade Mask R-CNN. These detected wheels serve as reference objects for the waterlogging depth calculations. The geometric information on the submerged wheels is then obtained using the ellipse and minimum area rectangle detection algorithms. These parameters are subsequently employed to calculate the waterlogging depth on roads. The model was validated on a representative surveillance video site located in Dongying City, China. The model achieves an average bounding box precision and segmentation precision of over 97 % on the validation dataset. Following this, 246 validation samples were compared with manually measured depth. The absolute errors of all samples are below 0.1 m. 
The proposed method can facilitate the advancement of related studies and offer technical assistance in areas of urban waterlogging monitoring.</div></div>","PeriodicalId":53227,"journal":{"name":"Remote Sensing Applications-Society and Environment","volume":"41 ","pages":"Article 101827"},"PeriodicalIF":4.5,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145738456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
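The geometric core of the wheel-reference idea can be sketched as follows: once the full wheel diameter is known from the vehicle type and the exposed (above-water) wheel height is measured from the fitted ellipse, the submerged depth follows from circle geometry. The numbers are illustrative assumptions, not values from the paper.

```python
# Waterlogging depth from a partially submerged wheel of known diameter.
# Hypothetical sedan wheel; not the paper's calibration values.
import math

def waterlogging_depth(wheel_diameter_m, exposed_height_m):
    """Water depth = wheel diameter minus the exposed height of the wheel."""
    if not 0.0 <= exposed_height_m <= wheel_diameter_m:
        raise ValueError("exposed height must be within [0, wheel diameter]")
    return wheel_diameter_m - exposed_height_m

def chord_half_width(wheel_diameter_m, depth_m):
    """Half-width of the waterline chord across the wheel at a given depth."""
    r = wheel_diameter_m / 2.0
    h = r - depth_m  # signed distance from wheel centre down to the waterline
    return math.sqrt(max(r * r - h * h, 0.0))

d = waterlogging_depth(0.66, 0.46)  # 0.66 m wheel, 0.46 m visible above water
print(round(d, 2))                       # 0.2 m of standing water
print(round(chord_half_width(0.66, d), 3))
```

The chord half-width is the quantity that, in image space, is observable as the flat base of the visible wheel segment, which is why the ellipse and minimum-area-rectangle fits suffice to recover depth.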
Pub Date: 2026-01-01 | Epub Date: 2026-01-27 | DOI: 10.1016/j.rsase.2026.101897
Gabriela Reyes-Palomeque, Juan Andrés-Mauricio, Luis A. Hernández-Martínez, Victor Peña-Lara, Fernando Tun-Dzul, José Luis Hernández-Stefanoni
{"title":"Enhancing aboveground biomass estimation in tropical dry forests with GEDI, Sentinel-1/2 and national Forest inventory data","authors":"Gabriela Reyes-Palomeque , Juan Andrés-Mauricio , Luis A. Hernández-Martínez , Victor Peña-Lara , Fernando Tun-Dzul , José Luis Hernández-Stefanoni","doi":"10.1016/j.rsase.2026.101897","DOIUrl":"10.1016/j.rsase.2026.101897","url":null,"abstract":"<div><div>High-accuracy aboveground biomass maps are essential for describing tropical dry forests (TDFs), guiding sustainable management, and enhancing conservation efforts. In this study, an aboveground biomass density (AGBD) map was generated through a two-stage approach. In the first stage, AGBD was estimated at the footprint level using the National Forest Inventory (NFI) and GEDI LiDAR metrics between 2019 and 2020. In addition, NFI data were corrected to include small trees and to account for temporal differences between the years of field data collection and GEDI data acquisition. In the second stage, the footprint-level AGBD estimates were linked with Sentinel-1 and Sentinel-2 imagery to produce a continuous biomass map across the Yucatán Peninsula. The results show that the corrections improved the AGBD estimates (R<sup>2</sup> = 0.38 and %RMSE = 34.8) compared to the uncorrected data and also were superior (R<sup>2</sup> = 0.41, %RMSE = 36.4) compared to the GEDI L4A product (R<sup>2</sup> = 0.07, %RMSE = 87.9). In the second stage, the validation model showed good accuracy, with an R<sup>2</sup> of 0.52 and %RMSE of 24.1, outperforming other studies that report R<sup>2</sup> values between 0.26 and 0.28, and %RMSE between 30.79 and 62.02. This study presents an approach that improves AGBD maps in tropical dry forests. 
It highlights the value of ecological knowledge in correcting errors in AGBD estimation at the plot level and in addressing discrepancies between field data and remote sensing, as well as the use of GEDI data to increase sample size and model accuracy for AGBD.</div></div>","PeriodicalId":53227,"journal":{"name":"Remote Sensing Applications-Society and Environment","volume":"41 ","pages":"Article 101897"},"PeriodicalIF":4.5,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147395830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
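The accuracy figures used throughout the record above (R² and %RMSE) can be reproduced from paired observed/predicted AGBD values as follows; the six sample pairs are invented for illustration only.

```python
# R^2 and percent RMSE for observed vs. predicted aboveground biomass
# density. Sample values (Mg/ha) are hypothetical.
import math

def r_squared(obs, pred):
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def pct_rmse(obs, pred):
    """RMSE expressed as a percentage of the observed mean."""
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))
    return 100.0 * rmse / (sum(obs) / len(obs))

obs = [40.0, 55.0, 70.0, 90.0, 110.0, 130.0]
pred = [48.0, 50.0, 75.0, 85.0, 118.0, 122.0]
print(round(r_squared(obs, pred), 3))
print(round(pct_rmse(obs, pred), 1))
```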
{"title":"Crop flood damage assessment integrating Sentinel-2 imagery and in situ data: the 2023 Emilia-Romagna case","authors":"Filippo Bocchino , Valeria Belloni , Roberta Ravanelli , Camillo Zaccarini , Mattia Crespi , Roderik Lindenbergh","doi":"10.1016/j.rsase.2025.101852","DOIUrl":"10.1016/j.rsase.2025.101852","url":null,"abstract":"<div><div>Floods are among the most severe consequences of climate change, causing significant damage across several sectors, including agriculture. Nevertheless, the assessment of agricultural flood damage remains limited, particularly in agriculturally intensive regions where timely support is crucial. This work proposes a data-driven approach for assessing crop flood damage through a machine learning classification framework applied to features derived from Earth Observation (EO) data, trained and tested on field-level damage data collected by agronomists. Specifically, we applied a Random Forest model to classify fields into three damage classes by integrating Sentinel-2–derived indices, topographic information, and flood extent maps. The analysis focused on the flood event that struck the Emilia-Romagna region (Italy) in May 2023, one of the costliest floods globally that year. The model was trained and tested on 412 fields, achieving an overall accuracy of 0.74, with precision, recall, and F1 score of 0.75, 0.74, and 0.74, each with a standard deviation of 0.04, indicating stable model performance. The model accurately identified high-damage fields, which were characterized by greater flood exposure, lower elevations, and pronounced declines in vegetation indices. However, it struggled to distinguish between no-damage and medium-damage fields, particularly for permanent crops, where damage often occurs beneath the canopy and flooded areas may be partially occluded. The main novelty of this work lies in the use of in situ crop damage assessments, enabling a data-driven estimation of flood impacts. 
These results have direct implications for policymakers: the framework relies on free EO data, providing a tool that can support post-event compensation and decision-making in flood-prone regions.</div></div>","PeriodicalId":53227,"journal":{"name":"Remote Sensing Applications-Society and Environment","volume":"41 ","pages":"Article 101852"},"PeriodicalIF":4.5,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145977181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
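The evaluation reported above (per-class precision/recall and an averaged F1 over three damage classes) reduces to confusion-matrix arithmetic, sketched below. The matrix is a hypothetical example, not the study's 412-field result.

```python
# Per-class precision/recall/F1 and macro F1 for a three-class damage
# classifier (no / medium / high damage). Counts are invented.

def precision_recall_f1(conf, k):
    tp = conf[k][k]
    fp = sum(conf[i][k] for i in range(len(conf))) - tp
    fn = sum(conf[k]) - tp
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def macro_f1(conf):
    scores = [precision_recall_f1(conf, k)[2] for k in range(len(conf))]
    return sum(scores) / len(scores)

# Rows = agronomist-assessed label, columns = model prediction.
conf = [[30, 8, 2],   # no damage
        [6, 20, 4],   # medium damage
        [1, 4, 25]]   # high damage
print(round(macro_f1(conf), 3))
```

Note how most confusion in this toy matrix sits between the no-damage and medium-damage rows, mirroring the failure mode the abstract describes for sub-canopy damage.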
Pub Date: 2026-01-01 | Epub Date: 2026-01-05 | DOI: 10.1016/j.rsase.2025.101864
Sakar Dhakal, Kamal Raj Aryal, Uttam Babu Shrestha, Hari Adhikari
{"title":"Improving forest loss mapping in Nepal using LandTrendr time-series and machine learning","authors":"Sakar Dhakal , Kamal Raj Aryal , Uttam Babu Shrestha , Hari Adhikari","doi":"10.1016/j.rsase.2025.101864","DOIUrl":"10.1016/j.rsase.2025.101864","url":null,"abstract":"<div><div>Understanding forest disturbances is essential for effective conservation strategies. Given Nepal's complex geography and forest ecology, change detection using Remote Sensing remains challenging, with limited time-series studies. This study introduces an enhanced LandTrendr (LT) workflow to improve forest loss mapping using medium-resolution imagery and machine learning. The approach includes: a) a Vision Transformers model (LiteForest-ViT) for semi-automated forest cover mask using Landsat 5, b) masking terrain shadows, c) ensemble of 7 spectral indices: NBR (Normalized Burn Ratio), NDVI (Normalized Difference Vegetation Index), TCA (Tasseled Cap Angle), TCB (Tasseled Cap Brightness), TCG (Tasseled Cap Greenness), EVI (Enhanced Vegetation Index), TCW (Tasseled Cap Wetness) with 6 LT-derived metrics for Random Forest (RF) and eXtreme Gradient Boosting (XGBoost) classification, d) expert-weighted district-level model selection tailored to regional heterogeneity, e) integration of multiple platforms for seamless processing, and f) MODIS-derived snow uncertainty loss estimation. The study spans (1995–2024) across Karnali, Bagmati, and Darchula. Results indicate RF edged XGBoost in the High Mountains and Himalayas, while XGBoost did better in the Siwalik and Middle Mountains. NBR was the most influential index regardless of model classifier and region. The algorithm achieved 0.90 overall accuracy, 0.74 kappa statistics, and 0.93 F1-score, exceeding GFC (Global Forest Change) and REDD + AI (CTrees) benchmarks. Overall, 7870 ha of forest loss were detected, where ∼165 ha accounted for snow-impacted uncertain loss. 
While loss has decreased, continued disturbance underscores the significance of our findings to support REDD+ (Reducing Emissions from Deforestation and Forest Degradation) in the region.</div></div>","PeriodicalId":53227,"journal":{"name":"Remote Sensing Applications-Society and Environment","volume":"41 ","pages":"Article 101864"},"PeriodicalIF":4.5,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145977876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
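Two of the spectral indices in the ensemble above (NBR, the most influential, and NDVI) are simple normalized-difference band ratios, sketched below. The reflectance values are hypothetical Landsat surface reflectances, not pixels from the study.

```python
# NBR and NDVI from surface reflectance; both drop after forest loss,
# which is the signal LandTrendr segments over time. Band values invented.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def nbr(nir, swir2):
    """Normalized Burn Ratio; declines sharply after disturbance."""
    return (nir - swir2) / (nir + swir2)

healthy = {"red": 0.05, "nir": 0.45, "swir2": 0.10}
disturbed = {"red": 0.15, "nir": 0.20, "swir2": 0.25}

for label, px in (("healthy", healthy), ("disturbed", disturbed)):
    print(label,
          round(ndvi(px["nir"], px["red"]), 3),
          round(nbr(px["nir"], px["swir2"]), 3))
```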
Pub Date: 2026-01-01 | Epub Date: 2025-12-05 | DOI: 10.1016/j.rsase.2025.101818
Okikiola Michael Alegbeleye, Arjan Johan Herman Meddens, Yetunde Oladepe Rotimi, Kelechi Godwin Ibeh
Individual tree data in urban settings are used for many purposes, and gathering such information requires time and other limited resources. Additionally, the data collected are spatially and temporally sparse, especially for continuous monitoring. However, high-resolution images and deep learning can offer automated and accurate detection of trees in complex urban settings. Therefore, this study compared four popular convolutional neural network (CNN)-based object detection models (You Only Look Once v3, RetinaNet, Mask R-CNN, and Faster R-CNN) to map individual trees. We used high-resolution aerial imagery (∼8 cm spatial resolution), which was manually annotated to derive training (4,859) and testing (1,184) datasets. The analysis was carried out in three phases: First, we trained all the models for 20 epochs and evaluated the performance using standard metrics (Precision, Recall, and F1 score). Second, the best model was selected and retrained longer (30 epochs) with more data (5,002 annotations) to develop an urban tree crown detection model for Pullman – a small-sized city in the inland northwest of the United States. Finally, we tested the reliability of the developed model under two scenarios. According to our analysis, YOLOv3 (F1 score: 69 %) outperformed Mask R-CNN (F1 score: 60 %), RetinaNet (F1 score: 57 %), and Faster R-CNN (F1 score: 52 %). Based on the evaluation metrics and visual assessment, YOLOv3 was selected to develop the final urban tree crown detector – Pullman Tree Crown Network (PTCNet) – for our study area. PTCNet had precision and recall values of 78 % and 62 %, respectively. It also performed well under different tree arrangements, achieving an F1 score of over 70 %. The model was used to generate ∼12,000 individual tree locations. Subsequently, height information was extracted from a LiDAR-derived canopy height model, and a comprehensive tree inventory dataset was derived.
The model and dataset are publicly available (https://github.com/Okikiola-Michael/PTCNet) for different applications, thus contributing to open science. This study provides a straightforward and repeatable framework for researchers and managers to map urban trees with height information, which is useful for spatial and temporal tree monitoring. It further highlights the performance of four popular models and supports the application of deep learning and aerial imagery for individual tree detection in complex urban settings.
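The precision, recall, and F1 scores reported above all derive from the same three detection counts: true positives (matched detections), false positives (spurious detections), and false negatives (missed trees). A minimal sketch of that computation follows; the example counts are illustrative, not the study's actual tallies:

```python
def detection_scores(tp, fp, fn):
    """Precision, recall, and F1 from matched detection counts.

    tp: detections matched to a ground-truth tree crown
    fp: detections with no matching ground-truth crown
    fn: ground-truth crowns with no matching detection
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    f1 = (2 * precision * recall / denom) if denom else 0.0
    return precision, recall, f1


# Hypothetical counts chosen only to illustrate the metric relationship:
# precision ≈ 0.78 and recall ≈ 0.62 yield F1 ≈ 0.69.
p, r, f = detection_scores(tp=780, fp=220, fn=478)
```

Note that F1 is the harmonic mean of precision and recall, so a detector with balanced errors scores higher than one trading many false positives for a few extra matches.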
{"title":"Urban tree crown detection based on deep learning and high-resolution aerial imagery: PTCNet for Pullman, WA, USA","authors":"Okikiola Michael Alegbeleye, Arjan Johan Herman Meddens, Yetunde Oladepe Rotimi, Kelechi Godwin Ibeh","doi":"10.1016/j.rsase.2025.101818","DOIUrl":"10.1016/j.rsase.2025.101818","url":null,"abstract":"<div><div>Individual tree data in urban settings are used for many purposes, and gathering such information requires time and other limited resources. Additionally, the data collected are spatially and temporally sparse, especially for continuous monitoring. However, high-resolution images and deep learning can offer automated and accurate detection of trees in complex urban settings. Therefore, this study compared four popular convolutional neural network CNN-based object detection models (You Only Look Once v3, RetinanNet, Mask R-CNN, and Faster R-CNN) to map individual trees. We used high-resolution aerial imagery (∼8 cm spatial resolution), which was manually annotated to derive training (4,859) and testing (1,184) datasets. The analysis was carried out in three phases: First, we trained all the models for 20 epochs and evaluated the performance using standard metrics (Precision, Recall, and F1 score). Second, the best model was selected and retrained longer (30 epochs) with more data (5002 annotations) to develop an urban tree crown detection model for Pullman – a small-sized city in the inland northwest of the United States. Finally, we tested the reliability of the developed model under two scenarios. According to our analysis, YOLOv3 (F1 score: 69 %) outperformed Mask R-CNN (F1 score: 60 %), RetinaNet (F1 score: 57 %), and Faster R-CNN (F1 score: 52 %). Based on the evaluation metrics and visual assessment, YOLOv3 was selected to develop the final urban tree crown detector – Pullman Tree Crown Network (PTCNet), for our study area. PTCNet had precision and recall values of 78 % and 62 %, respectively. 
It also performed well under different tree arrangements, achieving an F1 score of over 70 %. The model was used to generate ∼12,000 individual tree locations. Subsequently, height information was extracted from a LiDAR-derived canopy height model, and a comprehensive tree inventory dataset was derived. The model and dataset are publicly available (<span><span>https://github.com/Okikiola-Michael/PTCNet</span><svg><path></path></svg></span>) for different applications, thus, contributing to open science. This study provides a straightforward and repeatable framework for researchers and managers to map urban trees with height information, which is useful for spatial and temporal tree monitoring. This study further highlights the performance of four popular models and supports the application of deep learning and aerial imagery for individual tree detection in complex urban settings.</div></div>","PeriodicalId":53227,"journal":{"name":"Remote Sensing Applications-Society and Environment","volume":"41 ","pages":"Article 101818"},"PeriodicalIF":4.5,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145738454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-01 · Epub Date: 2026-01-17 · DOI: 10.1016/j.rsase.2026.101877
Shabarinath S. Nair , Josef Wagner , Sergii Skakun , Yuval Sadeh , Manav Gupta , Thomas Lampert , Mehdi Hosseini , Saeed Khabbazan , Sheila Baber , Blake Munshell , Fangjie Li , Abhishek Kotcharlakota , Oleksandra Oliinyk , Danylo Poliakov , Erik Duncan , Inbal Becker-Reshef
The full-scale invasion of Ukraine on 24 February 2022 resulted in widespread disruption to its agricultural system. As winter crops were already planted in late 2021, this led to uncertainty regarding whether all the planted fields would be harvested. Monitoring the harvest status was therefore essential for reliable production estimates. As ground-based assessments were no longer feasible in conflict-affected areas, we relied on remote sensing techniques. We developed a method to monitor crop harvest status in-season with the capability to detect fields that were not-harvested.
We monitored harvest from 13 June 2022 until 19 September 2022 and found that 94.1% and 87.5% of planted winter crops were harvested in government-controlled and temporarily occupied regions, respectively. The highest intensity of not-harvested fields was observed along the occupation boundary. Validation using visually interpreted high-temporal-frequency Planet imagery yielded an overall accuracy of 85%, with an F1-score of 90% for the harvested class and 73% for the not-harvested class.
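The abstract does not specify the unsupervised change-detection rule, but harvest typically appears in optical time series as a sharp vegetation-index collapse to bare-soil levels. The sketch below is one plausible, simplified version of such a rule on per-field NDVI series; the function name, thresholds, and array layout are assumptions for illustration only:

```python
import numpy as np

def harvest_flags(ndvi_series, drop_threshold=0.3, floor=0.35):
    """Flag fields as harvested from per-field NDVI time series.

    ndvi_series: (n_fields, n_dates) array of NDVI observations, ordered in time.
    A field is flagged as harvested if any date-to-date NDVI decline exceeds
    drop_threshold AND the post-drop value falls below `floor` (i.e. the
    vegetation signal has collapsed toward bare soil, not just dipped).
    Fields never showing such a collapse remain 'not-harvested'.
    """
    drops = ndvi_series[:, :-1] - ndvi_series[:, 1:]   # positive = decline
    low_after = ndvi_series[:, 1:] < floor             # post-drop is bare-soil-like
    return np.any((drops > drop_threshold) & low_after, axis=1)


# Two toy fields: the first collapses mid-season (harvested),
# the second stays green throughout (not-harvested).
series = np.array([[0.70, 0.75, 0.30, 0.25],
                   [0.60, 0.65, 0.70, 0.68]])
flags = harvest_flags(series)
```

Requiring both a large drop and a low post-drop value guards against flagging cloud-shadow dips or senescence as harvest, which matters when separating the harvested and not-harvested classes.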
{"title":"In-season winter crop harvest status monitoring in Ukraine for 2022 using unsupervised change detection","authors":"Shabarinath S. Nair , Josef Wagner , Sergii Skakun , Yuval Sadeh , Manav Gupta , Thomas Lampert , Mehdi Hosseini , Saeed Khabbazan , Sheila Baber , Blake Munshell , Fangjie Li , Abhishek Kotcharlakota , Oleksandra Oliinyk , Danylo Poliakov , Erik Duncan , Inbal Becker-Reshef","doi":"10.1016/j.rsase.2026.101877","DOIUrl":"10.1016/j.rsase.2026.101877","url":null,"abstract":"<div><div>The full-scale invasion of Ukraine on 24 February 2022 resulted in widespread disruption to its agricultural system. As winter crops were already planted in late 2021, this led to uncertainty regarding whether all the planted fields would be harvested. Monitoring the harvest status was therefore essential for reliable production estimates. As ground-based assessments were no longer feasible in conflict-affected areas, we relied on remote sensing techniques. We developed a method to monitor crop harvest status in-season with the capability to detect fields that were not-harvested.</div><div>We monitored harvest from 13 June 2022 until 19 September 2022 and found that 94.1% and 87.5% of planted winter crops were harvested in government controlled and temporarily occupied regions, respectively. The highest intensity of not-harvested fields was observed along the occupation boundary. 
Validation using visually interpreted high-temporal-frequency Planet imagery yielded an overall accuracy of 85%, with an F1-score of 90% for the harvested class and 73% for the not-harvested class.</div></div>","PeriodicalId":53227,"journal":{"name":"Remote Sensing Applications-Society and Environment","volume":"41 ","pages":"Article 101877"},"PeriodicalIF":4.5,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146022716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-01 · Epub Date: 2026-01-14 · DOI: 10.1016/j.rsase.2026.101871
Victor Igwe , Bahram Salehi , Mohammad Marjani , Nima Farhadi , Masoud Mahdianpari
Wetlands provide important ecosystem services, including water purification, flood regulation, carbon storage, and habitat for diverse species. Despite their importance, North America continues to see significant loss of wetlands due to development, agriculture, and climate-related pressures. These ongoing declines threaten ecological integrity and reduce the capacity of wetlands to provide essential services. As a result, regular updates and monitoring are essential to protect these important ecosystems, support evidence-based management, and meet the needs of evolving conservation policies. One cost-effective method for monitoring wetlands is the segmentation of satellite images. Automating the segmentation of remote sensing images to update land cover maps enhances the frequency of map production, enabling more timely and efficient monitoring. Deep learning models such as Convolutional Neural Networks (CNNs) have performed well for segmentation, but the need for large, densely annotated datasets has limited their adoption in remote sensing. Meeting this requirement poses substantial challenges for regular map updates because of the extensive number of labels, the complexity of annotations, and the significant time and financial resources required for field data collection campaigns. Therefore, this paper targets Minnesota's statewide wetland monitoring by training deep CNNs with weak labels extracted from existing thematic products. Our approach obtains training samples from existing land-cover maps by applying change detection to identify stable pixels and then refining labels with objects produced by the Simple Non-Iterative Clustering (SNIC) algorithm. The resulting weakly labeled samples are used to train and evaluate U-Net++ and DeepLabV3+ architectures. 
The proposed method achieved robust performance in the study area, with an average F1-score of 91.3 % for U-Net++ across seven analyzed classes, compared to 90.6 % for DeepLabV3+ and 88.3 % for Random Forest (RF). Notably, U-Net++ outperformed the other models, indicating that dense skip connections effectively classify complex remote-sensing scenes such as wetlands. These results demonstrate the effectiveness of the proposed weak-label-driven deep learning workflow for large-scale wetland-inventory mapping in Minnesota, while remaining generalizable to other land-cover classification problems.
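The weak-labeling idea described above (keep only pixels whose class is stable across map dates, then regularize labels to segment objects) can be sketched as follows. This is a simplified reading of the workflow, not the authors' implementation: the stability test, the majority-vote rule, and the `ignore` value are illustrative assumptions, and the segment map stands in for SNIC superpixels:

```python
import numpy as np

def weak_labels(map_t1, map_t2, segments, ignore=255):
    """Derive weak training labels from two land-cover maps and a segmentation.

    map_t1, map_t2: integer class maps from two dates, same shape.
    segments:       integer segment IDs (e.g. SNIC superpixels), same shape.
    Pixels whose class agrees between the two dates are treated as 'stable';
    each segment is assigned the majority class among its stable pixels,
    or `ignore` if it contains no stable pixels at all.
    """
    stable = map_t1 == map_t2
    labels = np.full(map_t1.shape, ignore, dtype=map_t1.dtype)
    for seg_id in np.unique(segments):
        mask = (segments == seg_id) & stable
        if mask.any():
            vals, counts = np.unique(map_t1[mask], return_counts=True)
            labels[segments == seg_id] = vals[np.argmax(counts)]
    return labels


# Toy 2x2 example: one pixel changes class between dates, so its segment's
# label is taken from the segment's remaining stable pixel.
m1 = np.array([[1, 1], [2, 3]])
m2 = np.array([[1, 1], [2, 4]])
segs = np.array([[0, 0], [1, 1]])
labels = weak_labels(m1, m2, segs)
```

Voting at the segment level rather than keeping raw pixel agreement is what makes the labels spatially coherent enough to supervise a dense segmentation network.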
{"title":"Cost-effective statewide wetland inventory update using weakly supervised deep learning: A case study in Minnesota, USA","authors":"Victor Igwe , Bahram Salehi , Mohammad Marjani , Nima Farhadi , Masoud Mahdianpari","doi":"10.1016/j.rsase.2026.101871","DOIUrl":"10.1016/j.rsase.2026.101871","url":null,"abstract":"<div><div>Wetlands provide important ecosystem services, including water purification, flood regulation, carbon storage, and habitat for diverse species. Despite their importance, North America continues to see significant loss of wetlands due to development, agriculture, and climate-related pressures. These ongoing declines threaten ecological integrity and reduce the capacity of wetlands to provide essential services. As a result, regular updates and monitoring are essential to protect these important ecosystems to support evidence-based management and meet the needs of evolving conservation policies. One cost-effective method for monitoring wetlands is the segmentation of satellite images. Automating the segmentation of remote sensing images to update land cover maps enhances the frequency of map production, enabling more timely and efficient monitoring. Deep learning models such as Convolutional Neural Networks (CNNs) have performed well for segmentation. Still, the need for large, densely annotated datasets has limited their adoption in remote sensing. Meeting this requirement poses substantial challenges for regular map updates because of the extensive number of labels, the complexity of annotations, and the significant time and financial resources required for field data collection campaigns. Therefore, this paper targets Minnesota's state-wide wetland monitoring by training deep CNNs with weak labels extracted from existing thematic products. 
Our approach obtains training samples from existing land-cover maps by applying change detection to identify stable pixels and then refining labels with objects produced by the Simple Non-Iterative Clustering (SNIC) algorithm. The resulting weakly labeled samples are used to train and evaluate U-Net++ and DeepLabV3+ architectures. The proposed method achieved robust performance in the study area, with an average F1-score of 91.3 % for U-Net++ across seven analyzed classes, compared to 90.6 % for DeepLabV3+ and 88.3 % for Random Forest (RF). Notably, U-Net++ outperformed the other models, indicating that dense skip connections effectively classify complex remote-sensing scenes such as wetlands. These results demonstrate the effectiveness of the proposed weak-label-driven deep learning workflow for large-scale wetland-inventory mapping in Minnesota, while remaining generalizable to other land-cover classification problems.</div></div>","PeriodicalId":53227,"journal":{"name":"Remote Sensing Applications-Society and Environment","volume":"41 ","pages":"Article 101871"},"PeriodicalIF":4.5,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146022718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}