The number of in-situ stations measuring river discharge, one of the Essential Climate Variables (ECVs), is declining steadily, and numerous basins have never been gauged. With the aim of improving data availability worldwide, we propose an easily applicable and transferable approach that estimates reach-scale discharge solely from remote sensing data and is therefore suitable for filling gaps in the in-situ network. We combine 20 years of satellite altimetry observations with high-resolution satellite imagery via a hypsometric function to observe large portions of the reach-scale bathymetry. The high-resolution satellite images, which are classified using deep learning image segmentation, allow detection of small rivers (narrower than 100 m) and can capture small width variations. The unobserved part of the bathymetry is estimated using an empirical width-to-depth function. Combined with precise satellite-derived slope measurements, river discharge is calculated at multiple consecutive cross-sections within the reach. The unknown roughness coefficient is optimized by minimizing the discharge differences between the cross-sections. The approach requires only minimal input and approximate boundary conditions based on expert knowledge, and it does not depend on calibration. We provide realistic uncertainties, which are crucial for data assimilation, by accounting for errors and uncertainties in the different input quantities. The approach is applied globally to 27 river sections, yielding a median normalized root mean square error of 12% and a Nash–Sutcliffe model efficiency of 0.560. On average, the 90% uncertainty range includes 91% of the in-situ measurements.
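As a hedged illustration of the uncertainty treatment described above, the following minimal Python sketch propagates assumed input errors through Manning's equation by Monte Carlo sampling to obtain a 90% discharge range. The rectangular-channel geometry, the error magnitudes, and all variable names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: Manning discharge with Monte Carlo uncertainty propagation.
# Geometry, error magnitudes, and the roughness prior are assumed toy values.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Toy cross-section inputs, each perturbed by an assumed observation error.
W = rng.normal(85.0, 5.0, N)       # width (m), from image-derived river mask
d = rng.normal(3.2, 0.4, N)        # effective depth (m), hypsometric estimate
S = rng.normal(1.2e-4, 1.5e-5, N)  # water-surface slope (m/m), from altimetry
n = rng.uniform(0.025, 0.045, N)   # roughness, expert-knowledge prior range

A = W * d                          # flow area (rectangular approximation)
P = W + 2.0 * d                    # wetted perimeter
R = A / P                          # hydraulic radius
Q = (1.0 / n) * A * R ** (2.0 / 3.0) * np.sqrt(np.clip(S, 0.0, None))

q05, q50, q95 = np.percentile(Q, [5, 50, 95])
print(f"median Q = {q50:.0f} m^3/s, 90% range = [{q05:.0f}, {q95:.0f}] m^3/s")
```

Checking whether such a sampled 90% range covers independent in-situ measurements, as the abstract reports, is the natural validation of this kind of ensemble.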
Digital Elevation Models (DEMs) are pivotal in scientific research and engineering because they provide essential topographic and geomorphological information. Voids in DEM data result in the loss of terrain information, significantly limiting their applicability. Although spatial interpolation methods are frequently employed to address these voids, they suffer from accuracy degradation and struggle to reconstruct intricate terrain features. Generative Adversarial Network (GAN)-based approaches have emerged as promising solutions for enhancing elevation accuracy and reconstructing partial terrain features. Nonetheless, GAN-based methods exhibit limitations with specific void shapes, and their performance is susceptible to artifacts and elevation jumps around void boundaries. To address these shortcomings, we propose a terrain feature-guided diffusion model (TFDM) to fill DEM data voids. The training and inference processes of the diffusion model are constrained by terrain feature lines to ensure the stability of the generated DEM surface. The TFDM is distinguished by its ability to generate seamless DEM surfaces and maintain stable terrain contours under varying terrain conditions. Experiments were conducted to validate the applicability of TFDM using different DEMs, including the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM v3) and the TanDEM-X global DEM. The proposed TFDM algorithm and comparison methods such as DDPM, GAN, and Kriging were applied to a full test set of 271 DEM images covering different terrain environments. The mean absolute error (MAE) and root mean square error (RMSE) of the DEM restored by TFDM were 28.91 ± 9.45 m and 38.16 ± 13.00 m, respectively, while the MAE and RMSE of the comparison algorithms were at least 60.87 ± 26.24 m and 82.80 ± 36.51 m, respectively, validating the effectiveness of the TFDM algorithm in filling DEM voids. Profile analysis of local details indicates that the TFDM outperforms alternative methods in reconstructing terrain features, as confirmed through visual inspection and quantitative comparison. TFDM also exhibits versatility when applied to DEM data with diverse resolutions and produced using various measurement techniques.
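The core mechanism of diffusion-based void filling can be sketched as mask-guided sampling: at each reverse step, the observed (void-free) pixels are re-imposed at the matching noise level, so generation only affects the void. The minimal PyTorch sketch below illustrates this under strong assumptions: a toy linear noise schedule and an untrained convolution standing in for the trained denoising network; the paper's terrain-feature-line conditioning is not reproduced here.

```python
# Hedged sketch of mask-guided diffusion inpainting for DEM voids.
# Toy schedule and untrained stand-in denoiser; illustrative only.
import torch

T = 200
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
abar = torch.cumprod(alphas, dim=0)

def q_sample(x0, t, noise):
    """Forward diffusion: x_t = sqrt(abar_t) x0 + sqrt(1 - abar_t) eps."""
    return abar[t].sqrt() * x0 + (1 - abar[t]).sqrt() * noise

denoiser = torch.nn.Conv2d(1, 1, 3, padding=1)  # stand-in for a trained U-Net

@torch.no_grad()
def fill_voids(dem, mask):
    """dem: (1,1,H,W) normalized heights; mask: 1 where observed, 0 in void."""
    x = torch.randn_like(dem)
    for t in reversed(range(T)):
        eps_hat = denoiser(x)                        # predicted noise
        coef = betas[t] / (1 - abar[t]).sqrt()
        mean = (x - coef * eps_hat) / alphas[t].sqrt()
        x = mean + betas[t].sqrt() * torch.randn_like(x) if t > 0 else mean
        # Re-impose observed terrain at the matching noise level so that
        # only the void region is actually generated.
        known = q_sample(dem, t - 1, torch.randn_like(dem)) if t > 0 else dem
        x = mask * known + (1 - mask) * x
    return x

dem = torch.randn(1, 1, 64, 64)                    # toy normalized DEM tile
mask = torch.ones_like(dem)
mask[..., 20:40, 20:40] = 0                        # synthetic square void
filled = fill_voids(dem, mask)
print(filled.shape)
```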
The grounding line marks the transition between a glacier's floating and grounded parts and serves as a crucial parameter for monitoring sea level changes and assessing glacier retreat. The Differential Interferometric Synthetic Aperture Radar (DInSAR) technique for grounding line mapping currently requires the involvement of human experts, which becomes challenging with the continuously growing volume of grounding line data available for every Antarctic glacier. While a deep learning approach has recently been proposed for mapping grounding lines in C-band Sentinel-1 DInSAR data, its effectiveness has not been assessed on X-band COSMO-SkyMed DInSAR data. Similarly, the applicability of an analytical algorithm developed for X-band TerraSAR-X DInSAR data has not been evaluated on a large, diverse dataset. Here we apply both techniques to map grounding lines over a large X-band COSMO-SkyMed DInSAR dataset from 2020 to 2022, covering the Stancomb-Wills, Veststraumen, Jutulstraumen, Moscow University, and Rennick glaciers in Antarctica. We determine the strengths and limitations of each algorithm, compare their performance with manual mapping, and provide recommendations for choosing appropriate data processing methods for effective grounding line mapping. We also find that, since 1996, Moscow University Glacier's main trunk has been retreating at a rate of 340 ± 80 m/year, while the other four glaciers have experienced no retreat. Considering grounding zone widths, which represent the difference between the high- and low-tide grounding line positions during a tidal cycle, we detect a grounding zone of 9.7 km over Veststraumen Glacier, almost six times larger than the average grounding zone of the other four glaciers.
The Advanced Baseline Imager (ABI) sensors on the Geostationary Operational Environmental Satellite-R series (GOES-R) broaden the application of global vegetation monitoring due to their higher temporal (5–15 min) and appropriate spatial (0.5–1 km) resolution compared to previous geostationary and current polar-orbiting sensing systems. Notably, ABI Land Surface Phenology (LSP) quantification may be improved due to the greater availability of cloud-free observations compared to those from legacy GOES satellite generations and from polar-orbiting sensors such as the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Visible Infrared Imaging Radiometer Suite (VIIRS). Geostationary satellites sense a location with a fixed view geometry but changing solar geometry, and consequently capture pronounced temporal reflectance variations over anisotropic surfaces. These reflectance variations can be reduced by applying a Bidirectional Reflectance Distribution Function (BRDF) model to adjust or predict the reflectance for a new solar geometry and a fixed view geometry. Empirical and semi-empirical BRDF models perform less effectively when used to predict reflectance acquired at angles not found in the observations used to parameterize the model, or acquired under hot-spot sensing conditions when the solar and viewing directions coincide. Consequently, using a fixed solar geometry, or even the geometry at local solar noon, may introduce errors due to diurnal and seasonal variations in the position of the sun and the incidence of hot-spot sensing conditions. In this paper, a new solar geometry definition based on a Constant Scattering Angle (CSA) criterion is presented that, as we demonstrate, reduces the impacts of solar geometry changes on reflectance and on derived vegetation indices used for LSP quantification. The CSA criterion is used with the Ross-Thick-Li-Sparse (RTLS) BRDF model applied to North America ABI surface reflectance data acquired by GOES-16 (1 January 2018 to 31 December 2020) and GOES-17 (1 January 2019 to 31 December 2020) to normalize solar geometry BRDF effects and generate 3-day two-band Enhanced Vegetation Index (EVI2) time series. Compared to the local solar noon geometry, the CSA criterion is shown to reduce solar geometry artifacts in the reflectance and EVI2 time series. Further, a comparison with contemporaneous VIIRS NBAR (Nadir BRDF-Adjusted Reflectance) EVI2 time series is presented to illustrate the efficacy of the CSA criterion. Finally, the CSA-adjusted EVI2 time series are shown to produce LSP results that agree well with PhenoCam-based observations, with no obvious systematic bias in the onset dates of vegetation maturity, senescence, and dormancy, compared to the roughly 10-day bias found with local solar noon adjusted EVI2 time series.
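The geometry underlying a constant scattering angle criterion can be made concrete with the standard phase-angle relation, cos ξ = cos θs cos θv + sin θs sin θv cos φ, where θs and θv are the solar and view zenith angles and φ the relative azimuth. The Python sketch below computes ξ over a toy diurnal sweep of solar positions for a pixel with fixed view geometry and selects the solar geometry closest to a chosen constant angle; the target angle and all geometry values are assumptions for illustration, not the paper's operational settings.

```python
# Hedged sketch of selecting a solar geometry by a constant scattering
# angle. View geometry, sweep, and target angle are assumed toy values.
import numpy as np

def scattering_angle(sza, vza, raa):
    """Scattering (phase) angle in degrees from solar zenith, view zenith,
    and relative azimuth: cos xi = cos(sza)cos(vza) + sin(sza)sin(vza)cos(raa)."""
    sza, vza, raa = map(np.deg2rad, (sza, vza, raa))
    cos_xi = np.cos(sza) * np.cos(vza) + np.sin(sza) * np.sin(vza) * np.cos(raa)
    return np.rad2deg(np.arccos(np.clip(cos_xi, -1.0, 1.0)))

vza, vaa = 45.0, 210.0     # fixed per-pixel view geometry (assumed)
target_xi = 120.0          # assumed constant scattering angle target

# Toy diurnal sweep of candidate solar positions for one pixel and day.
szas = np.linspace(20.0, 70.0, 101)
saas = np.linspace(90.0, 270.0, 101)
xi = scattering_angle(szas, vza, saas - vaa)

best = int(np.argmin(np.abs(xi - target_xi)))
print(f"chosen solar geometry: sza = {szas[best]:.1f} deg, "
      f"saa = {saas[best]:.1f} deg, xi = {xi[best]:.1f} deg")
```

Because the scattering angle stays away from the hot-spot condition (ξ near 180° backscatter) by construction, the BRDF model is only asked to predict reflectance in a well-sampled angular regime.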
Precipitation and temperature are critical factors influencing terrestrial water storage (TWS), and compound dryness and high temperatures can lead to unexpected TWS losses. Yet, a dynamic assessment of the individual and combined effects of these conditions on TWS is lacking. This study proposes a framework to assess TWS loss driven by compound dry-hot conditions (CDHC) and dynamically evaluates risk probabilities and thresholds for 2003–2012 and 2013–2022. Results showed that CDHC exert a greater impact on TWS than dry or hot conditions alone. The risk probabilities of global TWS loss are higher in the late period than in the early period, with risk probabilities for light and extreme levels increasing by approximately 9–11 % and 2–7 %, respectively. Although the resilience of water resource systems to CDHC has increased in some regions, it still shows a decreasing trend at the global scale. The decrease in TWS resilience in major hyperarid areas is primarily influenced by temperature, whereas that in arid areas is primarily affected by precipitation. These distinct patterns may be the primary factors contributing to the exacerbation of global TWS loss. This study provides a novel approach for the dynamic assessment of global TWS under CDHC. The findings offer valuable insights for decision-makers developing adaptive strategies to mitigate future CDHC challenges.
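The comparison of individual versus compound effects can be sketched as a conditional-probability calculation: how often does a TWS loss co-occur with dry months, hot months, and compound dry-hot months? In the minimal Python sketch below, the percentile thresholds (precipitation below the 30th percentile, temperature above the 70th, TWS loss below the 20th) and the synthetic anomaly series are illustrative assumptions, not the study's definitions.

```python
# Hedged sketch: P(TWS loss | dry), P(TWS loss | hot), P(TWS loss | dry & hot)
# on synthetic monthly anomalies. Thresholds and data are assumed toy values.
import numpy as np

rng = np.random.default_rng(0)
n = 240                                     # 20 years of monthly anomalies (toy)
precip = rng.normal(size=n)
temp = -0.3 * precip + rng.normal(size=n)   # warm months tend to be dry (toy)
tws = 0.6 * precip - 0.4 * temp + rng.normal(scale=0.5, size=n)

dry = precip < np.percentile(precip, 30)
hot = temp > np.percentile(temp, 70)
loss = tws < np.percentile(tws, 20)         # "TWS loss": below 20th percentile

for name, cond in [("dry only", dry & ~hot),
                   ("hot only", hot & ~dry),
                   ("compound dry-hot", dry & hot)]:
    p = loss[cond].mean() if cond.any() else float("nan")
    print(f"P(TWS loss | {name}) = {p:.2f}  (n = {cond.sum()})")
```

Running the same calculation on two separate periods, as the study does for 2003–2012 and 2013–2022, turns this into a dynamic risk-probability comparison.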
Ongoing advances in satellite remote sensing data and machine learning methods have enabled crop yield estimation at various spatial and temporal resolutions. While yield mapping at broader scales (e.g., state or county level) has become common, mapping at finer scales (e.g., field or subfield) has been limited by the lack of ground truth data for model training and evaluation. Here we present a scale transfer framework, named Quantile loss Domain Adversarial Neural Networks (QDANN), that leverages knowledge from county-level datasets to map crop yields at the subfield level. Based on the strategy of unsupervised domain adaptation, QDANN is trained on labeled county-level data and unlabeled subfield-level data, with no requirement for yield information at the subfield level. We evaluate the proposed method, applied to Landsat imagery and Gridmet weather data, for maize, soybean, and winter wheat fields in the United States, using yield monitor records from roughly one million field-year observations as reference data. The model is compared with several process-based and machine learning-based benchmark approaches trained on simulated yield records or county-level data. QDANN-estimated yields achieved R2 scores (RMSE) of 48 % (2.29 t/ha), 32 % (0.85 t/ha), and 39 % (1.40 t/ha) for maize, soybean, and winter wheat, respectively, in comparison with the ground-based yield measures. These performances exceed those of the benchmark approaches and are nearly as good as those of models trained on field-level data. When aggregated to the county level, the improvement achieved by QDANN is more pronounced, with R2 scores (RMSE) improving to 78 % (0.98 t/ha), 62 % (0.37 t/ha), and 53 % (1.00 t/ha) for maize, soybean, and winter wheat, respectively. This study demonstrates that the proposed scale transfer framework can serve as a reliable approach for yield mapping at the subfield level when there is no access to fine-scale yield information. Based on the QDANN model, we have generated and made publicly available annual 30-m yield maps for major crop-producing states in the U.S. since 2008.
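The two ingredients the QDANN name points to, domain-adversarial training via a gradient reversal layer (Ganin et al.) and a quantile (pinball) loss on the labeled source domain, can be sketched compactly in PyTorch. The tiny network, tensor shapes, and single-step training loop below are illustrative assumptions, not the published architecture.

```python
# Hedged sketch: gradient reversal (domain-adversarial) + pinball loss.
# Network sizes and data are toy stand-ins for the county/subfield domains.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

def quantile_loss(pred, target, q=0.5):
    """Pinball loss; q = 0.5 is proportional to the mean absolute error."""
    err = target - pred
    return torch.mean(torch.maximum(q * err, (q - 1.0) * err))

feat = nn.Sequential(nn.Linear(16, 64), nn.ReLU())       # shared features
yield_head = nn.Linear(64, 1)                            # yield regressor
domain_head = nn.Linear(64, 1)                           # county vs subfield

x_src = torch.randn(32, 16); y_src = torch.randn(32, 1)  # labeled county data
x_tgt = torch.randn(32, 16)                              # unlabeled subfield data

f_src, f_tgt = feat(x_src), feat(x_tgt)
reg_loss = quantile_loss(yield_head(f_src), y_src)

# The domain classifier sees reversed gradients, pushing the features to be
# indistinguishable between county-level and subfield-level inputs.
f_all = GradReverse.apply(torch.cat([f_src, f_tgt]), 1.0)
d_lab = torch.cat([torch.zeros(32, 1), torch.ones(32, 1)])
dom_loss = nn.functional.binary_cross_entropy_with_logits(domain_head(f_all), d_lab)

(reg_loss + dom_loss).backward()
print(float(reg_loss), float(dom_loss))
```

The key property is that no subfield-level yield label enters either loss: the regressor is supervised only on the source domain, while the adversarial term aligns the feature distributions across scales.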
The frequency and intensity of global drought events are continuously increasing, posing an elevated risk of forest mortality worldwide. Accurately understanding the impact of drought on forests, particularly the distribution of drought-induced mortality, is crucial for scientifically characterizing global ecological drought. Atmospheric indicators and soil moisture are typically correlated with tree growth and influence tree water status and drought severity; however, they do not directly represent forest drought conditions. Optical vegetation indices reflect forest mortality but are affected by response delays, low temporal resolution, and cloud contamination. Therefore, the accuracy of current assessment methods for global drought-induced forest mortality, which are based on meteorological and vegetation variables, still needs improvement. To address this challenge, we utilized vegetation optical depth (VOD) data to characterize drought-driven changes in forest canopy moisture. VOD is a parameter that describes the transmissivity of vegetation in the microwave band and is closely related to forest water content and biomass, with longer wavelengths and greater penetration capability than visible and near-infrared remote sensing signals. We calculated the annual variation of VOD (ΔVOD) as a supplementary indicator to enhance the accuracy of monitoring and modeling global drought-induced forest mortality. We integrated VOD with vegetation indices, meteorological data, terrain, and other variables to construct a predictive model of drought-induced forest mortality and used this model to generate a series of global maps depicting drought-induced forest mortality. The results indicated that VOD-related variables contributed more to the mortality model than those based on vegetation or meteorological variables. Furthermore, ΔVOD exhibited a higher correlation with reference mortality rates than relative water content, the enhanced vegetation index, and climate water deficit. Notably, by validating the model fit against reference mortality rates, we found that incorporating ΔVOD into the model improved the accuracy of the global forest mortality map from R2 = 0.45 to R2 = 0.63. By optimizing the training points using a two-stage correlation threshold between ΔVOD and the reference mortality, map accuracy was further improved to R2 = 0.72. This study highlights the effectiveness of VOD, particularly ΔVOD, as a direct indicator of vegetation water content variation for predicting drought-induced forest mortality. The global forest mortality maps for 2014 to 2018 are of significant value for further analysis of forest carbon variations induced by extreme global drought events.
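The role of ΔVOD as a predictor can be illustrated with a minimal sketch: compute the annual change ΔVOD = VOD_y − VOD_{y−1} and feed it, together with an optical index and a climate variable, into a regressor for mortality rate. The random-forest choice, the synthetic data, and the variable names below are illustrative assumptions, not the study's exact model or inputs.

```python
# Hedged sketch: dVOD as one predictor of drought-induced mortality.
# Synthetic data and a generic random forest stand in for the real inputs/model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 2000
vod_prev = rng.uniform(0.4, 1.2, n)
vod_now = vod_prev + rng.normal(-0.02, 0.08, n)
dvod = vod_now - vod_prev                 # annual VOD change (canopy moisture)
evi = rng.uniform(0.1, 0.6, n)            # optical vegetation index (toy)
cwd = rng.normal(0.0, 1.0, n)             # climate water deficit (toy)

# Synthetic mortality: stronger where canopy moisture dropped (dvod < 0).
mortality = np.clip(-3.0 * dvod + 0.5 * cwd - evi + rng.normal(0, 0.3, n), 0, None)

X = np.column_stack([dvod, evi, cwd])
X_tr, X_te, y_tr, y_te = train_test_split(X, mortality, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print(f"R2 = {r2_score(y_te, model.predict(X_te)):.2f}")
print("feature importances (dVOD, EVI, CWD):", model.feature_importances_.round(2))
```

Inspecting the feature importances on real data is one simple way to quantify the contribution of ΔVOD relative to the vegetation and meteorological predictors, analogous to the comparison reported in the abstract.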
To enable future improvement of current leaf optical property models, more data incorporating a larger range of measured properties are needed. To this end, a dataset was collected that associates spectral measurements (ultraviolet, visible, and near infrared) with biochemical and biophysical properties of leaves. The leaves represented in this dataset were selected to provide a more comprehensive representation of both tree and agricultural species, as well as leaves with a wide variety of color (pigment) expression, surface characteristics, and stages in the leaf lifecycle. Extensive data were collected for each of the 290 leaf samples studied in this project, including multiple spectral measurement orientations and ranges, biochemical assessment, and biophysical assessment that has not previously been a focus of other leaf datasets. The methods and results associated with this dataset are described in this work.