
Latest publications in Science of Remote Sensing

DeepSARFlood: Rapid and automated SAR-based flood inundation mapping using vision transformer-based deep ensembles with uncertainty estimates
IF 5.7 Q1 ENVIRONMENTAL SCIENCES Pub Date : 2025-02-07 DOI: 10.1016/j.srs.2025.100203
Nirdesh Kumar Sharma , Manabendra Saharia
Rapid and automated flood inundation mapping is critical for disaster management. While optical satellites provide valuable data on flood extent and impact, their use is limited by cloud cover, limited vegetation penetration, and the inability to operate at night, making real-time flood assessments difficult. Synthetic Aperture Radar (SAR) satellites can overcome these limitations, allowing for high-resolution flood mapping. However, SAR data remains underutilized due to the limited availability of training data and a reliance on labor-intensive manual or semi-automated change detection methods. This study introduces a novel end-to-end methodology for generating SAR-based flood inundation maps by training deep learning models on weak flood labels generated from concurrent optical imagery. These labels are used to train deep learning models based on Convolutional Neural Network (CNN) and Vision Transformer (ViT) architectures, optimized through multitask learning and model soups. Additionally, we develop a novel gain algorithm to identify diverse ensemble members and estimate uncertainty through deep ensembles. Our results show that ViT-based and CNN-ViT hybrid architectures significantly outperform traditional CNN models, achieving a state-of-the-art Intersection over Union (IoU) score of 0.72 on the Sen1Floods11 test dataset, while also providing uncertainty quantification. These models have been integrated into an open-source, fully automated Python-based tool called DeepSARFlood, demonstrated for the Pakistan floods of 2022 and the Assam (India) floods of 2020. With its high accuracy, processing speed, and ability to estimate uncertainty, DeepSARFlood is optimized for real-time deployment, processing a 1° × 1° (12,100 km²) area in under 40 s, and will complement upcoming SAR missions such as NISAR and Sentinel-1C for flood mapping.
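The IoU score and deep-ensemble uncertainty named in the abstract can be illustrated with a minimal numpy sketch; the function names and array shapes below are our own assumptions, not taken from the paper's codebase.

```python
import numpy as np

def iou(pred, truth):
    """Intersection over Union for binary flood masks."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    # Two empty masks agree perfectly by convention.
    return float(inter / union) if union else 1.0

def ensemble_uncertainty(probs):
    """Per-pixel mean flood probability and spread across ensemble members.

    probs: array of shape (n_members, H, W) with each member's flood probability.
    The standard deviation across members is a simple uncertainty estimate.
    """
    probs = np.asarray(probs, dtype=float)
    return probs.mean(axis=0), probs.std(axis=0)
```

Pixels where the members disagree (high standard deviation) would be flagged as uncertain in a map product like the one the tool generates.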
Science of Remote Sensing, Volume 11, Article 100203.
Citations: 0
Improving aboveground biomass density mapping of arid and semi-arid vegetation by combining GEDI LiDAR, Sentinel-1/2 imagery and field data
IF 5.7 Q1 ENVIRONMENTAL SCIENCES Pub Date : 2025-02-06 DOI: 10.1016/j.srs.2025.100204
Luis A. Hernández-Martínez , Juan Manuel Dupuy-Rada , Alfonso Medel-Narváez , Carlos Portillo-Quintero , José Luis Hernández-Stefanoni
Accurate estimates of forest aboveground biomass density (AGBD) are essential to guide climate change mitigation strategies. NASA's Global Ecosystem Dynamics Investigation (GEDI) project delivers full-waveform LiDAR data and provides a unique opportunity to improve AGBD estimates. However, global GEDI estimates (GEDI-L4A) have some constraints, such as a lack of full coverage in AGBD maps and a scarcity of training data for some biomes, particularly in arid areas. Moreover, uncertainties remain about the type of GEDI footprint that best penetrates the canopy and yields accurate vegetation structure metrics. This study estimates forest biomass of arid and semi-arid zones in two stages. First, a model was fitted to predict AGBD by relating GEDI and field data from different vegetation types, including xeric shrubland. Second, different footprint qualities were evaluated, and their AGBD was related to images from the Sentinel-1 and -2 satellites to produce a wall-to-wall map of AGBD. The model fitted with field data and GEDI showed adequate performance (%RMSE = 45.0) and produced more accurate estimates than GEDI-L4A (%RMSE = 84.6). The wall-to-wall mapping model also performed well (%RMSE = 37.0) and substantially reduced the underestimation of AGBD in arid zones. This study highlights the advantages of fitting new models for AGBD estimation from GEDI and local field data, whose combination with satellite imagery yielded accurate wall-to-wall AGBD estimates at 10 m resolution. The results contribute new perspectives for improving the accuracy of AGBD estimates in arid zones, whose role in climate change mitigation may be markedly underestimated.
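The %RMSE figures quoted above are conventionally the RMSE normalized by the mean observed value; the abstract does not spell out the normalization, so the sketch below assumes that common definition.

```python
import numpy as np

def percent_rmse(observed, predicted):
    """RMSE expressed as a percentage of the mean observed value.

    Assumes the common convention %RMSE = 100 * RMSE / mean(observed);
    the paper may normalize differently (e.g. by the observed range).
    """
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return 100.0 * rmse / observed.mean()
```

Under this convention, a %RMSE of 45.0 means the typical prediction error is a little under half the mean observed biomass density.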
Science of Remote Sensing, Volume 11, Article 100204.
Citations: 0
Evaluating war-induced damage to agricultural land in the Gaza Strip since October 2023 using PlanetScope and SkySat imagery
IF 5.7 Q1 ENVIRONMENTAL SCIENCES Pub Date : 2025-02-01 DOI: 10.1016/j.srs.2025.100199
He Yin , Lina Eklund , Dimah Habash , Mazin B. Qumsiyeh , Jamon Van Den Hoek
The ongoing 2023 Israel-Hamas War has severe and far-reaching consequences for people, the economy, food security, and the environment. The immediate impacts of damage and destruction to cities and farms are apparent in widespread reporting and first-hand accounts from within the Gaza Strip. However, there has been no comprehensive assessment of the war's impacts on key Gazan agricultural land, which is vital both for immediate humanitarian concerns during the ongoing war and for long-term recovery. In the Gaza Strip, agriculture is arguably one of the most important land use systems. However, remote detection of damage to Gazan agriculture is challenged by diverse agronomic landscapes and small farm sizes. This study uses multi-resolution satellite imagery to monitor damage to tree crops and greenhouses, the most important agricultural land in the Gaza Strip. Our methodology involved several key steps. First, we generated a pre-war cropland map, distinguishing between tree crops (e.g., olives) and greenhouses, using a random forest (RF) model and the Segment Anything Model (SAM) on nominally 3-m PlanetScope and 50-cm Planet SkySat imagery obtained from 2022 to 2023. Second, we assessed war-related damage to tree crop fields using a harmonic model-based time series analysis of PlanetScope imagery. Third, we assessed damage to greenhouses by classifying PlanetScope imagery with a random forest model. We performed accuracy assessments on the resulting tree crop field damage map using 1,200 randomly sampled 3 × 3-m areas, and generated error-adjusted area estimates with a 95% confidence interval. To validate the greenhouse damage map, we used a random sampling-based analysis. We found that 64–70% of tree crop fields and 58% of greenhouses had been damaged by 27 September 2024, after almost one year of war in the Gaza Strip.
Agricultural land in Gaza City and North Gaza was the most heavily damaged, with 90% and 73% of tree crop fields damaged in the respective governorates. By the end of 2023, all greenhouses in North Gaza and Gaza City had been damaged. Our damage estimates broadly agree with those from UNOSAT but provide more detailed and accurate information, such as the timing of the damage and fine-scale changes. Our results attest to the severe impacts of the Israel-Hamas War on Gaza's agricultural sector, with direct relevance for food security and economic recovery needs. Due to the rapid progression of the war, we have made the latest damage maps and area estimates available on GitHub (https://github.com/hyinhe/Gaza).
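The harmonic model-based time series step can be sketched as an ordinary least-squares fit of a seasonal baseline to pre-war observations; large residuals of later observations against this baseline would then flag abrupt change such as damage. This is a minimal illustration, not the authors' implementation (the fitted period and number of harmonics are assumptions).

```python
import numpy as np

def fit_harmonic(t_days, y, period=365.25):
    """Least-squares fit of y ~ a0 + a1*cos(w*t) + b1*sin(w*t), w = 2*pi/period.

    Returns the coefficients (a0, a1, b1) of a one-harmonic seasonal model.
    """
    t = np.asarray(t_days, dtype=float)
    w = 2.0 * np.pi / period
    # Design matrix: intercept, cosine, sine terms.
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return coef

def predict_harmonic(t_days, coef, period=365.25):
    """Evaluate the fitted seasonal baseline at new times."""
    t = np.asarray(t_days, dtype=float)
    w = 2.0 * np.pi / period
    return coef[0] + coef[1] * np.cos(w * t) + coef[2] * np.sin(w * t)
```

In a damage-detection workflow, one would fit the baseline on pre-war vegetation index values per pixel and flag pixels whose post-war observations fall persistently below the prediction.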
Science of Remote Sensing, Volume 11, Article 100199.
Citations: 0
Volatility characteristics and hyperspectral-based detection models of diesel in soils
IF 5.7 Q1 ENVIRONMENTAL SCIENCES Pub Date : 2025-02-01 DOI: 10.1016/j.srs.2025.100201
Jihye Shin , Jaehyung Yu , Jihee Seo , Lei Wang , Hyun-Cheol Kim
This study developed an efficient method for detecting diesel content in soils with spectral indices using a hyperspectral camera. Over the 70-day experiment, clean soils were saturated with diesel, and 186 measurements were taken to monitor the evaporation rate and spectral variation. Diesel evaporation followed a logarithmic pattern, with volatility decreasing from 1.57% per day in the initial period to 0.06% per day in the late period. Using the hull-quotient reflectance at 2236 nm, the diesel content prediction model derived from stepwise multiple linear regression (SMLR) achieved satisfactory accuracy with statistical significance (R² = 0.89, RPD = 2.52). This spectral band visualized diesel presence well in hyperspectral images, as it captures variations in two absorptions (CH/AlOH and CH) concurrently. Additionally, this study presented an age estimation model based on the diesel evaporation rate using the same spectral band. Given that this study is based on the largest number of samples and the longest observation period to date, and that the models were developed excluding atmospheric absorption bands, the simple form of the spectral index makes it applicable to large-scale diesel pollution detection with hyperspectral scanners or narrow-band multispectral cameras in real-world settings.
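The RPD statistic quoted above (ratio of performance to deviation) is conventionally the standard deviation of the observed values divided by the prediction RMSE; an RPD above about 2 is usually read as adequate for quantitative prediction. A minimal sketch under that assumed definition:

```python
import numpy as np

def rpd(observed, predicted):
    """Ratio of Performance to Deviation: SD(observed) / RMSE(predicted).

    Assumes the standard chemometrics definition with the sample standard
    deviation (ddof=1); the paper may use a slightly different convention.
    """
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return observed.std(ddof=1) / rmse
```

Intuitively, RPD = 2.52 means the model's errors are roughly 2.5 times smaller than the natural spread of the diesel contents being predicted.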
Science of Remote Sensing, Volume 11, Article 100201.
Citations: 0
Combining machine learning algorithms for bridging gaps in GRACE and GRACE Follow-On missions using ERA5-Land reanalysis
IF 5.7 Q1 ENVIRONMENTAL SCIENCES Pub Date : 2025-01-25 DOI: 10.1016/j.srs.2025.100198
Jaydeo K. Dharpure , Ian M. Howat , Saurabh Kaushik , Bryan G. Mark
The Gravity Recovery and Climate Experiment (GRACE) and GRACE Follow-On (GFO) missions have provided valuable data for monitoring global terrestrial water storage anomalies (TWSA) over the past two decades. However, the nearly one-year gap between these missions poses challenges for long-term TWSA measurements and various applications. Unlike previous studies, we use a combination of machine learning (ML) methods—Random Forest (RF), Support Vector Machine (SVM), eXtreme Gradient Boosting (XGB), Deep Neural Network (DNN), and Stacked Long Short-Term Memory (SLSTM)—to bridge the gap between GRACE and GFO efficiently, using the best-performing ML model to estimate TWSA at each grid cell. The models were trained using six hydroclimatic variables (temperature, precipitation, runoff, evapotranspiration, ERA5-Land-derived TWSA, and cumulative water storage change), as well as a vegetation index and timing variables, to reconstruct global land TWSA at 0.5° grid resolution. We evaluated each model using Nash-Sutcliffe Efficiency (NSE), Pearson's correlation coefficient (PCC), and root mean square error (RMSE). Our results demonstrate test accuracy with area-weighted average NSE, PCC, and RMSE of 0.51 ± 0.31, 0.71 ± 0.23, and 4.75 ± 3.63 cm, respectively. Model performance was further compared across five climatic zones, against two previously reconstructed products (the Li and Humphrey methods) in 26 major river basins, during flood and drought events, and for sea-level rise. Our results showcase the model's superior performance and its capability to accurately predict data gaps at both grid and basin scales globally.
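For reference, the NSE metric used above compares the squared model error against the spread of the observations about their own mean: 1 is a perfect fit, and 0 means the model is no better than predicting the observed mean. A minimal sketch:

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe Efficiency.

    NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    """
    o = np.asarray(observed, dtype=float)
    s = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)
```

Selecting the best-performing ML model per grid cell, as the study does, amounts to computing a score such as this for each candidate model at each cell and keeping the maximizer.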
Science of Remote Sensing, Volume 11, Article 100198.
Citations: 0
Investigating the contribution of understory to radiative transfer simulations through reconstructing 3-D realistic temperate broadleaf forest scenes based on multi-platform laser scanning
IF 5.7 Q1 ENVIRONMENTAL SCIENCES Pub Date : 2025-01-23 DOI: 10.1016/j.srs.2025.100196
Xiaohan Lin , Ainong Li , Jinhu Bian , Zhengjian Zhang , Xi Nan , Limin Chen , Yi Bai , Yi Deng , Siyuan Li
Forests are complex, multi-layered ecosystems mainly comprising an overstory, understory, and soil. Radiative transfer simulations of these forests underpin the theoretical framework for retrieving forest parameters; however, the understory has often been neglected due to limitations in data acquisition technology. In this study, we assessed the contribution of the understory to canopy reflectance in a temperate broadleaf forest by comparing simulated bidirectional reflectance factor (BRF) differences between forest scenes with and without the understory. These scenes were reconstructed using voxel-based, boundary-based, and ellipsoid-based approaches from multi-layered point cloud data acquired by combining unmanned aerial vehicle (UAV) and backpack laser scanning. The results show that the understory influences the simulated BRF under all three forest scene reconstruction approaches, suggesting that canopy reflectance signals can be used to evaluate understory information and providing a theoretical foundation for the feasibility of retrieving understory parameters via remote sensing. The understory increases BRF by 80% in shaded regions beneath the overstory in the red and NIR bands, and can increase BRF by 40% in the NIR band for voxel-based and ellipsoid-based forest scenes. Conversely, it reduces the simulated BRF over sunlit soil areas in the red band. Among the three reconstruction methods, canopy reflectance simulation with the boundary-based model consistently projects the most understory information. Notably, the findings also indicate that forest canopy reflectance captures less understory vegetation information as the simulation resolution decreases. For instance, as the simulated resolution decreased from 1 m to 30 m, the absolute difference in the red band between the multi-layered BRF and L50 BRF decreased from 23.93% to 10.22% when using the boundary-based approach.
This implies that higher-resolution remote sensing observations are more advantageous for retrieving understory parameters. This study provides a successful case of modeling multi-layered forest structure in natural temperate broadleaf forests and offers a theoretical reference for retrieving biochemical and biophysical information from the understory by remote sensing.
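The voxel-based reconstruction named above can be sketched as snapping the fused UAV and backpack point cloud to the set of occupied grid cells; the occupied voxels then stand in for the vegetation volume handed to a radiative transfer model. Voxel size and function name here are illustrative assumptions.

```python
import numpy as np

def voxelize(points, voxel_size=0.5):
    """Map an (N, 3) point cloud to its set of occupied voxel indices.

    Each point is assigned to the voxel containing it (floor division by
    the cell size); duplicates collapse so each occupied cell appears once.
    """
    points = np.asarray(points, dtype=float)
    idx = np.floor(points / voxel_size).astype(np.int64)
    return np.unique(idx, axis=0)
```

Separating overstory and understory returns by height before voxelizing would yield the layered scene variants the study compares.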
Science of Remote Sensing, Volume 11, Article 100196.
Citations: 0
Using airborne LiDAR and enhanced-geolocated GEDI metrics to map structural traits over a Mediterranean forest
IF 5.7 Q1 ENVIRONMENTAL SCIENCES Pub Date : 2025-01-12 DOI: 10.1016/j.srs.2025.100195
Aaron Cardenas-Martinez , Adrian Pascual , Emilia Guisado-Pintado , Victor Rodriguez-Galiano
The estimation of three-dimensional (3D) vegetation metrics from space-borne LiDAR makes it possible to capture spatio-temporal trends in forest ecosystems. Structural traits from the NASA Global Ecosystem Dynamics Investigation (GEDI) are vital for supporting forest monitoring, restoration, and biodiversity protection. The Mediterranean Basin is home to relict forest species facing the consequences of intensified climate change, whose habitats have been progressively shrinking over time. We used two sources of 3D structural metrics, airborne LiDAR point clouds and full-waveform space-borne LiDAR from GEDI, to estimate forest structure in a protected area of southern Spain that is home to relict species in jeopardy due to recent extreme water-stress conditions. We locally calibrated GEDI spaceborne measurements using discrete point clouds collected by an Airborne Laser Scanner (ALS) to adjust the geolocation of GEDI waveform metrics and to predict GEDI structural traits such as canopy height, foliage height diversity, and leaf area index. Our results showed significant improvements in the retrieval of ecological indicators when collocating ALS point clouds with comparable GEDI metrics. The best results for canopy height retrieval after collocation yielded an RMSE of 2.6 m when limited to forest-classified areas and flat terrain, compared with an RMSE of 3.4 m without collocation. Trends for foliage height diversity (FHD; RMSE = 2.1) and leaf area index (LAI; RMSE = 1.6 m²/m²) were less consistent than those for canopy height but confirmed the improvement derived from collocation. The wall-to-wall mapping of GEDI traits framed over ALS surveys now provides sufficient coverage to monitor sparse Mediterranean mountain forests.
Our results showed that combining different LiDAR platforms is particularly important for mapping areas where access to in-situ data is limited and especially in regions with abrupt changes in vegetation cover, such as Mediterranean mountainous forests.
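The collocation gain above is reported as a drop in canopy height RMSE (3.4 m → 2.6 m). As a minimal illustration of the metric itself, not the paper's data, here is an RMSE computation over hypothetical paired GEDI/ALS canopy heights:

```python
import numpy as np

def rmse(predicted, reference):
    """Root-mean-square error between paired height estimates."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((predicted - reference) ** 2)))

# Hypothetical canopy heights (m): GEDI footprint estimates vs. ALS reference
gedi_heights = [12.1, 8.4, 15.0, 10.2]
als_heights = [11.0, 9.0, 14.2, 12.0]
print(rmse(gedi_heights, als_heights))
```

The same function applies unchanged to FHD or LAI retrievals; only the units of the inputs differ.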
Citations: 0
ForestAlign: Automatic forest structure-based alignment for multi-view TLS and ALS point clouds
IF 5.7 Q1 ENVIRONMENTAL SCIENCES Pub Date : 2025-01-06 DOI: 10.1016/j.srs.2024.100194
Juan Castorena, L. Turin Dickman, Adam J. Killebrew, James R. Gattiker, Rod Linn, E. Louise Loudermilk
Access to highly detailed models of heterogeneous forests, spanning from the near surface to above the tree canopy at varying scales, is increasingly in demand. Such models enable advanced computational tools for analysis, planning, and ecosystem management. LiDAR sensors, available through terrestrial (TLS) and aerial (ALS) scanning platforms, have become established as primary technologies for forest monitoring due to their capability to rapidly and directly collect precise 3D structural information. Selection of these platforms typically depends on the scales (tree-level, plot, regional) required for observational or intervention studies. Forestry now recognizes the benefits of a multi-scale approach, leveraging the strengths of each platform while minimizing individual source uncertainties. However, effective integration of these LiDAR sources relies heavily on efficient multi-scale, multi-view co-registration or point-cloud alignment methods. In GPS-denied areas, forestry has traditionally relied on target-based co-registration methods (e.g., reflective or marked trees), which are impractical at scale. Here, we propose ForestAlign: an effective, target-less, and fully automatic co-registration method for aligning forest point clouds collected from multi-view, multi-scale LiDAR sources. Our co-registration approach employs an incremental alignment strategy, grouping and aggregating 3D points based on increasing levels of structural complexity. This strategy aligns 3D points from less complex (e.g., ground surface) to more complex structures (e.g., tree trunks/branches, foliage) sequentially, refining the alignment iteratively. Empirical evidence demonstrates the method's effectiveness in aligning TLS-to-TLS and TLS-to-ALS scans locally, across various ecosystem conditions, including pre/post fire treatment effects. In TLS-to-TLS scenarios, parameter RMSE errors were less than 0.75 degrees in rotation and 5.5 cm in translation. For TLS-to-ALS, the corresponding errors were less than 0.8 degrees and 8 cm, respectively. These results show that our ForestAlign method is effective for co-registering both TLS-to-TLS and TLS-to-ALS scans in such forest environments, without relying on targets, while achieving high performance.
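The rotation/translation errors quoted above describe the rigid transform that co-registration ultimately estimates. The following is a sketch of that generic estimation step, the Kabsch algorithm on corresponding 3D points, shown on synthetic data; it is a standard building block of such pipelines, not the authors' ForestAlign code, which adds the structural-complexity grouping described in the abstract:

```python
import numpy as np

def rigid_align(source, target):
    """Estimate rotation R and translation t so that R @ p + t maps
    source points onto target points (Kabsch algorithm)."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

rng = np.random.default_rng(0)
pts = rng.random((50, 3)) * 10               # synthetic "ground" points
theta = np.radians(5.0)                      # known 5-degree rotation about z
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 0.05])
moved = pts @ R_true.T + t_true              # apply R_true @ p + t_true per point
R_est, t_est = rigid_align(pts, moved)
print(np.allclose(R_est, R_true, atol=1e-8))
```

In a real multi-scan setting the correspondences are unknown, which is exactly where ForestAlign's coarse-to-fine grouping (ground first, then trunks, then foliage) comes in.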
Citations: 0
Comparative analysis of UAV-based LiDAR and photogrammetric systems for the detection of terrain anomalies in a historical conflict landscape
IF 5.7 Q1 ENVIRONMENTAL SCIENCES Pub Date : 2025-01-04 DOI: 10.1016/j.srs.2024.100191
Marcel Storch, Benjamin Kisliuk, Thomas Jarmer, Björn Waske, Norbert de Lange
The documentation of historical artefacts and cultural heritage using high-resolution data obtained from unmanned aerial vehicles (UAVs) is of paramount importance for the preservation of historical knowledge. This study compares three UAV-based systems for the detection of historically relevant terrain anomalies in a conflict landscape. Two laser scanners, a high-end model (RIEGL miniVUX-1UAV) and a lower-priced model (DJI Zenmuse L1), along with a cost-effective optical camera system (photogrammetry using Structure from Motion, SfM), were employed in two study sites with different densities of vegetation. In the study area with deciduous trees and little low vegetation, the DJI Zenmuse L1 performed comparably to the RIEGL miniVUX-1UAV, with higher completeness but lower correctness. The SfM method demonstrated inferior performance with respect to correctness and the F1-score, yet achieved comparable or higher completeness values than the laser scanners (maximum 1.0, median 0.84). In the study area characterized by dense near-ground vegetation, detection results were poorer overall. However, the RIEGL miniVUX-1UAV still delivered superior anomaly detection (F1-score maximum 0.61, median 0.53) compared to the other systems. The DJI Zenmuse L1 data showed lower performance (F1-score maximum 0.56, median 0.46). Both laser scanners exhibited better results than the SfM approach, which reached a maximum F1-score of only 0.12. Hence, the SfM method is viable under specific conditions, such as defoliated trees without dense low vegetation. Lower-cost systems can therefore offer cost-effective alternatives to the high-end LiDAR system in suitable environments, although limitations persist in densely vegetated areas.
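Completeness and correctness as used above correspond to recall and precision, and the F1-score is their harmonic mean. A minimal sketch with hypothetical detection counts (the counts are illustrative, not from the study):

```python
def f1_score(completeness, correctness):
    """F1 = harmonic mean of completeness (recall) and correctness (precision)."""
    if completeness + correctness == 0:
        return 0.0
    return 2 * completeness * correctness / (completeness + correctness)

# Hypothetical anomaly-detection counts for one survey site
true_positives, false_negatives, false_positives = 42, 18, 25
completeness = true_positives / (true_positives + false_negatives)  # recall = 0.70
correctness = true_positives / (true_positives + false_positives)   # precision ≈ 0.63
print(round(f1_score(completeness, correctness), 2))
```

Because F1 is a harmonic mean, a system with very high completeness but poor correctness (as reported for SfM here) still ends up with a low F1-score.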
Citations: 0
Large-scale mapping of plastic-mulched land from Sentinel-2 using an index-feature-spatial-attention fused deep learning model
IF 5.7 Q1 ENVIRONMENTAL SCIENCES Pub Date : 2025-01-03 DOI: 10.1016/j.srs.2024.100188
Lizhen Lu, Yunci Xu, Xinyu Huang, Hankui K. Zhang, Yuqi Du
Accurate and timely large-scale mapping of Plastic-Mulched Land (PML) using satellite data supports precision agriculture and enhances understanding of PML's impacts on regional climate and environment. However, accurately mapping large-scale PML remains challenging due to the relatively small size and short lifespan of visible PML. In this paper, we demonstrate large-scale PML mapping using Sentinel-2 data by combining PML domain knowledge with a deep Convolutional Neural Network (CNN). We developed a dual-branch Index-Feature-Spatial-Attention fused Deep Learning Model (IFSA_DLM) that effectively acquires and fuses multi-scale discriminative features to accurately detect PML. The proposed model was trained on one agricultural zone with 2019 Sentinel-2 data and evaluated across six agricultural zones in Xinjiang, China (spanning >1500 km), on Sentinel-2 and Landsat 8 data acquired in 2019 and 2023 to examine spatial, temporal and cross-sensor transferability. Results show that the IFSA_DLM model outperforms three compared U-Net series models, with 94.48% Overall Accuracy (OA), 87.69% mean Intersection over Union (mIoU) and a 93.38% F1 score. The model's spatial, temporal and sensor transferability is demonstrated by its successful cross-region, cross-time and Landsat-8 applications. Large-scale maps of PML in Xinjiang in both 2019 and 2023 further confirmed the effectiveness of the proposed approach.
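IoU and mIoU, the headline metrics above (and in the DeepSARFlood paper at the top of this page), are standard segmentation scores: the overlap of predicted and reference masks divided by their union, averaged over classes for mIoU. A minimal sketch on toy binary masks (illustrative values only, not the study's data):

```python
import numpy as np

def iou(pred, truth):
    """Intersection over Union for binary masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])   # toy predicted PML mask
truth = np.array([[1, 0, 0],
                  [0, 1, 1]])  # toy reference PML mask
# mIoU averages per-class IoU over the two classes (PML and background)
miou = (iou(pred, truth) + iou(1 - pred, 1 - truth)) / 2
print(round(float(iou(pred, truth)), 2))
```

Note that IoU is stricter than pixel accuracy: with 4 of 6 pixels agreeing here, accuracy would be 0.67 while the PML-class IoU is only 0.5.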
Citations: 0