On the compatibility of single-scan terrestrial LiDAR with digital photogrammetry and field inventory metrics of vegetation structure in forest and agroforestry landscapes
Magnus Onyiriagwu, Nereoh Leley, Caleb W. T. Ngaba, Anthony Macharia, Henry Muchiri, Abdalla Kisiwa, Martin Ehbrecht, Delphine Clara Zemp
Remote Sensing in Ecology and Conservation (2025-12-13). DOI: 10.1002/rse2.70047
In tropical ecosystems, accurately quantifying vegetation structure is crucial to determining their capacity to deliver ecosystem services. Terrestrial laser scanning (TLS) and UAV-based digital aerial photogrammetry (DAP) are remote sensing tools used to assess vegetation structure, but conventional acquisition and processing workflows are challenging to apply in these environments. Single-scan TLS and DTM-independent DAP are alternative approaches for describing vegetation structure; however, it remains unclear to what extent they relate to each other and how accurately they can distinguish forest structural characteristics, including vertical structure, horizontal structure, vegetation density, and structural heterogeneity. First, we quantified bivariate and multivariate correlations between equivalent or analogous structural metrics from these data sources using principal component and Procrustes analyses. We then evaluated their ability to characterize forest and agroforestry landscapes. DAP, TLS, and field metrics were moderately aligned for vegetation density, canopy top height, and gap dynamics, but differed in height variability and surface heterogeneity, reflecting differences in data structure. DAP and TLS achieved the highest accuracy in classifying forest and agroforestry plots, with overall accuracies of 89% and 78%, respectively. Although the field metrics were unable to resolve 3D characteristics related to heterogeneity, they distinguished stand structure with 69% accuracy, driven by the relative pattern across their suite of metrics. The results indicate that single-scan TLS and DTM-independent DAP yield meaningful descriptors of vegetation structure, which, when combined, can provide a comprehensive representation of structure in these tropical landscapes.
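
The multivariate comparison step is easy to prototype. Below is a minimal sketch, with synthetic plot-by-metric matrices standing in for the TLS and DAP data, of the principal component plus Procrustes workflow the abstract names: ordinate each metric set separately, then superimpose the ordinations and report the Procrustes disparity.

```python
# Minimal sketch of the correlation workflow on synthetic data: PCA ordinations
# of plot-level structural metrics from two sources (e.g. TLS and DAP), then a
# Procrustes superimposition of the two ordinations.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.spatial import procrustes

rng = np.random.default_rng(42)
n_plots = 40
tls = rng.normal(size=(n_plots, 6))  # hypothetical TLS structural metrics
# Correlated DAP metrics: a linear mixing of the TLS metrics plus noise
dap = tls @ rng.normal(size=(6, 6)) * 0.8 + rng.normal(scale=0.5, size=(n_plots, 6))

# Standardize and ordinate each metric set separately
pc_tls = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(tls))
pc_dap = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(dap))

# Procrustes superimposition: disparity near 0 means strong multivariate agreement
_, _, disparity = procrustes(pc_tls, pc_dap)
print(f"Procrustes disparity (sum of squared differences): {disparity:.3f}")
```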
{"title":"On the compatibility of single‐scan terrestrial LiDAR with digital photogrammetry and field inventory metrics of vegetation structure in forest and agroforestry landscapes","authors":"Magnus Onyiriagwu, Nereoh Leley, Caleb W. T. Ngaba, Anthony Macharia, Henry Muchiri, Abdalla Kisiwa, Martin Ehbrecht, Delphine Clara Zemp","doi":"10.1002/rse2.70047","DOIUrl":"https://doi.org/10.1002/rse2.70047","url":null,"abstract":"In tropical ecosystems, accurately quantifying vegetation structure is crucial to determining their capacity to deliver ecosystem services. Terrestrial laser scanning (TLS) and UAV‐based digital aerial photogrammetry (DAP) are remote sensing tools used to assess vegetation structure, but are challenging to use with conventional methods. Single‐Scan TLS and DTM‐independent DAPs are alternative scanning approaches used to describe vegetation structure; however, it remains unclear to what extent they relate to each other and how accurately they can distinguish forest structural characteristics, including vertical structure, horizontal structure, vegetation density, and structural heterogeneity. First, we quantified bivariate and multivariate correlations between equivalent/analogous structural metrics from these data sources using principal component and Procrustes analysis. We then evaluated their ability to characterize the forest and agroforestry landscapes. DAP, TLS, and Field metrics were moderately aligned for vegetation density, canopy top height, and gap dynamics, but differed in height variability and surface heterogeneity, reflecting differences in data structure. DAP and TLS achieved the highest accuracy in classifying forests and agroforestry plots, with overall accuracies of 89% and 78%, respectively. Though the field metrics were unable to resolve 3D characteristics related to heterogeneity, their capacity to distinguish the stand structure at 69% accuracy was driven by the relative pattern of its suite of metrics. The results indicate that the single‐scan TLS and DTM‐independent DAP yield meaningful descriptors of vegetation structure, which, when combined, can provide a comprehensive representation of the structure in these tropical landscapes.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"93 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2025-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145746672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Using phenology to improve invasive plant detection in fine-scale hyperspectral drone-based images
Kelsey S. Huelsman, Howard E. Epstein, Xi Yang, Roderick Walker
Remote Sensing in Ecology and Conservation (2025-12-09). DOI: 10.1002/rse2.70049
Mapping and managing invasive plants are top priorities for land managers, but traditional approaches are time- and labor-intensive. To improve detection efforts, we explored the effectiveness of hyperspectral, drone-based detection algorithms that incorporate phenology. We collected fine-resolution (3 cm) hyperspectral images using a drone equipped with a Nano-Hyperspec imager on seven dates from April to November 2020, and then used a subsample of pixels from the images to develop multitemporal detection algorithms for three invasive plant species within heterogeneous vegetation communities. The three species are invasive in much of the U.S. and in Virginia, where the data were collected: Ailanthus altissima (tree of heaven), Elaeagnus umbellata (autumn olive), and Rhamnus davurica (Dahurian buckthorn). We determined when each species could be accurately detected, what spectral features allowed for detection, and the consistency of those features over a growing season. All three species could be detected in June. Only E. umbellata had consistently accurate algorithms, which relied on consistent features in the visible and red-edge regions across the growing season. Its most accurate detection algorithms in the summer included features in the yellow-orange spectral region. A. altissima and R. davurica were both detectable in the mid- and late-growing seasons, with little overlap in key spectral features across dates. Our results indicate that even a small subset of data from hyperspectral imagery can be used to accurately detect invasive plants in heterogeneous plant communities, and that incorporating species-specific phenological traits into detection algorithms improves detection, laying methodological and theoretical groundwork for the future of invasive species management.
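
A hedged sketch of the per-date detection idea (the authors' exact classifier and features are not given in the abstract; the random forest, band range, and synthetic spectra below are illustrative assumptions): fit one model per acquisition date on labeled pixel spectra, then compare dates and inspect which wavelengths carry the signal.

```python
# Illustrative sketch, not the authors' pipeline: one classifier per acquisition
# date on labeled pixel spectra; separability is made to decay over the season.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 1000, 270)  # assumed VNIR range of the imager

def synthetic_pixels(separability, n=200):
    """Target vs. background spectra whose difference peaks in the red edge."""
    y = rng.integers(0, 2, n)
    bump = np.exp(-((wavelengths - 720) ** 2) / 800)  # red-edge feature
    X = rng.normal(size=(n, wavelengths.size)) + separability * np.outer(y, bump)
    return X, y

for date, sep in [("June", 2.0), ("August", 1.0), ("October", 0.3)]:
    X, y = synthetic_pixels(sep)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    top_band = wavelengths[clf.fit(X, y).feature_importances_.argmax()]
    print(f"{date}: CV accuracy {acc:.2f}, most informative band ~{top_band:.0f} nm")
```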
{"title":"Using phenology to improve invasive plant detection in fine‐scale hyperspectral drone‐based images","authors":"Kelsey S. Huelsman, Howard E. Epstein, Xi Yang, Roderick Walker","doi":"10.1002/rse2.70049","DOIUrl":"https://doi.org/10.1002/rse2.70049","url":null,"abstract":"Mapping and managing invasive plants are top priorities for land managers, but traditional approaches are time and labor‐intensive. To improve detection efforts, we explored the effectiveness of hyperspectral, drone‐based detection algorithms that incorporate phenology. We collected fine‐resolution (3 cm) hyperspectral images using a drone equipped with a Nano‐Hyperspec imager on seven dates from April to November, 2020 and then used a subsample of pixels from the images to develop multitemporal detection algorithms for three invasive plant species within heterogeneous vegetation communities. The three species are invasive in much of the U.S. and in Virginia, where the data were collected: <jats:italic>Ailanthus altissima</jats:italic> (tree of heaven), <jats:italic>Elaeagnus umbellata</jats:italic> (autumn olive), and <jats:italic>Rhamnus davurica</jats:italic> (Dahurian buckthorn). We determined when each species could be accurately detected, what spectral features allowed for detection, and the consistency of those features over a growing season. All three species could be detected in June. Only <jats:italic>E. umbellata</jats:italic> had consistently accurate algorithms and used consistent features in the visible and red edge across the growing season. Its most accurate detection algorithms in the summer included features in the yellow‐orange spectral region. <jats:italic>A. altissima</jats:italic> and <jats:italic>R. davurica</jats:italic> were both detectable in the mid‐ and late‐growing seasons, with little overlap in key spectral features across dates. Our results indicate that even a small subset of data from hyperspectral imagery can be used to accurately detect invasive plants in heterogeneous plant communities, and that incorporating species‐specific phenological traits into detection algorithms improves detection, laying methodological and theoretical groundwork for the future of invasive species management.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"17 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2025-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145704601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Cameras do not always take a full picture: wolf activity patterns revealed by accelerometers versus road-positioned camera traps
Katarzyna Bojarska, Michał Żmihorski, Morteza Naderi, J. David Blount, Mark Chynoweth, Emrah Coban, Çağan H. Şekercioğlu, Josip Kusak
Remote Sensing in Ecology and Conservation (2025-12-05). DOI: 10.1002/rse2.70045
While animal-attached devices provide the most detailed information on animal behaviour, camera traps have become an increasingly popular non-invasive alternative in wildlife ecology. Here, we compared activity patterns of wolves (Canis lupus) assessed with accelerometers and road-positioned camera traps in two study areas in Croatia and north-eastern Türkiye. We used accelerometer data from 37 wolves and camera trap data from 82,375 camera trap days at 358 road locations from 2010 to 2021. We fitted generalised additive mixed models to determine the times of day and parts of the year with the highest and lowest wolf activity and correlated the predictions between accelerometer- and camera-based models. Wolf activity patterns predicted from road-positioned camera traps and accelerometer data were significantly positively correlated, but the strength of the correlation varied among areas, times of day and seasons. The lowest and highest activity periods showed little overlap between the two methods. In both study areas, camera trap data failed to detect the increase in daylight activity during the pup-rearing season that was evident in accelerometer data. Overall, camera traps proved adequate for describing general daily and seasonal wolf activity patterns, while discrepancies between the two methods may largely be attributed to camera placement on roads. In light of the increasing use of camera traps in ecological research, our results highlight the value of animal-attached devices for tracking individuals and suggest caution when interpreting activity patterns from road-mounted cameras.
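
The comparison step can be illustrated compactly. The sketch below substitutes a periodic spline smoother for the paper's generalised additive mixed models (a deliberate simplification) and uses synthetic diel activity data: smooth each method's observations over hour of day, then correlate the hourly predictions.

```python
# Simplified stand-in for the GAMM comparison: fit a periodic smoother of
# activity vs. hour of day per method, then correlate the hourly predictions.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

rng = np.random.default_rng(1)
hours = rng.uniform(0, 24, 500)

def truth(h):
    # Crepuscular activity: von Mises-like peaks near dawn (5 h) and dusk (20 h)
    return (np.exp(2 * np.cos(2 * np.pi * (h - 5) / 24))
            + np.exp(2 * np.cos(2 * np.pi * (h - 20) / 24)))

acc_obs = truth(hours) + rng.normal(scale=0.2, size=hours.size)
cam_obs = 0.6 * truth(hours + 1) + rng.normal(scale=0.2, size=hours.size)  # biased sampling

def diel_smoother():
    # Periodic cubic splines keep the fitted curve continuous across midnight
    return make_pipeline(
        SplineTransformer(degree=3, n_knots=10, extrapolation="periodic"),
        Ridge(alpha=1.0))

grid = np.linspace(0, 24, 97).reshape(-1, 1)
pred_acc = diel_smoother().fit(hours.reshape(-1, 1), acc_obs).predict(grid)
pred_cam = diel_smoother().fit(hours.reshape(-1, 1), cam_obs).predict(grid)
rho, _ = spearmanr(pred_acc, pred_cam)
print(f"Spearman correlation of hourly activity predictions: {rho:.2f}")
```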
{"title":"Cameras do not always take a full picture: wolf activity patterns revealed by accelerometers versus road‐positioned camera traps","authors":"Katarzyna Bojarska, Michał Żmihorski, Morteza Naderi, J. David Blount, Mark Chynoweth, Emrah Coban, Çağan H. Şekercioğlu, Josip Kusak","doi":"10.1002/rse2.70045","DOIUrl":"https://doi.org/10.1002/rse2.70045","url":null,"abstract":"While animal‐attached devices provide the most detailed information on animal behaviour, camera traps have become an increasingly popular non‐invasive alternative in wildlife ecology. Here, we compared activity patterns of wolves ( <jats:italic>Canis lupus</jats:italic> ) assessed with accelerometers and road‐positioned camera traps in two study areas in Croatia and north‐eastern Türkiye. We used accelerometer data from 37 wolves and camera trap data from 82,375 camera trap days at 358 road locations from 2010 to 2021. We fitted generalised additive mixed models to determine the times of day and parts of the year with the highest and lowest wolf activity and correlated the predictions between accelerometer‐ and camera‐based models. Wolf activity patterns predicted from road‐positioned camera traps and accelerometer data were significantly positively correlated, but the strength of the correlation varied among areas, times of day and seasons. The lowest and highest activity periods showed little overlap between the two methods. In both study areas, camera trap data failed to detect the increase in daylight activity during the pup‐rearing season evident in accelerometer data. Overall, camera traps proved adequate for describing general daily and seasonal wolf activity patterns, while discrepancies between the two methods may largely be attributed to camera placement on roads. In light of the increasing use of camera traps in ecological research, our results highlight the value of animal‐attached devices for tracking individuals and recommend caution when interpreting activity patterns from road‐mounted cameras.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"29 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2025-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145673785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Impact of parameterization in multiple acoustic index comparisons: practical cases in terrestrial and underwater soundscapes
Juan C. Azofeifa-Solano, Miles J. G. Parsons, James Kemp, Rohan M. Brooker, Robert D. McCauley, Shyam Madhusudhana, Mathew Wyatt, Stephen D. Simpson, Christine Erbe
Remote Sensing in Ecology and Conservation (2025-12-05). DOI: 10.1002/rse2.70044
Acoustic indices are increasingly used to characterize soundscapes and infer biodiversity patterns in terrestrial and marine environments. However, methodological choices during data collection and signal processing, particularly the selection of sampling frequency, the number of Fourier transform points (NFFT) and window overlap, can influence the output of acoustic indices, multivariate analyses and their ecological interpretations. Here, we evaluated the effects of these parameters on multivariate soundscape separation in two example environment comparisons: terrestrial (Bushland vs. Urban) and underwater (Pocillopora-dominated vs. non-Pocillopora-dominated). We assessed the influence of parameterization by computing 432 spectrogram configurations per recording across five commonly used acoustic indices. Using non-metric multidimensional scaling, multivariate descriptors and Bayesian models, we found that parameter selection influenced soundscape separation in each environment, with data-specific interactions. For instance, greater NFFT values increased the centroid distance between habitats in terrestrial soundscapes but decreased it in underwater soundscapes. Our results confirm earlier findings that acoustic indices can be sensitive to spectrogram parameterization, and extend them by demonstrating, with a systematic multivariate framework, how interactions among sampling frequency, NFFT and window overlap affect soundscape separation across environments. This approach emphasizes the need for parameter sensitivity testing, transparent reporting and careful interpretation when comparing soundscapes. Code: https://github.com/juancarlosazofeifasolano/acousticindices_parametrisation.git
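
The sensitivity test itself is straightforward to reproduce for any single index. The sketch below uses normalised spectral entropy as a stand-in index (an assumption; the paper evaluates five indices) and sweeps NFFT and window overlap on a synthetic recording.

```python
# Sketch of the parameter-sensitivity idea: compute one acoustic index over a
# grid of NFFT and window-overlap settings and see how much its value moves.
from itertools import product

import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(2)
fs = 48_000
t = np.arange(0, 10, 1 / fs)
# Synthetic soundscape: broadband noise plus two tonal "choruses"
audio = (rng.normal(scale=0.3, size=t.size)
         + np.sin(2 * np.pi * 2500 * t)
         + 0.5 * np.sin(2 * np.pi * 7000 * t))

def spectral_entropy(x, fs, nfft, overlap):
    """Normalised (0-1) entropy of the mean power spectrum."""
    _, _, S = spectrogram(x, fs=fs, nperseg=nfft, noverlap=int(nfft * overlap))
    p = S.mean(axis=1)
    p /= p.sum()
    return -(p * np.log2(p + 1e-12)).sum() / np.log2(p.size)

for nfft, overlap in product([512, 1024, 2048], [0.0, 0.5, 0.9]):
    h = spectral_entropy(audio, fs, nfft, overlap)
    print(f"NFFT={nfft:5d} overlap={overlap:.1f} -> entropy {h:.3f}")
```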
{"title":"Impact of parameterization in multiple acoustic index comparisons: practical cases in terrestrial and underwater soundscapes","authors":"Juan C. Azofeifa‐Solano, Miles J. G. Parsons, James Kemp, Rohan M. Brooker, Robert D. McCauley, Shyam Madhusudhana, Mathew Wyatt, Stephen D. Simpson, Christine Erbe","doi":"10.1002/rse2.70044","DOIUrl":"https://doi.org/10.1002/rse2.70044","url":null,"abstract":"Acoustic indices are increasingly used to characterize soundscapes and infer biodiversity patterns in terrestrial and marine environments. However, methodological choices during data collection and signal processing—particularly the selection of sampling frequency, Fourier transform number of points and window overlap—can influence the output of acoustic indices, multivariate analysis and their ecological interpretations. Here, we evaluated the effects of these parameters on multivariate soundscape separation with two example environment comparisons: terrestrial (Bushland vs. Urban) and underwater ( <jats:italic>Pocillopora</jats:italic> dominated vs. Non‐ <jats:italic>Pocillopora</jats:italic> dominated). We assessed the influence of parameterization by computing 432 spectrogram configurations per recording across five commonly used acoustic indices. Using non‐metric multidimensional scaling, multivariate descriptors and Bayesian models, we found that parameter selection influenced soundscape separation in each environment example with data‐specific interactions. For instance, greater NFFT values increased centroid distance between habitats in terrestrial soundscapes but decreased it in underwater soundscapes. Our results confirm earlier findings that acoustic indices can be sensitive to spectrogram parameterization, and extend these by demonstrating, with a systematic multivariate framework, how interactions among sampling frequency, NFFT and window overlap affect soundscape separation across environments. This approach emphasizes the need for parameter sensitivity testing, transparent reporting and careful interpretation when comparing soundscapes. Code: <jats:ext-link xmlns:xlink=\"http://www.w3.org/1999/xlink\" xlink:href=\"https://github.com/juancarlosazofeifasolano/acousticindices_parametrisation.git\">https://github.com/juancarlosazofeifasolano/acousticindices_parametrisation.git</jats:ext-link> .","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"21 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2025-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145673784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Programmed unmanned aerial vehicles show great potential for monitoring marine megafauna in specific areas of interest
Dinah Hartmann, Valdemar Palmqvist, Johanna Stedt
Remote Sensing in Ecology and Conservation (2025-11-25). DOI: 10.1002/rse2.70043
Targeted conservation measures are contingent on robust knowledge of spatio-temporal animal distribution in areas of interest. We explore unmanned aerial vehicle (UAV) transect monitoring as a novel method for standardized digital aerial surveys of marine megafauna by investigating the fine-resolution spatio-temporal distribution of harbour porpoises (Phocoena phocoena) in a Swedish nature reserve, along with the drivers of this distribution and potential biases. Biweekly UAV video data were collected along pre-programmed strip transects over 17 weeks from June to September 2023, totalling a survey area of 3.37 km², thereby providing porpoise monitoring data covering 89% of a special area of conservation for the species. All UAV video data were manually reviewed by a primary observer, and 25% of the footage was also reviewed by a second, inexperienced observer to identify observer bias and learning effects. No significant observer bias or learning effect was found, but increased sea state negatively affected observed porpoise density. From the monitoring data, we were able to calculate relative density estimates, identify small-scale spatio-temporal differences and detect negative effects of recreational boat activity on porpoise presence. We further demonstrate that within this restricted area, porpoises occur at higher relative densities outside a designated conservation area than within it, providing important knowledge to guide fine-scale local conservation actions. We highlight advantages and areas of improvement of UAV transect monitoring as an accessible, versatile and adaptable method to survey marine megafauna in spatially restricted areas of interest. We conclude that this method constitutes a promising and valuable tool for wildlife monitoring, especially as it can be easily adapted and modified for specific contexts and species.
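
The density estimation underlying such strip-transect surveys reduces to count per surveyed area. A back-of-envelope sketch with hypothetical numbers (the strip dimensions and counts below are not the study's data), adding an exact Poisson interval on the count:

```python
# Strip-transect relative density: with pre-programmed strips the surveyed area
# is known, so density = count / area, with a Garwood (exact Poisson) interval.
from scipy.stats import chi2

def strip_density(count, strip_length_km, strip_width_km, n_strips, alpha=0.05):
    area = strip_length_km * strip_width_km * n_strips  # km^2 surveyed
    density = count / area
    # Exact Poisson CI on the count, then scaled to a density
    lo = chi2.ppf(alpha / 2, 2 * count) / 2 if count > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (count + 1)) / 2
    return density, lo / area, hi / area

# Hypothetical survey: ten 2-km strips, 80 m wide, 12 porpoises sighted
d, lo, hi = strip_density(count=12, strip_length_km=2.0, strip_width_km=0.08, n_strips=10)
print(f"Relative density: {d:.1f} porpoises/km^2 (95% CI {lo:.1f}-{hi:.1f})")
```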
{"title":"Programmed unmanned aerial vehicles show great potential for monitoring marine megafauna in specific areas of interest","authors":"Dinah Hartmann, Valdemar Palmqvist, Johanna Stedt","doi":"10.1002/rse2.70043","DOIUrl":"https://doi.org/10.1002/rse2.70043","url":null,"abstract":"Targeted conservation measures are contingent on robust knowledge of spatio‐temporal animal distribution in areas of interest. We explore unmanned aerial vehicle (UAV) transect monitoring as a novel method for standardized digital aerial surveys of marine megafauna by investigating the fine‐resolution spatio‐temporal distribution of harbour porpoises ( <jats:italic>Phocoena phocoena</jats:italic> ) in a Swedish nature reserve along with drivers of this distribution and potential biases. Biweekly UAV video data were collected along pre‐programmed strip transects over 17 weeks from June to September 2023, totalling a survey area of 3.37 km <jats:sup>2</jats:sup> , thereby providing porpoise monitoring data covering 89% of a special area of conservation for the species. All UAV video data were manually reviewed by a primary observer, and 25% of the UAV footage was also reviewed by a second, unexperienced observer to identify observer bias and learning effects. No significant observer bias or learning effect was found, but increased sea state affected porpoise density negatively. From the monitoring data, we were able to calculate relative density estimates, identify small‐scale spatio‐temporal differences and detect negative effects of recreational boat activity on porpoise presence. We further demonstrate that within this restricted area, porpoises are found in higher relative densities outside a designated conservation area, compared to within the conservation area, providing important knowledge to guide fine‐scale local conservation actions. We highlight advantages and areas of improvement of UAV transect monitoring as an accessible, versatile and adaptable method to survey marine megafauna in spatially restricted specific areas of interest. We conclude that this method constitutes a promising and valuable tool for wildlife monitoring, especially as it can be easily adapted and modified for specific contexts and species.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"121 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145593542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Improving forest age estimation to understand subtropical forest regrowth dynamics using deep learning image segmentation of time-series historical aerial photographs
Ying Ki Law, Yi-Fei Gu, Shuwen Liu, Guangqin Song, Aland H. Y. Chan, Cham Man Tse, Zhonghua Liu, Martha J. Ledger, Billy C. H. Hau, Sawaid Abbas, Jin Wu
Remote Sensing in Ecology and Conservation (2025-11-16). DOI: 10.1002/rse2.70042
Accurate forest age estimation is essential for understanding forest recovery trajectories and evaluating the efficacy of restoration strategies. While field-based methods for forest age estimation offer high accuracy, they are spatially constrained and challenging to apply retrospectively. In contrast, satellite-based approaches provide extensive regional coverage but may lack precision at the local landscape level. Historical aerial photographs can bridge this gap by delivering fine-scale land cover information. However, challenges such as limited spectral bands and topographic shadows in hilly terrain introduce uncertainty into land cover segmentation and temporal dynamics, complicating accurate forest age determination. To address these challenges, we developed a two-step deep learning approach for image segmentation using historical aerial photographs: a deep learning model is first pre-trained with open-source forest labels and then fine-tuned on localized forest data. This approach achieved accurate forest segmentation, with our highest-accuracy model (mean intersection over union of 0.859) utilizing a combined U-Net and ResNet50 architecture. Our forest age estimates showed strong agreement with reference data, significantly outperforming existing national forest age products for China in both temporal coverage and accuracy. By overlaying our age product with LiDAR structural metrics, we uncovered strong yet distinct recovery trajectories across forest structure attributes. Collectively, our study demonstrates the effectiveness of deep learning algorithms for forest age monitoring using greyscale historical aerial photographs, while pinpointing the limitations of existing national-scale forest age products for local monitoring. Enhanced fine-scale forest age mapping provides an essential technique and dataset for advancing our understanding of forest regrowth and structural dynamics; this improved knowledge will aid in assessing carbon sequestration potential and in targeting forest management and restoration strategies.
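
A hedged sketch of the two-step setup, assuming the segmentation_models_pytorch library (the abstract does not name the authors' training framework; the tile size, optimizer, and loss below are illustrative assumptions): build a U-Net with a ResNet50 encoder from pre-trained weights, then fine-tune on local labels with the encoder frozen.

```python
# Sketch only: pre-trained U-Net/ResNet50 segmentation model, fine-tuned on
# scarce local labels by freezing the encoder and training the decoder.
import torch
import segmentation_models_pytorch as smp

# Step 1: U-Net with a ResNet50 encoder, starting from pre-trained weights
# (downloads ImageNet weights on first use); in_channels=1 for greyscale photos.
model = smp.Unet(encoder_name="resnet50", encoder_weights="imagenet",
                 in_channels=1, classes=2)

# Step 2: fine-tune on localized forest labels with the encoder frozen, so the
# small local dataset only has to adapt the decoder.
for p in model.encoder.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = smp.losses.DiceLoss(mode="multiclass")

x = torch.randn(2, 1, 256, 256)          # dummy batch of greyscale photo tiles
y = torch.randint(0, 2, (2, 256, 256))   # dummy forest / non-forest masks
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"one fine-tuning step, Dice loss = {loss.item():.3f}")
```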
{"title":"Improving forest age estimation to understand subtropical forest regrowth dynamics using deep learning image segmentation of time‐series historical aerial photographs","authors":"Ying Ki Law, Yi‐Fei Gu, Shuwen Liu, Guangqin Song, Aland H. Y. Chan, Cham Man Tse, Zhonghua Liu, Martha J. Ledger, Billy C. H. Hau, Sawaid Abbas, Jin Wu","doi":"10.1002/rse2.70042","DOIUrl":"https://doi.org/10.1002/rse2.70042","url":null,"abstract":"Accurate forest age estimation is essential for understanding forest recovery trajectories and evaluating the efficacy of restoration strategies. While field‐based methods for forest age estimation offer high accuracy, they are spatially constrained and challenging to apply retrospectively. In contrast, satellite‐based approaches provide extensive regional coverage but may lack precision at the local landscape level. Historical aerial photographs can bridge this gap by delivering fine‐scale land cover information. However, challenges such as limited spectral bands and topographic shadows in hilly terrains introduce uncertainty in land cover segmentation and temporal dynamics, complicating accurate forest age determination. To address these challenges, we developed a two‐step deep learning approach for image segmentation using historical aerial photographs. The method involves using a pre‐trained deep learning model with open‐source forest labels, followed by fine‐tuning based on localized forest data. This approach achieved accurate forest segmentation, with our highest accuracy model (mean IoU of 0.859) utilizing a combined U‐Net and ResNet50 architecture. Our forest age estimates demonstrated superior agreement, significantly outperforming existing national forest age products for China in terms of both temporal coverage and accuracy. By overlaying our age product with LiDAR structural metrics, we uncovered strong yet distinct recovery trajectories across forest structure attributes. Collectively, our study demonstrates the effectiveness of deep learning algorithms for forest age monitoring using greyscale historical aerial photographs, while pinpointing the limitations of existing national‐scale forest age products for local monitoring. Enhanced fine‐scale forest age mapping provides an essential technique and dataset to advance our understanding of forest regrowth and structural dynamics, and this improved knowledge of forest dynamics will aid in assessing carbon sequestration potential and informing targeted forest management and restoration strategies.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"17 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2025-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145525203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Wall-to-wall Amazon forest height mapping with Planet NICFI, aerial LiDAR, and a U-Net regression model
Fabien H. Wagner, Ricardo Dalagnol, Griffin Carter, Mayumi C. M. Hirye, Shivraj Gill, Le Bienfaiteur Sagang Takougoum, Samuel Favrichon, Michael Keller, Jean P. H. B. Ometto, Lorena Alves, Cynthia Creze, Stephanie P. George-Chacon, Shuang Li, Zhihua Liu, Adugna Mullissa, Yan Yang, Erone G. Santos, Sarah R. Worden, Martin Brandt, Philippe Ciais, Stephen C. Hagen, Sassan Saatchi
Remote Sensing in Ecology and Conservation (2025-11-16). DOI: 10.1002/rse2.70041
Tree canopy height is a key indicator of forest biomass, productivity and structure, yet measuring it accurately at regional or larger scales, whether from the ground or remotely, remains challenging. The objective of this study is to generate the first complete canopy height map of the Amazon forest at ~4.78 m resolution using Planet NICFI imagery and deep learning. Specifically, we (i) trained a U-Net regression model on canopy height models (CHMs) derived from tropical airborne LiDAR and their corresponding Planet NICFI images to estimate canopy height, (ii) evaluated the accuracy of our map against existing global products based on Sentinel-2/1 and Maxar Vivid2 imagery and (iii) assessed its capacity to capture small-scale canopy height changes. Tree height predictions on the validation sample had a mean absolute error of 3.68 m, with minimal systematic bias across the full range of tree heights in the Amazon forest. The main biases are a slight overestimation (up to 5 m) for heights of 5–15 m and an underestimation for most trees above 50 m. Outperforming existing global model-based canopy height products in this region, the model accurately estimated canopy heights up to 40–50 m with minimal saturation. We determined that the Amazon forest has an average canopy height of ~22 m (standard deviation ~5.3 m) and exhibits large-scale patterns, ranging from the tallest forests of the Guiana Shield to shorter forests along wetlands, rivers, rocky outcrops, savannas and high elevations. Events such as logging or deforestation could be detected from changes in tree height, and the results demonstrate a first success in monitoring the height of regenerating forests. Finally, we present the complete wall-to-wall map of Amazon forest canopy height.
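
The validation summary in the abstract (overall MAE with height-dependent bias) can be reproduced in a few lines. The synthetic predictions below are shaped only to mimic the reported pattern, overestimation at 5–15 m and underestimation above 50 m:

```python
# Sketch of a height-stratified validation: MAE plus signed bias per height
# class, comparing predicted canopy heights against reference LiDAR heights.
import numpy as np

rng = np.random.default_rng(3)
h_lidar = rng.uniform(2, 60, 10_000)  # reference CHM heights (m), synthetic
# Inject the abstract's bias pattern: +2 m below 15 m, -4 m above 50 m
bias = np.where(h_lidar < 15, 2.0, np.where(h_lidar > 50, -4.0, 0.0))
h_pred = h_lidar + bias + rng.normal(scale=3.0, size=h_lidar.size)

print(f"MAE: {np.abs(h_pred - h_lidar).mean():.2f} m")
for lo, hi in [(2, 15), (15, 50), (50, 60)]:
    m = (h_lidar >= lo) & (h_lidar < hi)
    print(f"  {lo:2d}-{hi:2d} m: bias {np.mean(h_pred[m] - h_lidar[m]):+.2f} m")
```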
{"title":"Wall‐to‐wall Amazon forest height mapping with planet NICFI , Aerial LiDAR , and a U‐Net regression model","authors":"Fabien H. Wagner, Ricardo Dalagnol, Griffin Carter, Mayumi C. M. Hirye, Shivraj Gill, Le Bienfaiteur Sagang Takougoum, Samuel Favrichon, Michael Keller, Jean P. H. B. Ometto, Lorena Alves, Cynthia Creze, Stephanie P. George‐Chacon, Shuang Li, Zhihua Liu, Adugna Mullissa, Yan Yang, Erone G. Santos, Sarah R. Worden, Martin Brandt, Philippe Ciais, Stephen C. Hagen, Sassan Saatchi","doi":"10.1002/rse2.70041","DOIUrl":"https://doi.org/10.1002/rse2.70041","url":null,"abstract":"Tree canopy height is a key indicator of forest biomass, productivity and structure, yet measuring it accurately at regional or larger scales, whether from the ground or remotely, remains challenging. The objective of this study is to generate the first complete canopy height map of the Amazon forest at ~4.78 m resolution using Planet NICFI imagery and deep learning. Specifically, we (i) trained a U‐Net regression model with canopy height models (CHMs) derived from tropical airborne LiDAR and their corresponding Planet NICFI images to estimate canopy height, (ii) evaluated the accuracy of our map against existing global products based on Sentinel‐2/1 and Maxar Vivid2 imagery and (iii) assessed its capacity to capture small‐scale canopy height changes. Tree height predictions on the validation sample had a mean absolute error of 3.68 m, with minimal systematic bias across the full range of tree heights in the Amazon forest. The main biases are a slight overestimation (up to 5 m) for heights of 5–15 m and an underestimation for most trees above 50 m. Outperforming existing global model‐based canopy height products in this region, the model accurately estimated canopy heights up to 40–50 m with minimal saturation. We determined that the Amazon forest has an average canopy height of ~22 m (standard deviation ~5.3 m) and exhibits large‐scale patterns, ranging from the tallest forests of the Guiana Shield to shorter forests along wetlands, rivers, rocky outcrops, savannas and high elevations. Events such as logging or deforestation could be detected from changes in tree height, and the results demonstrated a first success in monitoring the height of regenerating forests. Finally, the map of the Amazon forest canopy height is displayed.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"92 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2025-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145525202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Comparing convolutional neural network and random forest for benthic habitat mapping in Apollo Marine Park
Henry Simmons, Dang Nguyen, Benjamin Misiuk, Daniel Ierodiaconou, Sunil Gupta, Oli Dalby, Mary Young
Remote Sensing in Ecology and Conservation (2025-11-16). DOI: 10.1002/rse2.70038
Marine habitat maps are essential tools for marine spatial planning, providing information for decision-making in conservation and resource management. Accurate classification of benthic habitats supports their sustainable use and identifies key areas for protection. Convolutional neural networks (CNNs) are powerful deep learning algorithms that have shown promise for advancing habitat classification tasks and mapping complex marine environments. This study compares the performance of a CNN and a Random Forest (RF) model in classifying benthic habitats within Apollo Marine Park, Victoria, Australia. Models were trained to classify three distinct habitat types using bathymetry, multibeam backscatter, wave height and positioning data; the RF model additionally had access to 100 bathymetric derivatives, of which 10 were selected as predictors. The CNN achieved an overall accuracy of 67.32%, while the RF model achieved 62.57%. For individual habitats, the CNN obtained F1-scores of 0.664 for high energy circalittoral rock with seabed-covering sponges, 0.538 for low complexity circalittoral rock with non-crowded erect sponges and 0.774 for infralittoral sand and shell mixes; the corresponding RF scores were 0.598, 0.506 and 0.739. Both models encountered challenges in classifying transitional habitat zones, where diffuse boundaries between habitat types led to overlaps and shared acoustic properties. However, the CNN demonstrated an advantage due to its ability to automatically analyse spatial patterns across multiple scales. In contrast, while the RF model incorporated terrain attributes that capture local variation, its use of spatial context was constrained to the predefined scales of the derived features. The CNN's ability to leverage spatial relationships resulted in clearer and more coherent habitat maps, reducing the salt-and-pepper effect commonly observed in pixel-based classifications. This study highlights the potential of CNNs for marine habitat mapping through their ability to classify data derived from multibeam bathymetry, while also identifying avenues for further refinement to enhance their utility in marine spatial planning tasks.
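
The contrast the abstract draws, terrain derivatives at predefined scales versus learned spatial context, is visible in how the RF inputs are built. A sketch with a synthetic seafloor (the window sizes and attributes below are illustrative assumptions, not the study's 100 derivatives):

```python
# Sketch of the RF side of the comparison: pixel-wise bathymetric derivatives
# computed at fixed neighbourhood scales, then fed to a Random Forest.
import numpy as np
from scipy.ndimage import generic_filter, uniform_filter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
bathy = rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1)  # synthetic seafloor

def derivatives(z):
    """Stack simple terrain attributes at two predefined window scales."""
    feats = [z]
    for scale in (3, 9):                                  # fixed analysis scales
        smooth = uniform_filter(z, size=scale)
        feats += [smooth,                                 # broad-scale shape
                  z - smooth,                             # relative position
                  generic_filter(z, np.std, size=scale)]  # local rugosity
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

X = derivatives(bathy)
y = rng.integers(0, 3, X.shape[0])  # dummy labels for three habitat classes
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("feature importances:", np.round(clf.feature_importances_, 3))
```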
{"title":"Comparing convolutional neural network and random forest for benthic habitat mapping in Apollo Marine Park","authors":"Henry Simmons, Dang Nguyen, Benjamin Misiuk, Daniel Ierodiaconou, Sunil Gupta, Oli Dalby, Mary Young","doi":"10.1002/rse2.70038","DOIUrl":"https://doi.org/10.1002/rse2.70038","url":null,"abstract":"Marine habitat maps are essential tools for marine spatial planning, providing information for decision‐making in conservation and resource management. Accurate classification of benthic habitats supports their sustainable use and identifies key areas for protection. Convolutional neural networks (CNNs) are powerful deep learning algorithms that have shown promise for advancing habitat classification tasks and mapping complex marine environments. This study compares the performance of a CNN and a Random Forest (RF) model in classifying benthic habitats within Apollo Marine Park, Victoria, Australia. Models were trained to classify three distinct habitat types using bathymetry, multibeam backscatter, wave height and positioning data; however, the RF model had access to 100 additional bathymetric derivatives, of which 10 were selected as predictors. The CNN achieved an overall accuracy of 67.32%, while the RF model achieved 62.57%. For individual habitats, the CNN obtained F1‐scores of 0.664 for <jats:italic>high energy circalittoral rock with seabed‐covering sponges</jats:italic> , 0.538 for <jats:italic>low complexity circalittoral rock with non‐crowded erect sponges</jats:italic> and 0.774 for <jats:italic>infralittoral sand and shell mixes</jats:italic> . The corresponding RF scores were 0.598, 0.506 and 0.739. Both models encountered challenges in classifying transitional habitat zones, where diffuse boundaries between habitat types led to overlaps and shared acoustic properties. However, the CNN demonstrated an advantage due to its ability to automatically analyse spatial patterns across multiple scales. In contrast, while the RF model incorporated terrain attributes that capture local variation, its ability to utilize spatial context was constrained to predefined scales of the derived features. The CNN's ability to leverage spatial relationships resulted in clearer and more coherent habitat maps, reducing the salt‐and‐pepper effect commonly observed in pixel‐based classifications. This study highlights the potential of CNNs for marine habitat mapping through their ability to classify data derived from multibeam bathymetry, while also identifying avenues for further refinement to enhance their utility in marine spatial planning tasks.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"29 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2025-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145525204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Evaluating methods for high-resolution, national-scale seagrass mapping in Google Earth Engine
Matthew Floyd, Holly K. East, Andrew J. Suggitt
Remote Sensing in Ecology and Conservation (2025-10-28). DOI: 10.1002/rse2.70039
National-scale benthic marine habitat maps underpin monitoring and conservation of vulnerable marine and coastal ecosystems. Cloud-based satellite remote sensing can streamline these processes over spatial scales that would otherwise be financially and logistically challenging. Here, we test the sensitivity of mapped outputs to three key methodological choices when generating open-source, cloud-based satellite maps of seagrass meadows: (1) period of image retrieval (seasonality, tested at n = 7 sites over 5 years); (2) machine learning classification method (SVM, RF, CART) over a range of training pixel densities (n = 12 levels spanning 0.0004–0.8757 training points/km²) and (3) input satellite data choice (n = 3: Landsat 8, Planet NICFI and Sentinel-2). We found that in the Maldives, even when using the best available cloud masking methods, monsoonal cloud patterns introduce noise into satellite images, with implications for mapping accuracy. Comparing methods at the classification phase, Overall Accuracy (OA) was similar between classification methods, though SVM performed best (OA = 84.6%). We also determined that workflows using data derived from Sentinel-2 resulted in the most accurate binary thematic seagrass map (OA = 80.3%), compared to Landsat 8 and Planet NICFI (OA = 72.7% and 74.8%, respectively). These results indicate that data source has a larger effect on OA than classifier type and should therefore be the primary consideration for map producers. We further recommend that, as studies increasingly work over larger extents (i.e. >1,000 km²), the minimum density of points used to train a binary classification of seagrass from Sentinel-2 data ought to be 0.67 points/km². We present an open-source (for non-commercial uses) workflow for generating high-resolution, national-scale seagrass maps. Insights from this work can be applied in other settings globally to improve outcomes for marine planning and for international targets on climate change and the conservation of biodiversity.
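
The classification phase of such a workflow is compact in the Earth Engine Python API. A hedged sketch (the training-point asset path is a placeholder, the date window and SVM hyperparameters are assumptions, and an authenticated Earth Engine session is required):

```python
# Sketch of a Sentinel-2 + SVM seagrass classification in the Earth Engine
# Python API; asset names and parameters are placeholders, not the authors'.
import ee

ee.Initialize()  # requires prior `earthengine authenticate`

# Median composite over an assumed lower-cloud window; the study found that
# monsoonal cloud patterns add noise even with the best cloud masking.
s2 = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
      .filterDate("2023-01-01", "2023-04-30")
      .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
      .median())

bands = ["B2", "B3", "B4"]  # visible bands penetrate the water column best

# Hypothetical training table with a 'class' property (seagrass = 1, other = 0)
points = ee.FeatureCollection("users/example/seagrass_training_points")
training = s2.select(bands).sampleRegions(collection=points,
                                          properties=["class"], scale=10)

# SVM was the best-performing of the three classifiers compared in the study
svm = ee.Classifier.libsvm(kernelType="RBF", gamma=0.5, cost=10)
classified = s2.select(bands).classify(svm.train(training, "class", bands))
```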
{"title":"Evaluating methods for high‐resolution, national‐scale seagrass mapping in Google Earth Engine","authors":"Matthew Floyd, Holly K. East, Andrew J. Suggitt","doi":"10.1002/rse2.70039","DOIUrl":"https://doi.org/10.1002/rse2.70039","url":null,"abstract":"National‐scale benthic marine habitat maps underpin monitoring and conservation of vulnerable marine and coastal ecosystems. Cloud‐based satellite remote sensing can streamline these processes over spatial scales that would otherwise be financially and logistically challenging. Here, we test the sensitivity of mapped outputs to three key methodological choices when generating open‐source cloud‐based satellite maps of seagrass meadows: (1) period of image retrieval (seasonality, tested at <jats:italic>n</jats:italic> = 7 sites over <jats:italic>n</jats:italic> = 5 years); (2) machine learning classification method (SVM, RF, CART) over a range of training pixel densities ( <jats:italic>n</jats:italic> = 12 points with 0.0004–0.8757 training points/km <jats:sup>2</jats:sup> ) and (3) input satellite data choice ( <jats:italic>n</jats:italic> = 3: Landsat 8, Planet NICFI and Sentinel‐2). We found that in the Maldives, when using best available cloud masking methods, monsoonal cloud patterns introduce noise into satellite images, with implications for mapping accuracy. Comparing methods at the classification phase, Overall Accuracy (OA) was similar between classification methods, though SVM performed best (OA = 84.6%). We also determined that workflows using data derived from Sentinel‐2 resulted in the most accurate binary thematic seagrass map (OA = 80.3%), compared to Landsat 8 and Planet NICFI (OA = 72.7 and 74.8%, respectively). These results indicate that data source has a larger effect on OA than classifier type, and therefore should be the primary consideration for map producers. We further recommend that, as studies increasingly work over larger extents (i.e. >1,000 km <jats:sup>2</jats:sup> ), the minimum density of points used to train a binary classification of seagrass from Sentinel‐2 data ought to be 0.67/km <jats:sup>2</jats:sup> . We present an open‐source (for non‐commercial uses) workflow for generating high‐resolution national‐scale seagrass maps. Insights from this work can be applied in other settings globally to improve outcomes for marine planning and international targets on climate change and the conservation of biodiversity.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"56 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2025-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145382037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Ground-truthing of satellite imagery to assess seabird colony size: A test using Adélie penguins
Alexandra J. Strang, Dean P. Anderson, Esme Robinson, Grant Ballard, Annie E. Schmidt, David G. Ainley, Kerry Barton, Fiona Shanhun, Elissa Z. Cameron, Michelle A. LaRue
Remote Sensing in Ecology and Conservation (2025-10-28). DOI: 10.1002/rse2.70040
Adélie penguin (Pygoscelis adeliae) colonies can be detected from space using very high-resolution (VHR; 0.3–0.6 m resolution) satellite imagery, as the contrast between their guano and the surrounding terrain enables colony identification even when physical access is not possible. While VHR imagery has been used to estimate colony size, its potential to detect annual changes remains underexplored, yet this capability is critical for linking population dynamics to oceanographic change. We investigated the utility of VHR imagery for indirect population assessments of this species, expanding on previous work with a decade of imagery and independent population counts. We studied VHR images from four well-surveyed Ross Sea colonies that together represent ~10% of the global population: capes Crozier, Bird and Royds, and Inexpressible Island, over the austral summers of 2009–2021. We used supervised object-based support vector machine classifications to extract guano area from 30 VHR images. We related guano area (m²) to colony size (aerial census counts), assessing for both spatial and temporal autocorrelation. In the process, we investigated various spatial parameters (the average slope steepness, aspect, and perimeter-to-area ratio of the guano). Guano area was highly correlated with concurrent counts of breeding pairs, indicating the ability to detect differences in colony size spanning several orders of magnitude. However, large within-colony variation meant that, when using guano area alone, the number of breeding pairs had to change by 44% before a true change in colony size could be confidently detected. Therefore, although VHR imagery can be used to detect significant differences in colony size, it showed minimal sensitivity to interannual fluctuations, likely due to the difficulty of distinguishing fresh, current-year guano from guano of previous years, which is affected by the rate of weathering. This highlights an important limitation of VHR imagery for some wildlife monitoring and underscores the critical importance of ground validation.
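
The calibration at the core of this approach, regressing census counts on classified guano area and asking what change is detectable above the scatter, can be sketched with synthetic numbers (the noise level below is tuned only for illustration):

```python
# Sketch of the guano-area calibration: power-law fit of breeding pairs vs.
# classified guano area, then the minimum change detectable above the scatter.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
guano_m2 = rng.uniform(500, 50_000, 30)                      # classified guano areas
pairs = 4.0 * guano_m2 * rng.lognormal(sigma=0.13, size=30)  # noisy area-to-pairs relation

# Power-law relationship is linear on log-log axes
slope, intercept, r, _, _ = stats.linregress(np.log(guano_m2), np.log(pairs))
resid_sd = np.std(np.log(pairs) - (intercept + slope * np.log(guano_m2)), ddof=2)

# Two predictions differ detectably only when ~2*sqrt(2) residual SDs apart
# on the log scale (SD of a difference of two independent predictions)
min_change = np.exp(2 * np.sqrt(2) * resid_sd) - 1
print(f"r = {r:.2f}; a ~{100 * min_change:.0f}% change in pairs is needed "
      "to exceed the fit's scatter")
```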
{"title":"Ground‐truthing of satellite imagery to assess seabird colony size: A test using Adélie penguins","authors":"Alexandra J. Strang, Dean P. Anderson, Esme Robinson, Grant Ballard, Annie E. Schmidt, David G. Ainley, Kerry Barton, Fiona Shanhun, Elissa Z. Cameron, Michelle A. LaRue","doi":"10.1002/rse2.70040","DOIUrl":"https://doi.org/10.1002/rse2.70040","url":null,"abstract":"Adélie penguin ( <jats:italic>Pygoscelis adeliae</jats:italic> ) colonies can be detected from space using very high‐resolution (VHR; 0.3–0.6 m resolution) satellite imagery, as the contrast between their guano and the surrounding terrain enables colony identification even when physical access is not possible. While VHR imagery has been used to estimate colony size, its potential to detect annual changes remains underexplored, yet is critical for linking population dynamics to oceanographic change. We investigated the utility of VHR imagery for indirect population assessments of this species, expanding on previous work with a decade of imagery and independent population counts. We studied VHR images from four well‐surveyed Ross Sea colonies, that together represent ~10% of the global population: capes Crozier, Bird and Royds, and Inexpressible Island, over the austral summers of 2009–2021. We used supervised object‐based support vector machine classifications to extract guano area from 30 VHR images. We related guano area (m <jats:sup>2</jats:sup> ) to colony size (aerial census counts), assessing for both spatial and temporal autocorrelation. In the process, we investigated various spatial parameters (the average slope steepness, aspect, and perimeter‐to‐area ratio of the guano). Guano area was highly correlated with concurrent counts of breeding pairs, indicating the ability to detect several orders of magnitude difference in colony size. However, large within‐colony variation meant that when using guano area alone the number of breeding pairs had to change by 44% to confidently detect a true change in colony size. Therefore, although VHR imagery can be used to detect significant differences in colony size, minimal sensitivity to interannual fluctuations was indicated, likely due to the difficulty in distinguishing the fresh, current‐year guano from guano of previous years, affected by the rate of weathering. This highlights an important limitation to advances in VHR imagery for some wildlife monitoring and enforces the criticality of ground validation.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"110 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2025-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145382040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}