Alpine greening deciphered by forest stand and structure dynamics in advancing treelines of the southwestern European Alps
Arthur Bayle, Baptiste Nicoud, Jérôme Mansons, Loïc Francon, Christophe Corona, Philippe Choler
Multidecadal time series of satellite observations, such as those from Landsat, offer the possibility to study trends in vegetation greenness at unprecedented spatial and temporal scales. Alpine ecosystems have exhibited large increases in vegetation greenness as seen from space; nevertheless, the ecological processes underlying alpine greening have rarely been investigated. Here, we used a unique dataset of forest stand and structure characteristics derived from manually orthorectified high-resolution diachronic images (1983 and 2018), dendrochronology and LiDAR analysis to decipher the ecological processes underlying alpine greening in the southwestern French Alps, previously identified as a hotspot of greening at the scale of the European Alps. We found that most of the alpine greening in this area can be attributed to forest dynamics, including forest ingrowth and treeline upward shift. Furthermore, we showed that the magnitude of the greening was highest in pixels where trees first established at the beginning of the Landsat time series in the mid-1980s, corresponding to a specific forest successional stage. In these pixels, we observed that trees from the first wave of establishment grew between 1984 and 2023, while over the same period younger trees established in forest gaps, leading to increases in both vertical and horizontal vegetation cover. This study provides an in-depth description of the causal relationship between forest dynamics and greening, offering a unique example of how ecological processes translate into radiometric signals while paving the way for the study of large-scale treeline dynamics using satellite remote sensing.
{"title":"Alpine greening deciphered by forest stand and structure dynamics in advancing treelines of the southwestern European Alps","authors":"Arthur Bayle, Baptiste Nicoud, Jérôme Mansons, Loïc Francon, Christophe Corona, Philippe Choler","doi":"10.1002/rse2.430","DOIUrl":"https://doi.org/10.1002/rse2.430","url":null,"abstract":"Multidecadal time series of satellite observations, such as those from Landsat, offer the possibility to study trends in vegetation greenness at unprecedented spatial and temporal scales. Alpine ecosystems have exhibited large increases in vegetation greenness as seen from space; nevertheless, the ecological processes underlying alpine greening have rarely been investigated. Here, we used a unique dataset of forest stand and structure characteristics derived from manually orthorectified high‐resolution diachronic images (1983 and 2018), dendrochronology and LiDAR analysis to decipher the ecological processes underlying alpine greening in the southwestern French Alps, formerly identified as a hotspot of greening at the scale of the European Alps by previous studies. We found that most of the alpine greening in this area can be attributed to forest dynamics, including forest ingrowth and treeline upward shift. Furthermore, we showed that the magnitude of the greening was highest in pixels/areas where trees were first established at the beginning of the Landsat time series in the mid‐80s corresponding to a specific forest successional stage. In these pixels, we observe that trees from the first wave of establishment have grown between 1984 and 2023, while over the same period, younger trees established in forest gaps, leading to increases in both vertical and horizontal vegetation cover. This study provides an in‐depth description of the causal relationship between forest dynamics and greening, providing a unique example of how ecological processes translate into radiometric signals, while also paving the way for the study of large‐scale treeline dynamics using satellite remote sensing.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"27 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142916837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The secret acoustic world of leopards: A paired camera trap and bioacoustics survey facilitates the individual identification of leopards via their roars
Jonathan Growcott, Alex Lobora, Andrew Markham, Charlotte E. Searle, Johan Wahlström, Matthew Wijers, Benno I. Simmons
Conservation requires accurate information about species occupancy, populations and behaviour. However, gathering these data for elusive, solitary species, such as leopards (Panthera pardus), is often challenging. Utilizing novel technologies that augment data collection by exploiting different species' traits could enable monitoring at larger spatiotemporal scales. Here, we conducted the first large-scale (~450 km²) paired passive acoustic monitoring (n = 50) and camera trapping (n = 50) survey for large African carnivores, in Nyerere National Park, Tanzania. We tested whether leopards could be individually distinguished by their vocalizations. We identified individual leopards from camera trap images and then extracted their roaring bouts from the concurrent audio. We extracted leopard roar summary features and used two-state Gaussian hidden Markov models (HMMs) to model the temporal pattern of individual leopard roars. Using leopard roar summary features, individual vocal discrimination was achieved at a maximum accuracy of 46.6%. When using HMMs to evaluate the temporal pattern of a leopard's roar, individual identification was more successful, with an overall accuracy of 93.1% and a macro-F1 score of 0.78. Our study shows that multiple modes of technology recording complementary data can be used to discover species traits, such as the fact that individual leopards can be identified from their vocalizations. Even though additional equipment, data management and analytical expertise are required, paired surveys are a promising monitoring methodology that can exploit a wider variety of species traits to monitor and inform species conservation more efficiently than single-technology studies alone.
{"title":"The secret acoustic world of leopards: A paired camera trap and bioacoustics survey facilitates the individual identification of leopards via their roars","authors":"Jonathan Growcott, Alex Lobora, Andrew Markham, Charlotte E. Searle, Johan Wahlström, Matthew Wijers, Benno I. Simmons","doi":"10.1002/rse2.429","DOIUrl":"https://doi.org/10.1002/rse2.429","url":null,"abstract":"Conservation requires accurate information about species occupancy, populations and behaviour. However, gathering these data for elusive, solitary species, such as leopards (<jats:italic>Panthera pardus</jats:italic>), is often challenging. Utilizing novel technologies that augment data collection by exploiting different species' traits could enable monitoring at larger spatiotemporal scales. Here, we conducted the first, large‐scale (~450 km<jats:sup>2</jats:sup>) paired passive acoustic monitoring (<jats:italic>n</jats:italic> = 50) and camera trapping survey (<jats:italic>n =</jats:italic> 50), for large African carnivores, in Nyerere National Park, Tanzania. We tested whether leopards could be individually distinguished by their vocalizations. We identified individual leopards from camera trap images and then extracted their roaring bouts in the concurrent audio. We extracted leopard roar summary features and used 2‐state Gaussian Hidden–Markov Models (HMMs) to model the temporal pattern of individual leopard roars. Using leopard roar summary features, individual vocal discrimination was achieved at a maximum accuracy of 46.6%. When using HMMs to evaluate the temporal pattern of a leopard's roar, individual identification was more successful, with an overall accuracy of 93.1% and macro‐F1 score of 0.78. Our study shows that using multiple modes of technology, which record complementary data, can be used to discover species traits, such as, individual leopards can be identified from their vocalizations. Even though additional equipment, data management and analytical expertise are required, paired surveys are still a promising monitoring methodology which can exploit a wider variety of species traits, to monitor and inform species conservation more efficiently, than single technology studies alone.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"48 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142874482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mapping oil palm plantations and their implications on forest and great ape habitat loss in Central Africa
Mohammed S. Ozigis, Serge Wich, Adrià Descals, Zoltan Szantoi, Erik Meijaard
Oil palm (Elaeis guineensis) cultivation in Central Africa (CA) has become important because of the increased global demand for vegetable oils. The region is highly suitable for oil palm cultivation, which increases pressure on its forest biodiversity. Accurate maps are therefore needed to understand trends in oil palm expansion for landscape-level planning, conservation management of endangered species such as great apes, biodiversity appraisal and the supply of ecosystem services. In this study, we demonstrate the utility of a U-Net deep learning model and product fusion for mapping the extent of oil palm plantations across six CA countries: Cameroon, Central African Republic, Democratic Republic of Congo (DRC), Equatorial Guinea, Gabon and Republic of Congo. Sentinel-1 and Sentinel-2 data for the year 2021 were classified using a U-Net model. Overall classification accuracy for the final oil palm layer was 96.4 ± 1.1%. Producer's accuracy (PA) and user's accuracy (UA) were 91.6 ± 1.7% and 95.0 ± 1.3% for the industrial oil palm class, and 67.7 ± 2.8% and 70.0 ± 2.8% for the smallholder class. Post-classification assessment of the transition from tropical moist forest (TMF) to oil palm within the six CA countries suggests that over 1000 km² of forest within great ape ranges was converted to oil palm between 2000 and 2021. Results from this study indicate a more extensive cover of smallholder oil palm than previously reported for the region. Our results also indicate that expansion of other agricultural activities may be an important driver of deforestation, as nearly 170 000 km² of forest loss was recorded within the IUCN ranges of the African great apes between 2000 and 2021. Output from this study represents the first oil palm map for CA with specific emphasis on the impact of its expansion on great ape ranges. This provides a dependable baseline for formulating future actions to address the conservation needs of African great apes within the region.
{"title":"Mapping oil palm plantations and their implications on forest and great ape habitat loss in Central Africa","authors":"Mohammed S. Ozigis, Serge Wich, Adrià Descals, Zoltan Szantoi, Erik Meijaard","doi":"10.1002/rse2.428","DOIUrl":"https://doi.org/10.1002/rse2.428","url":null,"abstract":"Oil palm (<jats:italic>Elaeis guineensis</jats:italic>) cultivation in Central Africa (CA) has become important because of the increased global demand for vegetable oils. The region is highly suitable for the cultivation of oil palm and this increases pressure on forest biodiversity in the region. Accurate maps are therefore needed to understand trends in oil palm expansion for landscape‐level planning, conservation management of endangered species, such as great apes, biodiversity appraisal and supply of ecosystem services. In this study, we demonstrate the utility of a U‐Net Deep Learning Model and product fusion for mapping the extent of oil palm plantations for six countries within CA, including Cameroon, Central African Republic, Democratic Republic of Congo (DRC), Equatorial Guinea, Gabon and Republic of Congo. Sentinel‐1 and Sentinel‐2 data for the year 2021 were classified using a U‐Net model. Overall classification accuracy for the final oil palm layer was 96.4 ± 1.1%. Producer Accuracy (PA) and User Accuracy (UA) for the industrial and smallholder oil palm classes were 91.6 ± 1.7% and 95.0 ± 1.3%, 67.7 ± 2.8% and 70.0 ± 2.8%. Post classification assessment of the transition from tropical moist forest (TMF) cover to oil palm within the six CA countries suggests that over 1000 Square Kilometer (km<jats:sup>2</jats:sup>) of forest within great ape ranges had so far been converted to oil palm between 2000 and 2021. Results from this study indicate a more extensive cover of smallholder oil palm than previously reported for the region. Our results also indicate that expansion of other agricultural activities may be an important driver of deforestation as nearly 170 000 km<jats:sup>2</jats:sup> of forest loss was recorded within the IUCN ranges of the African great apes between 2000 and 2021. Output from this study represents the first oil palm map for the CA, with specific emphasis on the impact of its expansion on great ape ranges. This presents a dependable baseline through which future actions can be formulated in addressing conservation needs for the African Great Apes within the region.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"35 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142825182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The untapped potential of camera traps for farmland biodiversity monitoring: current practice and outstanding agroecological questions
Stephanie Roilo, Tim R. Hofmeester, Magali Frauendorf, Anna Widén, Anna F. Cord
Agroecosystems are experiencing a biodiversity crisis. Biodiversity monitoring is needed to inform conservation, but existing monitoring schemes lack standardisation and are biased towards birds, insects and plants. Automated monitoring techniques offer a promising solution, but while passive acoustic monitoring and remote sensing are increasingly used, the potential of camera traps (CTs) in farmland remains underexplored. We reviewed CT publications from the last 30 years and found only 59 articles that sampled farmland habitats in Europe. The main research topics addressed management or (avian) conservation issues, such as monitoring wildlife-livestock interactions, nest predation, and the use of feeders and water troughs. Fewer studies employed landscape-wide approaches to investigate species' habitat use or activity patterns over large agricultural areas. We discuss existing barriers to more widespread use of CTs in farmland and suggest strategies to overcome them: boxed CTs tailored for small mammals, reptiles and amphibians, perch-mounted CTs for raptor monitoring, and time-lapse imagery can help address the technical challenges of monitoring (small) elusive species in open habitats, where misfires and missed detections are more frequent. Such approaches would also expand the taxonomic coverage of farmland monitoring schemes towards under-surveyed species and species groups. Moreover, the engagement of farmers in CT-based biodiversity monitoring programmes and advances in computer vision for image classification provide opportunities for low-cost, broad-scale and automated monitoring schemes. Research priorities that could be tackled through such CT applications include basic science topics such as unravelling animal space use in agricultural landscapes and how this is influenced by varying agricultural practices. Management-related research priorities relate to crop damage and livestock predation by wildlife, disease transmission between wildlife and livestock, effects of agrochemicals on wildlife, and the monitoring and assessment of conservation measures. Altogether, CTs hold great, yet unexplored, potential to advance agroecological research.
{"title":"The untapped potential of camera traps for farmland biodiversity monitoring: current practice and outstanding agroecological questions","authors":"Stephanie Roilo, Tim R. Hofmeester, Magali Frauendorf, Anna Widén, Anna F. Cord","doi":"10.1002/rse2.426","DOIUrl":"https://doi.org/10.1002/rse2.426","url":null,"abstract":"Agroecosystems are experiencing a biodiversity crisis. Biodiversity monitoring is needed to inform conservation, but existing monitoring schemes lack standardisation and are biased towards birds, insects and plants. Automated monitoring techniques offer a promising solution, but while passive acoustic monitoring and remote sensing are increasingly used, the potential of camera traps (CTs) in farmland remains underexplored. We reviewed CT publications from the last 30 years and found only 59 articles that sampled farmland habitats in Europe. The main research topics addressed management or (avian) conservation issues, such as monitoring wildlife‐livestock interactions, nest predation, and the use of feeders and water troughs. Fewer studies employed landscape‐wide approaches to investigate species' habitat use or activity patterns over large agricultural areas. We discuss existing barriers to a more widespread use of CTs in farmland and suggest strategies to overcome them: boxed CTs tailored for small mammals, reptiles and amphibians, perch‐mounted CTs for raptor monitoring and time‐lapse imagery can help in overcoming the technical challenges of monitoring (small) elusive species in open habitats where misfires and missed detections are more frequent. Such approaches would also expand the taxonomic coverage of farmland monitoring schemes towards under‐surveyed species and species groups. Moreover, the engagement of farmers in CT‐based biodiversity monitoring programmes and advances in computer vision for image classification provide opportunities for low‐cost, broad‐scale and automated monitoring schemes. Research priorities that could be tackled through such CT applications include basic science topics such as unravelling animal space use in agricultural landscapes, and how this is influenced by varying agricultural practices. Management‐related research priorities relate to crop damage and livestock predation by wildlife, disease transmission between wildlife and livestock, effects of agrochemicals on wildlife, and the monitoring and assessment of conservation measures. Altogether, CTs hold great, yet unexplored, potential to advance agroecological research.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"45 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142820622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantifying range‐ and topographical biases in weather surveillance radar measures of migratory bird activity
Miguel F. Jimenez, Birgen Haest, Ali Khalighifar, Annika L. Abbott, Abigail Feuka, Aitao Liu, Kyle G. Horton
Weather radar systems have become a central tool in the study of nocturnal bird migration. Yet, while studies have sought to validate weather radar data through comparison to other sampling techniques, few have explicitly examined the impact of range and topographical blockage on sampling detection—critical dimensions that can bias broader inferences. Here, we assess these biases in relation to the Cheyenne, Wyoming, Next Generation Weather Radar (NEXRAD) site, one of the large-scale radars in a network of 160 weather surveillance stations across the United States. We compared local density measures collected using a mobile, vertically looking radar with reflectivity from the NEXRAD station in the corresponding area. Both mean nightly migration activity and within-night migration activity between NEXRAD and the mobile radar were strongly correlated (r = 0.85 and 0.70, respectively), but this relationship degraded with both increasing distance and beam blockage. Range-corrected NEXRAD reflectivity was a stronger predictor of observed mobile radar densities than uncorrected reflectivity at the mean nightly scale, suggesting that current range correction methods are somewhat effective at correcting for this bias. At the within-night temporal scale, corrected and uncorrected reflectivity models performed similarly up to 65 km, but beyond this distance, uncorrected reflectivity became a stronger predictor than range-corrected reflectivity, suggesting range limitations to these corrections. Together, our findings further validate weather radar as an ornithological tool, but also highlight and quantify potential sampling biases.
{"title":"Quantifying range‐ and topographical biases in weather surveillance radar measures of migratory bird activity","authors":"Miguel F. Jimenez, Birgen Haest, Ali Khalighifar, Annika L. Abbott, Abigail Feuka, Aitao Liu, Kyle G. Horton","doi":"10.1002/rse2.423","DOIUrl":"https://doi.org/10.1002/rse2.423","url":null,"abstract":"Weather radar systems have become a central tool in the study of nocturnal bird migration. Yet, while studies have sought to validate weather radar data through comparison to other sampling techniques, few have explicitly examined the impact of range and topographical blockage on sampling detection—critical dimensions that can bias broader inferences. Here, we assess these biases with relation to the Cheyenne, WY Next Generation Weather Radar (NEXRAD) site, one of the large‐scale radars in a network of 160 weather surveillance stations based in the United States. We compared local density measures collected using a mobile, vertically looking radar with reflectivity from the NEXRAD station in the corresponding area. Both mean nightly migration activity and within night migration activity between NEXRAD and the mobile radar were strongly correlated (<jats:italic>r</jats:italic> = 0.85 and 0.70, respectively), but this relationship degraded with both increasing distance and beam blockage. Range‐corrected NEXRAD reflectivity was a stronger predictor of observed mobile radar densities than uncorrected reflectivity at the mean nightly scale, suggesting that current range correction methods are somewhat effective at correcting for this bias. At the within night temporal scale, corrected and uncorrected reflectivity models performed similarly up to 65 km, but beyond this distance, uncorrected reflectivity became a stronger predictor than range‐corrected reflectivity, suggesting range limitations to these corrections. Together, our findings further validate weather radar as an ornithological tool, but also highlight and quantify potential sampling biases.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"86 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142820623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A random encounter model for wildlife density estimation with vertically oriented camera traps
Shuiqing He, J. Marcus Rowcliffe, Hanzhe Lin, Chris Carbone, Yorick Liefting, Shyam K. Thapa, Bishnu P. Shrestha, Patrick A. Jansen
The random encounter model (REM) estimates animal densities from camera-trap data by correcting capture rates for a set of biological variables of the animals (average group size, speed and activity level) and characteristics of camera sensors. The REM has been widely used for setups in which cameras are mounted on trees or other structures and aimed parallel to the ground. Here, we modify the REM formula to accommodate the alternative field of view acquired with vertically oriented camera traps, a type of deployment used to avoid camera theft and damage. We show how the calculations can be adapted to account for a different detection zone with minor modifications. We find that the effective detection area can be close to a rectangle, with dimensions influenced by the properties of the Fresnel lens of the camera's motion sensor, the body mass of different species and the height of the camera. The other REM parameters remain the same. We tested the modified REM (vREM) by applying it to wildlife data collected with vertically oriented camera traps in Bardia National Park, Nepal. Using maximum likelihood estimation, we further validated that the effective detection area for the camera model used was best approximated by a rectangle. Density estimates broadly matched independent estimates from previous studies in Bardia for nine species whose body sizes varied by four orders of magnitude. We conclude that these modifications allow the REM to be used effectively with vertically oriented camera traps for density estimation of mammal species spanning a wide range of body sizes.
{"title":"A random encounter model for wildlife density estimation with vertically oriented camera traps","authors":"Shuiqing He, J. Marcus Rowcliffe, Hanzhe Lin, Chris Carbone, Yorick Liefting, Shyam K. Thapa, Bishnu P. Shrestha, Patrick A. Jansen","doi":"10.1002/rse2.427","DOIUrl":"https://doi.org/10.1002/rse2.427","url":null,"abstract":"The random encounter model (REM) estimates animal densities from camera‐trap data by correcting capture rates for a set of biological variables of the animals (average group size, speed and activity level) and characteristics of camera sensors. The REM has been widely used for setups in which cameras are mounted on trees or other structures aimed parallel to the ground. Here, we modify the REM formula to accommodate an alternative field of view acquired with vertically oriented camera traps, a type of deployment used to avoid camera theft and damage. We show how the calculations can be adapted to account for a different detection zone with minor modifications. We find that the effective detection area can be close to a rectangle with dimensions influenced by the properties of the Fresnel lens of the camera's motion sensor, the body mass of different species and the height of the camera. The other REM parameters remain the same. We tested the modified REM (vREM) by applying it to wildlife data collected with vertically oriented camera traps in Bardia National Park, Nepal. We further validated that the effective detection area for the camera model used was best approximated as a rectangle shape using maximum likelihood estimation. Density estimates obtained broadly matched independent density estimates for nine species from the previous studies in Bardia with varying body sizes by four orders of magnitude. We conclude that these modifications allow the REM to be effectively used for mammal density estimation for species with a wide range of body sizes, with vertically oriented camera traps.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"13 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142760516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comparison of established and digital surface model (DSM)‐based methods to determine population estimates and densities for king penguin colonies, using fixed‐wing drone and satellite imagery
J. Coleman, N. Fenney, P.N. Trathan, A. Fox, E. Fox, A. Bennison, L. Ireland, M.A. Collins, P.R. Hollyman
Drones are increasingly used to monitor wildlife populations; their large spatial coverage and minimal disturbance make them ideal for use in remote environments where access and time are limited. The methods used to count animals in the resulting imagery need careful consideration, as they can be time-consuming and costly. In this study, we used a fixed-wing drone and Beyond Visual Line of Sight flying to create high-resolution imagery and digital surface models (DSMs) of six large king penguin colonies (colony population sizes ranging from 10,671 to 132,577 pairs) in South Georgia. We used a novel DSM-based method to facilitate automated and semi-automated counts of each colony to estimate population size. We assessed these DSM-derived counts against other popular counting and post-processing methodologies, including those from satellite imagery, and compared these to the results from four colonies counted manually to evaluate accuracy and effort. We randomly subsampled four colonies to test the most efficient and accurate methods for density-based counts, including at the colony edge, where population density is lower. Subsampling quadrats (each 25 m²) together with DSM-based counts offered the best compromise between accuracy and effort. Where high-resolution drone imagery was available, accuracy was within 3.5% of manual reference counts. DSM methods were more accurate than other established methods, including estimation from satellite imagery, and are applicable to population studies of other taxa worldwide. Results and methods will be used to inform and develop a long-term king penguin monitoring programme.
{"title":"A comparison of established and digital surface model (DSM)‐based methods to determine population estimates and densities for king penguin colonies, using fixed‐wing drone and satellite imagery","authors":"J. Coleman, N. Fenney, P.N. Trathan, A. Fox, E. Fox, A. Bennison, L. Ireland, M.A. Collins, P.R. Hollyman","doi":"10.1002/rse2.424","DOIUrl":"https://doi.org/10.1002/rse2.424","url":null,"abstract":"Drones are being increasingly used to monitor wildlife populations; their large spatial coverage and minimal disturbance make them ideal for use in remote environments where access and time are limited. The methods used to count resulting imagery need consideration as they can be time‐consuming and costly. In this study, we used a fixed‐wing drone and Beyond Visual Line of Sight flying to create high‐resolution imagery and digital surface models (DSMs) of six large king penguin colonies (colony population sizes ranging from 10,671 to 132,577 pairs) in South Georgia. We used a novel DSM‐based method to facilitate automated and semi‐automated counts of each colony to estimate population size. We assessed these DSM‐derived counts against other popular counting and post‐processing methodologies, including those from satellite imagery, and compared these to the results from four colonies counted manually to evaluate accuracy and effort. We randomly subsampled four colonies to test the most efficient and accurate methods for density‐based counts, including at the colony edge, where population density is lower. Sub‐sampling quadrats (each 25 m<jats:sup>2</jats:sup>) together with DSM‐based counts offered the best compromise between accuracy and effort. Where high‐resolution drone imagery was available, accuracy was within 3.5% of manual reference counts. DSM methods were more accurate than other established methods including estimation from satellite imagery and are applicable for population studies across other taxa worldwide. Results and methods will be used to inform and develop a long‐term king penguin monitoring programme.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"46 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142753671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Illuminating the Arctic: Unveiling seabird responses to artificial light during polar darkness through citizen science and remote sensing
Kaja Balazy, Dariusz Jakubas, Andrzej Kotarba, Katarzyna Wojczulanis‐Jakubas
Artificial light at night (ALAN) has global impacts on animals, often negative, yet its effects in polar regions remain largely underexplored. These regions experience prolonged darkness during the polar night, while human activity and artificial lighting are rapidly increasing. In this study, we analyzed a decade of citizen science data on light-sensitive seabird occurrences in Longyearbyen, a High-Arctic port settlement, to examine the impact of environmental factors, including ALAN, during the polar night. Our investigation incorporated remote sensing data on nighttime light levels and sea ice presence, and air temperature measurements from a local meteorological station. Our findings reveal that artificial light may affect seabird diversity in this region, with overall diversity decreasing as light intensity increases. However, the relationship between artificial light and seabird diversity was not uniformly negative; individual species exhibited varied responses. We also detected a correlation between artificial light and air temperature, emphasizing the complexity of environmental interactions. Notably, the piscivorous Black Guillemot (Cepphus grylle), the dominant species in Longyearbyen during the polar night, showed an increased contribution to the local seabird assemblage at higher light levels. In contrast, the zooplanktivorous Little Auk (Alle alle) exhibited a reduced contribution at higher light intensities and an increased presence at higher air temperatures. We hypothesize that these differing responses are closely tied to the distinct dietary habits, varying sensitivity to artificial light due to individual adaptations, and overall ecological flexibility of these species, underscoring the need for further research. This study, which uniquely combines citizen science with remote sensing data, represents the first effort to systematically assess the effects of artificial lighting on seabirds during the polar night. The findings underscore the potential importance of this issue for seabird conservation in polar regions.
{"title":"Illuminating the Arctic: Unveiling seabird responses to artificial light during polar darkness through citizen science and remote sensing","authors":"Kaja Balazy, Dariusz Jakubas, Andrzej Kotarba, Katarzyna Wojczulanis‐Jakubas","doi":"10.1002/rse2.425","DOIUrl":"https://doi.org/10.1002/rse2.425","url":null,"abstract":"Artificial light at night (ALAN) has global impacts on animals, often negative, yet its effects in polar regions remains largely underexplored. These regions experience prolonged darkness during the polar night, while human activity and artificial lighting are rapidly increasing. In this study, we analyzed a decade of citizen science data on light‐sensitive seabird occurrences in Longyearbyen, a High‐Arctic port settlement, to examine the impact of environmental factors including ALAN during polar night. Our investigation incorporated remote sensing data on nighttime lights levels, sea ice presence, and air temperature measurements from local meteorological station. Our findings reveal that artificial light may potentially impact seabird diversity in this region, with overall diversity decreasing alongside light intensity. However, the relationship between artificial light and seabird diversity was not uniformly negative; individual species exhibited varied responses. We also detected a correlation between artificial light and air temperature, emphasizing the complexity of environmental interactions. Notably, the piscivorous Black Guillemot (<jats:italic>Cepphus grylle</jats:italic>), the dominant species in Longyearbyen during the polar night, showed increased contribution in the local seabird assemblage with higher light levels. In contrast, the zooplanktivorous Little Auk (<jats:italic>Alle alle</jats:italic>) exhibited reduced contribution with higher light intensity and increased presence with higher air temperatures. We hypothesize that these differing responses are closely tied to the distinct dietary habits, varying sensitivity to artificial light due to individual adaptations, and overall ecological flexibility of these species, underscoring the need for further research. This study, which uniquely combines citizen science with remote sensing data, represents the first effort to systematically assess the effects of artificial lighting on seabirds during the polar night. The findings underscore the potential importance of this issue for seabird conservation in polar regions.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"67 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142694143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Near real‐time monitoring of wading birds using uncrewed aircraft systems and computer vision
Ethan P. White, Lindsey Garner, Ben G. Weinstein, Henry Senyondo, Andrew Ortega, Ashley Steinkraus, Glenda M. Yenni, Peter Frederick, S. K. Morgan Ernest
Wildlife population monitoring over large geographic areas is increasingly feasible due to developments in aerial survey methods coupled with the use of computer vision models for identifying and classifying individual organisms. However, aerial surveys still occur infrequently, and there are often long delays between the acquisition of airborne imagery and its conversion into population monitoring data. Near real-time monitoring is increasingly important for active management decisions and ecological forecasting. Accomplishing this over large scales requires a combination of airborne imagery, computer vision models to process imagery into information on individual organisms, and automated workflows to ensure that imagery is quickly processed into data following acquisition. Here we present our end-to-end workflow for conducting near real-time monitoring of wading birds in the Everglades, Florida, USA. Imagery is acquired as frequently as weekly using uncrewed aircraft systems (aka drones), processed into orthomosaics (using Agisoft Metashape), converted into individual-level species data using a RetinaNet-50 object detector, post-processed, archived, and presented on a web-based visualization platform (using Shiny). The main components of the workflow are automated using Snakemake. The underlying computer vision model provides accurate object detection, species classification, and both total and species-level counts for five out of six target species (White Ibis, Great Egret, Great Blue Heron, Wood Stork, and Roseate Spoonbill). The model performed poorly for Snowy Egrets due to the small number of labels and difficulty distinguishing them from White Ibis (the most abundant species). By automating the post-survey processing, data on the populations of these species are available in near real-time (<1 week from the date of the survey), providing information at the time scales needed for ecological forecasting and active management.
{"title":"Near real‐time monitoring of wading birds using uncrewed aircraft systems and computer vision","authors":"Ethan P. White, Lindsey Garner, Ben G. Weinstein, Henry Senyondo, Andrew Ortega, Ashley Steinkraus, Glenda M. Yenni, Peter Frederick, S. K. Morgan Ernest","doi":"10.1002/rse2.421","DOIUrl":"https://doi.org/10.1002/rse2.421","url":null,"abstract":"Wildlife population monitoring over large geographic areas is increasingly feasible due to developments in aerial survey methods coupled with the use of computer vision models for identifying and classifying individual organisms. However, aerial surveys still occur infrequently, and there are often long delays between the acquisition of airborne imagery and its conversion into population monitoring data. Near real‐time monitoring is increasingly important for active management decisions and ecological forecasting. Accomplishing this over large scales requires a combination of airborne imagery, computer vision models to process imagery into information on individual organisms, and automated workflows to ensure that imagery is quickly processed into data following acquisition. Here we present our end‐to‐end workflow for conducting near real‐time monitoring of wading birds in the Everglades, Florida, USA. Imagery is acquired as frequently as weekly using uncrewed aircraft systems (aka drones), processed into orthomosaics (using Agisoft metashape), converted into individual‐level species data using a Retinanet‐50 object detector, post‐processed, archived, and presented on a web‐based visualization platform (using Shiny). The main components of the workflow are automated using Snakemake. The underlying computer vision model provides accurate object detection, species classification, and both total and species‐level counts for five out of six target species (White Ibis, Great Egret, Great Blue Heron, Wood Stork, and Roseate Spoonbill). The model performed poorly for Snowy Egrets due to the small number of labels and difficulty distinguishing them from White Ibis (the most abundant species). By automating the post‐survey processing, data on the populations of these species is available in near real‐time (<1 week from the date of the survey) providing information at the time scales needed for ecological forecasting and active management.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"70 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142597720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Examining wildfire dynamics using ECOSTRESS data with machine learning approaches: the case of South‐Eastern Australia's black summer
Yuanhui Zhu, Shakthi B. Murugesan, Ivone K. Masara, Soe W. Myint, Joshua B. Fisher
Wildfires are increasing in risk and prevalence. The most destructive wildfires in decades in Australia occurred in 2019–2020. However, developing effective models to understand the likelihood of wildfire spread (susceptibility) and pre-fire vegetation conditions remains a challenge. The recent launch of NASA's ECOSTRESS presents an opportunity to monitor fire dynamics at a high resolution of 70 m by measuring ecosystem stress and drought conditions preceding wildfires. We incorporated ECOSTRESS data, vegetation indices, rainfall, and topographic data as independent variables and fire events as dependent variables into machine learning algorithms applied to the historic Australian wildfires of 2019–2020. With these data, we predicted over 90% of all wildfire occurrences 1 week ahead of these wildfire events. Our models drew on vegetation conditions over a 3-week time lag preceding wildfire events in the fourth week and predicted the probability of wildfire occurrence in the subsequent (fifth) week. ECOSTRESS water use efficiency (WUE) consistently emerged as the leading factor in all models predicting wildfires. Results suggest that the pre-fire vegetation was affected by wildfires in areas with WUE above 2 g C kg−1 H₂O at the 95% probability level. Additionally, the ECOSTRESS evaporative stress index and topographic slope were identified as significant contributors in predicting wildfire susceptibility. These results indicate a significant potential for ECOSTRESS data to predict and analyze wildfires and emphasize the crucial role of drought conditions in wildfire events. The approaches and outcomes developed in this study can help policymakers, fire managers, and city planners assess, prepare for, manage, and mitigate wildfires in the future.
{"title":"Examining wildfire dynamics using ECOSTRESS data with machine learning approaches: the case of South‐Eastern Australia's black summer","authors":"Yuanhui Zhu, Shakthi B. Murugesan, Ivone K. Masara, Soe W. Myint, Joshua B. Fisher","doi":"10.1002/rse2.422","DOIUrl":"https://doi.org/10.1002/rse2.422","url":null,"abstract":"Wildfires are increasing in risk and prevalence. The most destructive wildfires in decades in Australia occurred in 2019–2020. However, there is still a challenge in developing effective models to understand the likelihood of wildfire spread (susceptibility) and pre‐fire vegetation conditions. The recent launch of NASA's ECOSTRESS presents an opportunity to monitor fire dynamics with a high resolution of 70 m by measuring ecosystem stress and drought conditions preceding wildfires. We incorporated ECOSTRESS data, vegetation indices, rainfall, and topographic data as independent variables and fire events as dependent variables into machine learning algorithms applied to the historic Australian wildfires of 2019–2020. With these data, we predicted over 90% of all wildfire occurrences 1 week ahead of these wildfire events. Our models identified vegetation conditions with a 3‐week time lag before wildfire events in the fourth week and predicted the probability of wildfire occurrences in the subsequent week (fifth week). ECOSTRESS water use efficiency (WUE) consistently emerged as the leading factor in all models predicting wildfires. Results suggest that the pre‐fire vegetation was affected by wildfires in areas with WUE above 2 g C kg<jats:sup>−1</jats:sup> H₂O at 95% probability level. Additionally, the ECOSTRESS evaporative stress index and topographic slope were identified as significant contributors in predicting wildfire susceptibility. These results indicate a significant potential for ECOSTRESS data to predict and analyze wildfires and emphasize the crucial role of drought conditions in wildfire events, as evident from ECOSTRESS data. Our approaches developed in this study and outcome can help policymakers, fire managers, and city planners assess, manage, prepare, and mitigate wildfires in the future.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"13 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142588850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}