Stephanie Roilo, Tim R. Hofmeester, Magali Frauendorf, Anna Widén, Anna F. Cord
Agroecosystems are experiencing a biodiversity crisis. Biodiversity monitoring is needed to inform conservation, but existing monitoring schemes lack standardisation and are biased towards birds, insects and plants. Automated monitoring techniques offer a promising solution, but while passive acoustic monitoring and remote sensing are increasingly used, the potential of camera traps (CTs) in farmland remains underexplored. We reviewed CT publications from the last 30 years and found only 59 articles that sampled farmland habitats in Europe. The main research topics addressed management or (avian) conservation issues, such as monitoring wildlife‐livestock interactions, nest predation, and the use of feeders and water troughs. Fewer studies employed landscape‐wide approaches to investigate species' habitat use or activity patterns over large agricultural areas. We discuss existing barriers to a more widespread use of CTs in farmland and suggest strategies to overcome them: boxed CTs tailored for small mammals, reptiles and amphibians, perch‐mounted CTs for raptor monitoring and time‐lapse imagery can help in overcoming the technical challenges of monitoring (small) elusive species in open habitats where misfires and missed detections are more frequent. Such approaches would also expand the taxonomic coverage of farmland monitoring schemes towards under‐surveyed species and species groups. Moreover, the engagement of farmers in CT‐based biodiversity monitoring programmes and advances in computer vision for image classification provide opportunities for low‐cost, broad‐scale and automated monitoring schemes. Research priorities that could be tackled through such CT applications include basic science topics such as unravelling animal space use in agricultural landscapes, and how this is influenced by varying agricultural practices. Management‐related research priorities relate to crop damage and livestock predation by wildlife, disease transmission between wildlife and livestock, effects of agrochemicals on wildlife, and the monitoring and assessment of conservation measures. Altogether, CTs hold great, yet unexplored, potential to advance agroecological research.
The untapped potential of camera traps for farmland biodiversity monitoring: current practice and outstanding agroecological questions. Remote Sensing in Ecology and Conservation (13 December 2024). doi:10.1002/rse2.426
Miguel F. Jimenez, Birgen Haest, Ali Khalighifar, Annika L. Abbott, Abigail Feuka, Aitao Liu, Kyle G. Horton
Weather radar systems have become a central tool in the study of nocturnal bird migration. Yet, while studies have sought to validate weather radar data through comparison to other sampling techniques, few have explicitly examined the impact of range and topographical blockage on detection—critical dimensions that can bias broader inferences. Here, we assess these biases in relation to the Cheyenne, WY Next Generation Weather Radar (NEXRAD) site, one of 160 large‐scale weather surveillance radars in the United States network. We compared local density measures collected using a mobile, vertically looking radar with reflectivity from the NEXRAD station in the corresponding area. Both mean nightly and within‐night migration activity between NEXRAD and the mobile radar were strongly correlated (r = 0.85 and 0.70, respectively), but this relationship degraded with both increasing distance and beam blockage. Range‐corrected NEXRAD reflectivity was a stronger predictor of observed mobile radar densities than uncorrected reflectivity at the mean nightly scale, suggesting that current range‐correction methods are somewhat effective at correcting for this bias. At the within‐night temporal scale, corrected and uncorrected reflectivity models performed similarly up to 65 km, but beyond this distance uncorrected reflectivity became the stronger predictor, suggesting range limitations to these corrections. Together, our findings further validate weather radar as an ornithological tool, but also highlight and quantify potential sampling biases.
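The headline numbers here are Pearson correlations between paired NEXRAD and mobile‐radar activity measures. A minimal sketch of that comparison (all values are illustrative, not the study's data):

```python
# Sketch: Pearson correlation between paired nightly activity measures,
# of the kind used to compare NEXRAD reflectivity with mobile-radar
# densities. Data values are invented for illustration.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient for two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical mean nightly values (mobile-radar density vs NEXRAD reflectivity)
mobile = [12.0, 30.5, 8.2, 55.1, 40.3, 22.7]
nexrad = [10.1, 28.9, 9.5, 60.2, 37.8, 25.0]
print(round(pearson_r(mobile, nexrad), 2))
```

In the study's framework, the same paired series would then feed regression models with and without range correction to compare predictive strength.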
Quantifying range‐ and topographical biases in weather surveillance radar measures of migratory bird activity. Remote Sensing in Ecology and Conservation (13 December 2024). doi:10.1002/rse2.423
Shuiqing He, J. Marcus Rowcliffe, Hanzhe Lin, Chris Carbone, Yorick Liefting, Shyam K. Thapa, Bishnu P. Shrestha, Patrick A. Jansen
The random encounter model (REM) estimates animal densities from camera‐trap data by correcting capture rates for a set of biological variables of the animals (average group size, speed and activity level) and characteristics of camera sensors. The REM has been widely used for setups in which cameras are mounted on trees or other structures aimed parallel to the ground. Here, we modify the REM formula to accommodate an alternative field of view acquired with vertically oriented camera traps, a type of deployment used to avoid camera theft and damage. We show how the calculations can be adapted to account for a different detection zone with minor modifications. We find that the effective detection area can be close to a rectangle with dimensions influenced by the properties of the Fresnel lens of the camera's motion sensor, the body mass of different species and the height of the camera. The other REM parameters remain the same. We tested the modified REM (vREM) by applying it to wildlife data collected with vertically oriented camera traps in Bardia National Park, Nepal. We further validated, using maximum likelihood estimation, that the effective detection area for the camera model used was best approximated by a rectangle. Density estimates broadly matched independent estimates from previous studies in Bardia for nine species whose body sizes span four orders of magnitude. We conclude that these modifications allow the REM to be effectively used with vertically oriented camera traps for density estimation of mammals with a wide range of body sizes.
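The modification enters through the effective detection area. A minimal sketch of REM‐style density estimation with a rectangular detection zone, using the standard two‐dimensional ideal‐gas result that animals moving at speed v cross a convex region of perimeter P at rate DvP/π. The function name and numbers are illustrative, and the paper's body‐mass and camera‐height scaling of the rectangle is omitted:

```python
# Sketch of REM-style density estimation for a rectangular detection zone.
# Encounter rate for randomly moving animals crossing a convex region:
#   captures / camera_days = D * v_eff * P / pi,   P = 2 * (width + length)
# so density D = trap_rate * pi / (v_eff * P). Numbers are illustrative.
from math import pi

def rem_density_rect(captures, camera_days, speed_km_day, activity,
                     width_m, length_m):
    """Animals per km^2 from capture rate, day-range speed (km/day),
    proportion of time active, and a rectangular detection area (m)."""
    perimeter_km = 2 * (width_m + length_m) / 1000.0
    trap_rate = captures / camera_days           # captures per camera-day
    effective_speed = speed_km_day * activity    # distance moved while active
    return trap_rate * pi / (effective_speed * perimeter_km)

# Hypothetical deployment: 120 captures over 600 camera-days
print(rem_density_rect(captures=120, camera_days=600,
                       speed_km_day=2.0, activity=0.5,
                       width_m=4.0, length_m=6.0))
```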
A random encounter model for wildlife density estimation with vertically oriented camera traps. Remote Sensing in Ecology and Conservation (2 December 2024). doi:10.1002/rse2.427
J. Coleman, N. Fenney, P.N. Trathan, A. Fox, E. Fox, A. Bennison, L. Ireland, M.A. Collins, P.R. Hollyman
Drones are increasingly being used to monitor wildlife populations; their large spatial coverage and minimal disturbance make them ideal for use in remote environments where access and time are limited. The methods used to count animals in the resulting imagery warrant consideration, as they can be time‐consuming and costly. In this study, we used a fixed‐wing drone and Beyond Visual Line of Sight flying to create high‐resolution imagery and digital surface models (DSMs) of six large king penguin colonies (colony population sizes ranging from 10,671 to 132,577 pairs) in South Georgia. We used a novel DSM‐based method to facilitate automated and semi‐automated counts of each colony to estimate population size. We assessed these DSM‐derived counts against other popular counting and post‐processing methodologies, including those from satellite imagery, and compared these to the results from four colonies counted manually to evaluate accuracy and effort. We randomly subsampled four colonies to test the most efficient and accurate methods for density‐based counts, including at the colony edge, where population density is lower. Sub‐sampling quadrats (each 25 m²) together with DSM‐based counts offered the best compromise between accuracy and effort. Where high‐resolution drone imagery was available, accuracy was within 3.5% of manual reference counts. DSM methods were more accurate than other established methods, including estimation from satellite imagery, and are applicable to population studies across other taxa worldwide. Results and methods will be used to inform and develop a long‐term king penguin monitoring programme.
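The density‐based counts reduce to scaling mean quadrat density up to colony area. A hedged sketch of that arithmetic (illustrative numbers; the study subsamples edge and core separately because edge density is lower):

```python
# Sketch: scaling sub-sampled quadrat counts (e.g. 25 m^2 quadrats) to a
# whole-colony population estimate. All counts and areas are invented.
def colony_estimate(quadrat_counts, quadrat_area_m2, colony_area_m2):
    """Estimate total pairs as mean quadrat density times colony area."""
    mean_density = sum(quadrat_counts) / (len(quadrat_counts) * quadrat_area_m2)
    return mean_density * colony_area_m2

core = colony_estimate([41, 38, 44, 40], 25.0, 60000.0)  # dense interior
edge = colony_estimate([12, 9, 15, 10], 25.0, 8000.0)    # sparser margin
print(round(core + edge))  # stratified total: core + edge
```

Stratifying in this way avoids the upward bias that a single colony‐wide density would inherit from the dense core.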
A comparison of established and digital surface model (DSM)‐based methods to determine population estimates and densities for king penguin colonies, using fixed‐wing drone and satellite imagery. Remote Sensing in Ecology and Conservation (29 November 2024). doi:10.1002/rse2.424
Kaja Balazy, Dariusz Jakubas, Andrzej Kotarba, Katarzyna Wojczulanis‐Jakubas
Artificial light at night (ALAN) has global impacts on animals, often negative, yet its effects in polar regions remain largely underexplored. These regions experience prolonged darkness during the polar night, while human activity and artificial lighting are rapidly increasing. In this study, we analyzed a decade of citizen science data on light‐sensitive seabird occurrences in Longyearbyen, a High‐Arctic port settlement, to examine the impact of environmental factors including ALAN during the polar night. Our investigation incorporated remote sensing data on nighttime light levels and sea ice presence, and air temperature measurements from a local meteorological station. Our findings reveal that artificial light may impact seabird diversity in this region, with overall diversity decreasing alongside light intensity. However, the relationship between artificial light and seabird diversity was not uniformly negative; individual species exhibited varied responses. We also detected a correlation between artificial light and air temperature, emphasizing the complexity of environmental interactions. Notably, the piscivorous Black Guillemot (Cepphus grylle), the dominant species in Longyearbyen during the polar night, showed an increased contribution to the local seabird assemblage with higher light levels. In contrast, the zooplanktivorous Little Auk (Alle alle) exhibited a reduced contribution with higher light intensity and an increased presence with higher air temperatures. We hypothesize that these differing responses are closely tied to the species' distinct dietary habits, varying sensitivity to artificial light, and overall ecological flexibility, underscoring the need for further research. This study, which uniquely combines citizen science with remote sensing data, represents the first effort to systematically assess the effects of artificial lighting on seabirds during the polar night.
The findings underscore the potential importance of this issue for seabird conservation in polar regions.
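Assemblage diversity of the kind related to light intensity here is typically summarized with an index such as Shannon's H′. An illustrative sketch (species counts are invented, not from the study):

```python
# Sketch: Shannon diversity (H') of a seabird assemblage, the kind of
# index that can be regressed against night-light intensity.
from math import log

def shannon(counts):
    """Shannon diversity index from per-species counts (natural log)."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * log(p) for p in props)

dark_site = [30, 25, 20, 15, 10]  # relatively even assemblage
lit_site = [70, 5, 3, 2]          # dominated by one light-tolerant species
print(shannon(dark_site) > shannon(lit_site))  # expect True
```

A dominance shift like the Black Guillemot's increased contribution under brighter lights lowers H′ even when total abundance holds steady.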
Illuminating the Arctic: Unveiling seabird responses to artificial light during polar darkness through citizen science and remote sensing. Remote Sensing in Ecology and Conservation (24 November 2024). doi:10.1002/rse2.425
Ethan P. White, Lindsey Garner, Ben G. Weinstein, Henry Senyondo, Andrew Ortega, Ashley Steinkraus, Glenda M. Yenni, Peter Frederick, S. K. Morgan Ernest
Wildlife population monitoring over large geographic areas is increasingly feasible due to developments in aerial survey methods coupled with the use of computer vision models for identifying and classifying individual organisms. However, aerial surveys still occur infrequently, and there are often long delays between the acquisition of airborne imagery and its conversion into population monitoring data. Near real‐time monitoring is increasingly important for active management decisions and ecological forecasting. Accomplishing this over large scales requires a combination of airborne imagery, computer vision models to process imagery into information on individual organisms, and automated workflows to ensure that imagery is quickly processed into data following acquisition. Here we present our end‐to‐end workflow for conducting near real‐time monitoring of wading birds in the Everglades, Florida, USA. Imagery is acquired as frequently as weekly using uncrewed aircraft systems (aka drones), processed into orthomosaics (using Agisoft Metashape), converted into individual‐level species data using a RetinaNet‐50 object detector, post‐processed, archived, and presented on a web‐based visualization platform (using Shiny). The main components of the workflow are automated using Snakemake. The underlying computer vision model provides accurate object detection, species classification, and both total and species‐level counts for five out of six target species (White Ibis, Great Egret, Great Blue Heron, Wood Stork, and Roseate Spoonbill). The model performed poorly for Snowy Egrets due to the small number of labels and the difficulty of distinguishing them from White Ibis (the most abundant species). By automating the post‐survey processing, data on the populations of these species are available in near real‐time (<1 week from the date of the survey), providing information at the time scales needed for ecological forecasting and active management.
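A central post‐processing step in such a workflow is collapsing per‐image detections into species‐level counts. A minimal sketch; the dictionary schema and confidence threshold are assumptions for illustration, not the project's actual code:

```python
# Sketch: turning object-detector output (one dict per detected bird)
# into species-level counts, with a confidence-score filter.
from collections import Counter

def species_counts(detections, min_score=0.5):
    """Count detections per species above a confidence threshold."""
    return Counter(d["label"] for d in detections if d["score"] >= min_score)

dets = [
    {"label": "White Ibis", "score": 0.92},
    {"label": "White Ibis", "score": 0.81},
    {"label": "Great Egret", "score": 0.77},
    {"label": "Snowy Egret", "score": 0.42},  # below threshold, dropped
]
print(species_counts(dets))  # White Ibis: 2, Great Egret: 1
```

In a Snakemake‐automated pipeline, a step like this would run on every new orthomosaic's detections before archiving and visualization.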
Near real‐time monitoring of wading birds using uncrewed aircraft systems and computer vision. Remote Sensing in Ecology and Conservation (8 November 2024). doi:10.1002/rse2.421
Yuanhui Zhu, Shakthi B. Murugesan, Ivone K. Masara, Soe W. Myint, Joshua B. Fisher
Wildfires are increasing in risk and prevalence. The most destructive wildfires in decades in Australia occurred in 2019–2020. However, developing effective models of the likelihood of wildfire spread (susceptibility) and of pre‐fire vegetation conditions remains a challenge. The recent launch of NASA's ECOSTRESS presents an opportunity to monitor fire dynamics at a high resolution of 70 m by measuring ecosystem stress and drought conditions preceding wildfires. We incorporated ECOSTRESS data, vegetation indices, rainfall, and topographic data as independent variables and fire events as dependent variables into machine learning algorithms applied to the historic Australian wildfires of 2019–2020. With these data, we predicted over 90% of all wildfire occurrences 1 week ahead of the events. Our models characterized vegetation conditions with a 3‐week time lag preceding wildfire events in the fourth week and predicted the probability of wildfire occurrence in the subsequent (fifth) week. ECOSTRESS water use efficiency (WUE) consistently emerged as the leading factor in all models predicting wildfires. Results suggest that pre‐fire vegetation was affected by wildfires in areas with WUE above 2 g C kg−1 H₂O at the 95% probability level. Additionally, the ECOSTRESS evaporative stress index and topographic slope were identified as significant contributors in predicting wildfire susceptibility. These results indicate a significant potential for ECOSTRESS data to predict and analyze wildfires and emphasize the crucial role of drought conditions in wildfire events. The approaches and outcomes developed in this study can help policymakers, fire managers, and city planners assess, prepare for, manage, and mitigate future wildfires.
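The lagged design, in which conditions in earlier weeks predict fire occurrence in a later week, can be sketched as a feature‐construction step (values are illustrative; the real models combine many predictors per location):

```python
# Sketch: building time-lagged predictors, mirroring a design where
# three consecutive weeks of a variable (e.g. WUE) predict an outcome
# two weeks after the last lag. The weekly series is invented.
def lagged_features(series, n_lags=3, horizon=2):
    """Return (features, target) pairs: n_lags consecutive values
    predict the value `horizon` steps after the last lag."""
    rows = []
    for i in range(len(series) - n_lags - horizon + 1):
        x = series[i:i + n_lags]
        y = series[i + n_lags + horizon - 1]
        rows.append((x, y))
    return rows

weekly_wue = [1.1, 1.4, 1.9, 2.2, 2.6, 2.4, 2.0]  # hypothetical WUE values
for x, y in lagged_features(weekly_wue):
    print(x, "->", y)
```

Each (features, target) row would feed a classifier alongside rainfall, vegetation indices, and topography, with the target replaced by a fire/no‐fire label.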
Yuanhui Zhu, Shakthi B. Murugesan, Ivone K. Masara, Soe W. Myint, Joshua B. Fisher
Wildfires are increasing in risk and prevalence. The most destructive wildfires in decades in Australia occurred in 2019–2020. However, there is still a challenge in developing effective models to understand the likelihood of wildfire spread (susceptibility) and pre‐fire vegetation conditions. The recent launch of NASA's ECOSTRESS presents an opportunity to monitor fire dynamics at a high resolution of 70 m by measuring ecosystem stress and drought conditions preceding wildfires. We incorporated ECOSTRESS data, vegetation indices, rainfall, and topographic data as independent variables and fire events as dependent variables into machine learning algorithms applied to the historic Australian wildfires of 2019–2020. With these data, we predicted over 90% of all wildfire occurrences 1 week ahead of these wildfire events. Our models identified vegetation conditions with a 3‐week time lag before wildfire events in the fourth week and predicted the probability of wildfire occurrences in the subsequent week (fifth week). ECOSTRESS water use efficiency (WUE) consistently emerged as the leading factor in all models predicting wildfires. Results suggest that the pre‐fire vegetation was affected by wildfires in areas with WUE above 2 g C kg⁻¹ H₂O at the 95% probability level. Additionally, the ECOSTRESS evaporative stress index and topographic slope were identified as significant contributors in predicting wildfire susceptibility. These results indicate a significant potential for ECOSTRESS data to predict and analyze wildfires and emphasize the crucial role of drought conditions in wildfire events, as evident from ECOSTRESS data. The approaches developed in this study can help policymakers, fire managers, and city planners assess, manage, prepare for, and mitigate future wildfires.
Examining wildfire dynamics using ECOSTRESS data with machine learning approaches: the case of South‐Eastern Australia's black summer. Remote Sensing in Ecology and Conservation, doi:10.1002/rse2.422, published 2024-11-05.
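The leading predictor above, ECOSTRESS water use efficiency (WUE), is reported in g C kg⁻¹ H₂O. As a rough illustration only (not the paper's code), ecosystem WUE is conventionally defined as gross primary production divided by evapotranspiration; the function name and the sample numbers below are ours:

```python
# Illustrative sketch, not from the study: ecosystem water use efficiency
# (WUE) defined as GPP / ET, yielding g C per kg H2O.

def water_use_efficiency(gpp_g_c_m2_day: float, et_kg_h2o_m2_day: float) -> float:
    """Return WUE in g C per kg H2O; ET must be positive."""
    if et_kg_h2o_m2_day <= 0:
        raise ValueError("ET must be positive")
    return gpp_g_c_m2_day / et_kg_h2o_m2_day

# A hypothetical pixel with GPP = 6 g C m^-2 day^-1 and ET = 2.5 kg H2O m^-2 day^-1
# gives WUE = 2.4 g C / kg H2O, above the 2 g C kg^-1 threshold reported above.
wue = water_use_efficiency(6.0, 2.5)
```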
Florence Erbs, Mike van der Schaar, Miriam Marmontel, Marina Gaona, Emiliano Ramalho, Michel André
For many species at risk, monitoring challenges related to low visual detectability and elusive behavior limit the use of traditional visual surveys to collect critical information, hindering the development of sound conservation strategies. Passive acoustics can cost‐effectively acquire terrestrial and underwater long‐term data. However, to extract valuable information from large datasets, automatic methods need to be developed, tested and applied. Combining passive acoustics with deep learning models, we developed a method to monitor the secretive Amazonian manatee over two consecutive flooded seasons in the Brazilian Amazon floodplains. Subsequently, we investigated vocal behavior parameters based on vocalization frequencies and temporal characteristics in the context of habitat use. A Convolutional Neural Network model successfully detected Amazonian manatee vocalizations with a 0.98 average precision on training data. Similar classification performance in terms of precision (range: 0.83–1.00) and recall (range: 0.97–1.00) was achieved for each year. Using this model, we evaluated manatee acoustic presence over a total of 226 days comprising recording periods in 2021 and 2022. Manatee vocalizations were consistently detected during both years, reaching 94% daily temporal occurrence in 2021, and up to 11 h a day with detections during peak presence. Manatee calls were characterized by a high emphasized frequency and a high repetition rate, being mostly produced in rapid sequences. This vocal behavior strongly indicates an exchange between females and their calves. By combining passive acoustic monitoring with deep learning models, which extends monitoring duration and increases species detectability, we demonstrated that this approach can identify manatee core habitats according to seasonality. 
The combined method represents a reliable, cost‐effective, scalable ecological monitoring technique that can be integrated into long‐term, standardized survey protocols of aquatic species. It can considerably benefit the monitoring of inaccessible regions, such as the Amazonian freshwater systems, which are facing immediate threats from increased hydropower construction.
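The per-year precision and recall ranges reported above follow the standard definitions for evaluating a detector against manual annotations. A minimal sketch (the function name and example counts are illustrative, not the authors' code):

```python
# Illustrative sketch: standard precision/recall from detection counts,
# as used to score an acoustic detector against annotated recordings.

def precision_recall(true_pos: int, false_pos: int, false_neg: int) -> tuple:
    """Return (precision, recall); zero denominators yield 0.0."""
    precision = true_pos / (true_pos + false_pos) if (true_pos + false_pos) else 0.0
    recall = true_pos / (true_pos + false_neg) if (true_pos + false_neg) else 0.0
    return precision, recall

# e.g. 97 correct detections, 3 false alarms, 0 missed calls:
p, r = precision_recall(97, 3, 0)  # precision 0.97, recall 1.0
```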
Amazonian manatee critical habitat revealed by artificial intelligence‐based passive acoustic techniques. Remote Sensing in Ecology and Conservation, doi:10.1002/rse2.418, published 2024-10-31.
Juliette Picard, Maïalicah M. Nungi‐Pambu Dembi, Nicolas Barbier, Guillaume Cornu, Pierre Couteron, Eric Forni, Gwili Gibbon, Felix Lim, Pierre Ploton, Robin Pouteau, Paul Tresson, Tom van Loon, Gaëlle Viennois, Maxime Réjou‐Méchain
Tropical moist forests are not the homogeneous green carpet often illustrated in maps or considered by global models. They harbour a complex mixture of forest types organized at different spatial scales that can now be more accurately mapped thanks to remote sensing products and artificial intelligence. In this study, we built a large‐scale vegetation map of the North of Congo and assessed the environmental drivers of the main forest types, their forest structure, their floristic and functional compositions and their faunistic composition. To build the map, we used Sentinel‐2 satellite images and recent deep learning architectures. We tested the effect of topographically determined water availability on vegetation type distribution by linking the map with a water drainage depth proxy (HAND, height above the nearest drainage index). We also described vegetation type structure and composition (floristic, functional and associated fauna) by linking the map with data from large inventories and derived from satellite images. We found that water drainage depth is a major driver of forest type distribution and that the different forest types are characterized by different structure, composition and functions, bringing new insights about their origins and successional dynamics. We discuss not only the crucial role of soil–water depth, but also the importance of consistently reproducing such maps through time to develop an accurate monitoring of tropical forest types and functions, and we provide insights on peculiar forest types (Marantaceae forests and monodominant Gilbertiodendron forests) on which future studies should focus more. Under the current context of global change, expected to trigger major forest structural and compositional changes in the tropics, an appropriate monitoring strategy of the spatio‐temporal dynamics of forest types and their associated floristic and faunistic composition would considerably help anticipate detrimental shifts.
Combining satellite and field data reveals Congo's forest types structure, functioning and composition. Remote Sensing in Ecology and Conservation, doi:10.1002/rse2.419, published 2024-10-12.
Sarah Smith‐Tripp, Nicholas C. Coops, Christopher Mulverhill, Joanne C. White, Sarah Gergel
Western North America has seen a recent dramatic increase in large and often high‐severity wildfires. After forest fire, understanding patterns of structural recovery is important, as recovery patterns impact critical ecosystem services. Continuous forest monitoring provided by satellite observations is particularly beneficial to capture the pivotal post‐fire period when forest recovery begins. However, it is challenging to optimize optical satellite imagery to both interpolate current and extrapolate future forest structure and composition. We identified a need to understand how early spectral dynamics (5 years post‐fire) inform patterns of structural recovery after fire disturbance. To create these structural patterns, we collected metrics of forest structure using high‐density Remotely Piloted Aircraft (RPAS) lidar (light detection and ranging). We employed a space‐for‐time substitution in the highly fire‐disturbed forests of interior British Columbia. In this region, we collected RPAS lidar and corresponding field plot data 5, 8, 11, 12, and 16 years post‐fire to predict structural attributes relevant to management, including the percent bare ground, the proportion of coniferous trees, stem density, and basal area. We compared forest structural attributes with unique early spectral responses, or trajectories, derived from Landsat time series data 5 years after fire. A total of eight unique spectral recovery trajectories were identified from spectral responses of seven vegetation indices (NBR, NDMI, NDVI, TCA, TCB, TCG, and TCW) that described five distinct patterns of structural recovery captured with RPAS lidar. Two structural patterns covered more than 80% of the study area. Both patterns had strong coniferous regrowth, but one had a higher basal area with more bare ground, while the other had a high stem density but a low basal area and a higher deciduous proportion. 
Our approach highlights the ability to use early spectral responses to capture unique spectral trajectories and their associated distinct structural recovery patterns.
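Several of the vegetation indices named above (NBR, NDVI, NDMI) are normalized band differences. A minimal sketch under the usual Landsat 8/9 band convention (band choices and function names are our assumption, not the study's code; the tasselled-cap indices TCA/TCB/TCG/TCW require separate coefficient sets and are omitted):

```python
# Illustrative sketch: normalized-difference vegetation indices from
# surface-reflectance bands, following the common Landsat 8/9 convention
# (NIR = band 5, red = band 4, SWIR1 = band 6, SWIR2 = band 7).

def normalized_difference(a: float, b: float) -> float:
    return (a - b) / (a + b)

def ndvi(nir: float, red: float) -> float:    # greenness
    return normalized_difference(nir, red)

def nbr(nir: float, swir2: float) -> float:   # burn severity
    return normalized_difference(nir, swir2)

def ndmi(nir: float, swir1: float) -> float:  # canopy moisture
    return normalized_difference(nir, swir1)

# e.g. vigorous regrowth: high NIR (0.40), low red (0.05) -> NDVI ~ 0.78
value = ndvi(0.40, 0.05)
```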
Early spectral dynamics are indicative of distinct growth patterns in post‐wildfire forests. Remote Sensing in Ecology and Conservation, doi:10.1002/rse2.420, published 2024-09-18.