
Latest Publications in Remote Sensing in Ecology and Conservation

Assessing group size and the demographic composition of a canopy-dwelling primate, the northern muriqui (Brachyteles hypoxanthus), using arboreal camera trapping and genetic tagging
IF 5.5, CAS Tier 2 (Environmental Science & Ecology), Q1 ECOLOGY. Pub Date: 2025-10-17. DOI: 10.1002/rse2.70035
Mariane C. Kaizer, Naiara G. Sales, Thiago H. G. Alvim, Karen B. Strier, Fabiano R. de Melo, Jean P. Boubli, Robert J. Young, Allan D. McDevitt
Obtaining accurate population measures of endangered species is critical for effective conservation and management actions and for evaluating their success over time. However, determining the population size and demographic composition of most canopy forest-dwelling species has proven to be challenging. Here, we apply two non-invasive biomonitoring methods, arboreal camera trapping and genetic tagging of fecal samples, to estimate the population size of a critically endangered primate, the northern muriqui (Brachyteles hypoxanthus), in the Caparaó National Park, Brazil. When comparing group sizes between camera trapping and genetic tagging, the genetic tagging survey estimated fewer individuals for one of the muriqui groups studied but showed slightly higher population size estimates for the other group. In terms of the cost-efficiency of both methods, arboreal camera trapping had high initial costs but was more cost-effective in the long term. Genetic tagging, on the other hand, did not require expensive equipment for data collection but had higher associated expenses for laboratory consumables and data processing. We recommend the use of both methods for northern muriqui monitoring and provide suggestions for improving the implementation of these non-invasive methods for future routine monitoring. Our findings also highlight the potential of arboreal camera trapping and genetic tagging for other arboreal mammals in tropical forests.
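The abstract does not specify the estimator behind the genetic-tagging group sizes, so the sketch below is only a generic illustration of how fecal-DNA genotype data are often turned into an abundance estimate: Chapman's bias-corrected Lincoln-Petersen estimator applied to two hypothetical sampling sessions (all counts are invented and are not from the study).

```python
# Illustrative two-session capture-recapture (Chapman) estimate of group size
# from genetic tagging data. Hypothetical counts; the study's actual estimator
# and capture histories are not given in the abstract.

def chapman_estimate(n1: int, n2: int, m: int) -> tuple[float, float]:
    """Chapman's bias-corrected Lincoln-Petersen estimator and its variance.

    n1: unique genotypes detected in session 1
    n2: unique genotypes detected in session 2
    m:  genotypes detected in both sessions (recaptures)
    """
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
    return n_hat, var

if __name__ == "__main__":
    n_hat, var = chapman_estimate(n1=28, n2=31, m=22)  # hypothetical fecal-DNA sessions
    se = var ** 0.5
    print(f"Estimated group size: {n_hat:.1f} +/- {1.96 * se:.1f} (95% CI half-width)")
```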
Citations: 0
A UAV-based deep learning pipeline for intertidal macrobenthos monitoring: Behavioral and age classification in Tachypleus tridentatus
IF 5.5, CAS Tier 2 (Environmental Science & Ecology), Q1 ECOLOGY. Pub Date: 2025-10-12. DOI: 10.1002/rse2.70036
Xiaohai Chen, Yuyuan Bao, Ziwei Ying, Mujiao Xie, Ting Li, Jixing Zou, Jun Shi, Xiaoyong Xie
Intertidal macrobenthos are vital bioindicators of coastal ecosystem health due to their ecological roles, limited mobility, and sensitivity to environmental disturbances. However, traditional field-based monitoring methods are time-consuming, spatially restricted, and unsuitable for large-scale ecological surveillance. Integrating unmanned aerial vehicles (UAVs) with deep learning offers a promising alternative for high-resolution, cost-effective monitoring. Yet, species-specific object detection frameworks for mobile macrobenthic fauna remain underdeveloped. Tachypleus tridentatus, an endangered "living fossil" with over 430 million years of evolutionary history, serves as a flagship species for intertidal conservation due to its ecological significance and biomedical value. This study develops a customized deep learning pipeline for monitoring T. tridentatus, combining UAV-based image acquisition, automated detection, and ecological trait inference. We constructed the first UAV-derived dataset of juvenile T. tridentatus (n = 761) and implemented a convolutional autoencoder for unsupervised behavioral classification, achieving 96% accuracy in distinguishing buried from exposed individuals. A YOLO-based detection model was optimized using lightweight pruning and a high-low frequency fusion module (HLFM), improving detection accuracy (mAP@50 increased by 1.74%) and computational efficiency. Additionally, we established robust regression models linking crawling trace width to prosomal width (R² = 0.99) and prosomal width to instar stage (R² = 0.91). The inferred instar stages showed no significant deviation across datasets, validating their use as indicators of age structure. By bridging species-level detection with population-level ecological inference, this study provides a scalable, field-deployable framework for monitoring T. tridentatus and other intertidal macrobenthic taxa. The approach supports data-driven conservation strategies and enhances our capacity to assess the status of endangered coastal species in complex intertidal environments.
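The trait-inference step described above chains two regressions (crawling trace width to prosomal width, then prosomal width to instar stage). A minimal sketch of that workflow on synthetic data is given below; the coefficients and the log-linear form of the second step are invented for illustration and only mirror the chaining, not the study's fitted models.

```python
# Minimal sketch of the two regression steps described above, on synthetic data:
# crawling trace width -> prosomal width, and prosomal width -> instar stage.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data (units: cm for widths).
prosomal_width = rng.uniform(1.0, 8.0, 100)
trace_width = 1.05 * prosomal_width + rng.normal(0.0, 0.05, 100)
instar = 2.0 + 3.5 * np.log(prosomal_width) + rng.normal(0.0, 0.2, 100)

# Step 1: predict prosomal width from UAV-measured crawling trace width.
b1, a1 = np.polyfit(trace_width, prosomal_width, 1)

# Step 2: predict instar stage from prosomal width (log-linear here for illustration).
b2, a2 = np.polyfit(np.log(prosomal_width), instar, 1)

def instar_from_trace(trace_cm: float) -> float:
    """Chain the two fitted models: trace width -> prosomal width -> instar stage."""
    pw = a1 + b1 * trace_cm
    return a2 + b2 * np.log(pw)

print(f"Trace width 4.2 cm -> estimated instar ~{instar_from_trace(4.2):.1f}")
```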
Citations: 0
Breaking down seagrass fragmentation in a marine heatwave impacted World Heritage Area
IF 5.5, CAS Tier 2 (Environmental Science & Ecology), Q1 ECOLOGY. Pub Date: 2025-10-05. DOI: 10.1002/rse2.70032
Michael D. Taylor, Simone Strydom, Matthew W. Fraser, Ana M. M. Sequeira, Gary A. Kendrick
Marine heatwaves, and other extreme climatic events, are driving mass mortality of habitat-forming species and substantial ecological change worldwide. However, habitat fragmentation is rarely considered despite its role in structuring seascapes and its potential to exacerbate the negative impacts of habitat loss. Here, we quantify fragmentation of globally significant seagrass meadows within the Shark Bay World Heritage Area before and after an unprecedented marine heatwave impacting the Western Australian coastline over the austral summer of 2010/11. We use a spatial pattern index to quantify seagrass fragmentation from satellite-derived habitat maps (2002, 2010, 2014 and 2016), assess potential predictors of fragmentation and investigate seascape dynamics defined by relationships between seagrass fragmentation and cover change. Our spatiotemporal analysis illustrates widespread fragmentation of seagrass following the marine heatwave, contributing to a dramatic alteration of seascape structure across the World Heritage Area. Fragmentation immediately following the marine heatwave coincided with widespread seagrass loss and was best explained by interactions between a heat stress metric (i.e. degree heating weeks) and depth. Based on the relationship between fragmentation and seagrass cover change, we revealed that near-ubiquitous fragmentation from 2014 to 2016 represents a mixture of long-term seagrass degradation and evidence of early, patchy recovery. Fragmentation effects are expected to compound the ecological impacts of seagrass mortality following the marine heatwave and prolong recovery. As sea temperatures and the threat of marine heatwaves continue to rise globally, our results highlight the importance of considering fragmentation effects alongside the negative impacts of habitat loss. Our seascape dynamic framework provides a novel approach to define the response of habitat-forming species to disturbances, including marine heatwaves, that integrates the processes of fragmentation and cover change. This framework provides the opportunity to consider these important processes across a range of threatened ecosystems and identify areas of vulnerability, stability and recovery.
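As a rough illustration of how patch-level fragmentation can be quantified from a classified habitat map, the sketch below labels contiguous seagrass patches in a binary raster and reports simple patch metrics. It is a generic example with toy data, not the spatial pattern index used in the study.

```python
# Simple patch-based fragmentation summary for a binary seagrass raster
# (1 = seagrass, 0 = bare). Generic landscape-metric sketch only.
import numpy as np
from scipy import ndimage

def fragmentation_summary(seagrass: np.ndarray, cell_area_m2: float = 4.0) -> dict:
    """Return patch count, mean patch area and patch density for one map."""
    labels, n_patches = ndimage.label(seagrass)           # 4-connectivity by default
    patch_sizes = np.bincount(labels.ravel())[1:]         # pixels per patch (skip background)
    total_area = seagrass.sum() * cell_area_m2
    return {
        "n_patches": int(n_patches),
        "mean_patch_area_m2": float(np.mean(patch_sizes) * cell_area_m2) if n_patches else 0.0,
        "patch_density_per_ha": n_patches / (total_area / 10_000) if total_area else float("nan"),
    }

# Toy before/after maps: similar cover can be split into many more patches.
before = np.ones((50, 50), dtype=int)
after = before.copy()
after[::3, :] = 0   # strip out rows to mimic heatwave-driven die-off and break-up
print(fragmentation_summary(before))
print(fragmentation_summary(after))
```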
Citations: 0
Spectral characterization of plant diversity in a biodiversity-enriched oil palm plantation
IF 5.5, CAS Tier 2 (Environmental Science & Ecology), Q1 ECOLOGY. Pub Date: 2025-09-29. DOI: 10.1002/rse2.70034
Vannesa Montoya‐Sánchez, Anna K. Schweiger, Michael Schlund, Gustavo Brant Paterno, Stefan Erasmi, Holger Kreft, Dirk Hölscher, Fabian Brambach, Bambang Irawan, Leti Sundawati, Delphine Clara Zemp
Assessing plant diversity using remote sensing, including airborne imaging spectroscopy, shows promise for large‐scale biodiversity monitoring in landscape restoration and conservation. Enriching plantations with native trees is a key restoration strategy to enhance biodiversity and ecosystem functions in agricultural lands. In this study, we tested how well imaging spectroscopy characterizes plant diversity in 37 experimental plots of varying sizes and planted diversity levels in a biodiversity‐enriched oil palm plantation in Sumatra, Indonesia. Six years after establishing the plots, we acquired airborne imaging spectroscopy data comprising 160 spectral bands (400–1000 nm, at ~3.7 nm bandwidth) at 0.3 m spatial resolution. We calculated spectral diversity as the variance among image pixels and partitioned spectral diversity into alpha and beta diversity components. After controlling for differences in sampling area through rarefaction, we found no significant relationship between spectral and plant alpha diversity. Further, the relationships between the local contribution of spectral beta diversity and plant beta diversity revealed no significant trends. Spectral variability within plots was substantially higher than among plots (spectral alpha diversity ~82%–87%, spectral beta diversity ~11%–18%). These discrepancies are likely due to the structural dominance of oil palm crowns, which absorbed most of the light, while most of the plant diversity occurring below the oil palm canopy was not detectable by airborne spectroscopy. Our study highlights that remote sensing of plant diversity in ecosystems with strong vertical stratification and high understory diversity, such as agroforests, would benefit from combining data from passive with data from active sensors, such as LiDAR, to capture structural diversity.
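The core calculation described above, spectral diversity as variance among pixels partitioned into within-plot (alpha) and among-plot (beta) components, can be sketched in a few lines. The example below uses synthetic reflectance values with 160 bands; the plot count, pixel counts and noise levels are illustrative assumptions, not the study's data.

```python
# Sketch of spectral diversity as pixel variance, partitioned into within-plot
# (alpha) and among-plot (beta) sums of squares (synthetic pixels, 160 bands).
import numpy as np

rng = np.random.default_rng(1)
n_plots, pixels_per_plot, n_bands = 37, 200, 160
# Synthetic reflectance: shared background + small plot-specific offsets.
plot_means = rng.normal(0.3, 0.02, size=(n_plots, 1, n_bands))
pixels = plot_means + rng.normal(0.0, 0.05, size=(n_plots, pixels_per_plot, n_bands))

X = pixels.reshape(-1, n_bands)                      # all pixels, all plots
global_centroid = X.mean(axis=0)
ss_total = ((X - global_centroid) ** 2).sum()

plot_centroids = pixels.mean(axis=1)                 # one centroid per plot
ss_alpha = ((pixels - plot_centroids[:, None, :]) ** 2).sum()                 # within plots
ss_beta = (pixels_per_plot * (plot_centroids - global_centroid) ** 2).sum()   # among plots

# ss_alpha + ss_beta equals ss_total (up to floating point), so the two shares
# sum to 100%, echoing the ~82-87% alpha / ~11-18% beta split reported above.
print(f"alpha share: {ss_alpha / ss_total:.1%}, beta share: {ss_beta / ss_total:.1%}")
```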
Citations: 0
UAVs unveil the role of small scale vegetation structure on wader nest survival
IF 5.5, CAS Tier 2 (Environmental Science & Ecology), Q1 ECOLOGY. Pub Date: 2025-09-27. DOI: 10.1002/rse2.70033
Miguel Silva‐Monteiro, Miguel Villoslada, Thaisa Bergamo, Triin Kaasiku, Camilo Carneiro, David Kleijn
Several ground-nesting wader species rely on Baltic coastal meadows for breeding. A drastic reduction in the area of this habitat at the end of the 20th century has been followed by habitat restoration activities over the last 20 years. However, wader populations are not responding as hoped to the current conservation effort. Therefore, identifying which grassland characteristics are essential for waders to select their nesting location and which ones enhance their clutch survival probability is vital to implementing efficient conservation plans. However, many vegetation structural characteristics, such as sward height or heterogeneity, are logistically complex to measure with traditional methods over relatively large areas, especially at the fine resolution required. Here, we assessed several sward characteristics together with other key landscape features by combining very high-resolution images from unmanned aerial vehicle (UAV) surveys with nest survival monitoring in five key Estonian coastal grasslands for waders. We found that the main four wader species, Northern Lapwing (Vanellus vanellus), Common Redshank (Tringa totanus), Common Ringed Plover (Charadrius hiaticula) and the Baltic Dunlin (Calidris alpina schinzii), do not significantly differ in their nest-site selection in terms of vegetation height, growth rates, or sward heterogeneity. Yet, we found that vegetation sward height and heterogeneity surrounding the nest sites within a 2-meter buffer increased the daily nest survival probability from 0.883 to 0.979 along the gradients observed. Additionally, the distance between the nest location and flooded areas (≥20 m²) was negatively correlated with daily nest survival, and all variables affected the wader community similarly. Our results signal the need for a higher diversity of sward structures and the importance of constantly flooded areas in Estonian coastal meadows. Moreover, our study highlights the importance of integrating UAV remote sensing techniques within the animal conservation research field to unveil ecological patterns that may remain hidden using more traditional methods.
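To put the reported change in daily nest survival (0.883 to 0.979) in context, the short calculation below compounds daily survival over an assumed 25-day exposure period; the exposure length is an assumption for illustration, not a value from the study.

```python
# Back-of-the-envelope check of what the reported change in daily nest survival
# implies for whole-incubation survival, assuming a 25-day exposure period.
low_dsr, high_dsr = 0.883, 0.979
exposure_days = 25  # assumed incubation/exposure length (illustrative)

for dsr in (low_dsr, high_dsr):
    print(f"DSR {dsr:.3f} -> P(nest survives {exposure_days} d) = {dsr ** exposure_days:.2f}")
# DSR 0.883 -> ~0.04; DSR 0.979 -> ~0.59: small daily gains from taller, more
# heterogeneous swards compound into a large difference in nest success.
```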
Citations: 0
HOWLish: a CNN for automated wolf howl detection
IF 5.5, CAS Tier 2 (Environmental Science & Ecology), Q1 ECOLOGY. Pub Date: 2025-09-23. DOI: 10.1002/rse2.70024
Rafael Campos, Miha Krofel, Helena Rio‐Maior, Francesco Renna
Automated sound-event detection is crucial for large-scale passive acoustic monitoring of wildlife, but the availability of ready-to-use tools is narrow across taxa. Machine learning is currently the state-of-the-art framework for developing sound-event detection tools tailored to specific wildlife calls. Gray wolves (Canis lupus), a species with intricate management necessities, howl spontaneously for long-distance intra- and inter-pack communication, which makes them a prime target for passive acoustic monitoring. Yet, there is currently no pre-trained, open-access tool that allows reliable automated detection of wolf howls in recorded soundscapes. We collected 50 137 h of soundscape data, in which we manually labeled 841 unique howling events. We used this dataset to fine-tune VGGish, a convolutional neural network trained for audio classification, effectively retraining it for wolf howl detection. HOWLish correctly classified 77% of the wolf howling examples in our test set, with a false positive rate of 1.74%; still, precision was low (0.006) given the extreme class imbalance (7124:1). During field tests, HOWLish retrieved 81.3% of the observed howling events while offering a 15-fold reduction in operator time compared to fully manual detection. This work establishes the baseline for open-access automated wolf howl detection. HOWLish facilitates remote sensing of wild wolf populations, offering new opportunities in non-invasive large-scale monitoring and communication research of wolves. The knowledge gap we addressed here spans many soniferous taxa, to which our approach also applies.
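The reported precision of 0.006 follows directly from the reported recall, false-positive rate and class imbalance; the short check below reproduces it from the figures quoted in the abstract.

```python
# Sanity check that the reported precision (~0.006) follows from the reported
# recall (77%), false-positive rate (1.74%) and class imbalance (7124 negatives
# per positive). Pure arithmetic on the figures quoted in the abstract.
recall, fpr, neg_per_pos = 0.77, 0.0174, 7124

tp = recall * 1.0                 # true positives per positive example
fp = fpr * neg_per_pos            # false positives per positive example
precision = tp / (tp + fp)
print(f"Implied precision: {precision:.4f}")   # ~0.0062, matching the reported 0.006
```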
Citations: 0
DuckNet: an open-source deep learning tool for waterfowl species identification in UAV imagery
IF 5.5, CAS Tier 2 (Environmental Science & Ecology), Q1 ECOLOGY. Pub Date: 2025-09-18. DOI: 10.1002/rse2.70028
Zack Loken, Kevin M. Ringelman, Anne Mini, J. Dale James, Mike Mitchell
Understanding how waterfowl respond to habitat restoration and management activities is crucial for evaluating and refining conservation delivery programs. However, site‐specific waterfowl monitoring is challenging, especially in heavily forested systems such as the Mississippi Alluvial Valley (MAV)—a primary wintering region for waterfowl in North America. We hypothesized that using uncrewed aerial vehicles (UAVs) coupled with deep learning‐based methods for object detection would provide an efficient and effective means for surveying non‐breeding waterfowl on difficult‐to‐access restored wetland sites. Accordingly, during the winters of 2021 and 2022, we surveyed wetland restoration easements in the MAV using a UAV equipped with a dual thermal‐RGB high‐resolution sensor to collect 2360 digital images of non‐breeding waterfowl. We then developed, optimized, and trained a RetinaNet object detection model with a ResNet‐50 backbone to locate and identify seven species of waterfowl drakes, waterfowl hens, and one species of waterbird in the UAV imagery. The final model achieved an average precision and average recall of 88.1% (class ranges from 68.8 to 99.6%) and 89.0% (class ranges from 70.0 to 100%), respectively, at an intersection‐over‐union of 0.5. This study successfully surveys non‐breeding waterfowl in structurally complex and difficult‐to‐access habitats using UAV and, furthermore, provides a functional, open‐source, deep learning‐based object detection framework (DuckNet) for automated detection of waterfowl in UAV imagery. DuckNet provides a user‐friendly interface for running inference on custom images using the model developed here and, additionally, allows users to fine‐tune the model on custom datasets to expand the number of species classes the model can detect. This framework provides managers with an efficient and cost‐effective means to count waterfowl on project sites, thereby improving their capacity to evaluate waterfowl response to wetland restoration efforts.
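The evaluation above scores detections at an intersection-over-union (IoU) threshold of 0.5. The sketch below shows that matching criterion on invented boxes; it is a simplified illustration of the criterion only, not the study's full average-precision computation.

```python
# Minimal sketch of the evaluation criterion used above: a detection counts as a
# true positive when its intersection-over-union (IoU) with a ground-truth box
# is at least 0.5. Boxes are [x1, y1, x2, y2]; the example boxes are invented.
def iou(box_a, box_b) -> float:
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

predictions = [[10, 10, 50, 60], [200, 120, 240, 170]]
ground_truth = [[12, 14, 52, 64], [300, 300, 340, 350]]

# Count predictions matched to any ground-truth box at IoU >= 0.5 (toy matching).
matches = sum(any(iou(p, g) >= 0.5 for g in ground_truth) for p in predictions)
print(f"TP: {matches}, FP: {len(predictions) - matches}, FN: {len(ground_truth) - matches}")
```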
Citations: 0
Investigating boreal forest successional stages in Alaska and Northwest Canada using UAV-LiDAR and RGB and a community detection network
IF 5.5, CAS Tier 2 (Environmental Science & Ecology), Q1 ECOLOGY. Pub Date: 2025-09-09. DOI: 10.1002/rse2.70029
Léa Enguehard, Birgit Heim, Ulrike Herzschuh, Viktor Dinkel, Glenn Juday, Santosh Panda, Nicola Falco, Jacob Schladebach, Jakob Broers, Stefan Kruse
Boreal forests are a key component of the global carbon cycle, forming North America's most extensive biome. Different successional stages in boreal forests have varying levels of ecological value and biodiversity, which in turn affect their functions. A knowledge gap remains concerning the present successional stages, their geographic patterns and possible successions. This study develops a novel application of UAV-LiDAR and Red Green Blue (RGB) data and network analysis to enhance our understanding of boreal forest succession. Between 2022 and 2024, we collected UAV-LiDAR and RGB data from 48 forested sites in Alaska and Northwest Canada to (i) identify present successional stages and (ii) deepen our understanding of successional trajectories. We first applied UAV-derived spectral and structural tree attributes to classify individual trees into plant functional types representative of boreal forest succession, namely, evergreen and deciduous. Second, we built a forest-patch network to characterize successional stages and their interactions and assessed future stage transitions. Finally, we applied a simplified forward model to predict future dynamics and highlight different successional trajectories. Our results indicate that tree height and spectral variables are the most influential predictors of plant functional type in random forest algorithms, and high overall accuracies were attained. The network-based community detection algorithm reveals five interconnected successional stages that could be interpreted as ranging from early to late successional, plus a disturbed stage. We find that disturbed sites are mainly located in Interior and Southcentral Alaska, while late successional sites are predominant in the southern Canadian sites. Transitional stages are mainly located near the tundra-taiga boundary. These findings highlight the critical role of disturbances, such as fire or insect outbreaks, in shaping forest succession in Alaska and Northwest Canada.
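The community-detection step described above can be sketched as follows: forest patches become nodes, edges connect structurally similar patches, and a modularity-based algorithm groups them into candidate successional stages. The features, similarity threshold and algorithm below are illustrative assumptions, not the study's actual network construction.

```python
# Toy forest-patch network and community detection: nodes are patches, edges link
# patches with similar (synthetic) structural/compositional features.
import networkx as nx
import numpy as np
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(42)
# Synthetic patch features, e.g. (mean tree height in m, deciduous fraction).
features = np.vstack([
    rng.normal([3.0, 0.8], 0.3, size=(15, 2)),    # early-successional-like patches
    rng.normal([12.0, 0.2], 0.8, size=(15, 2)),   # late-successional-like patches
])

G = nx.Graph()
G.add_nodes_from(range(len(features)))
for i in range(len(features)):
    for j in range(i + 1, len(features)):
        if np.linalg.norm(features[i] - features[j]) < 2.0:   # similarity threshold
            G.add_edge(i, j)

communities = greedy_modularity_communities(G)
print(f"{len(communities)} candidate successional stages, sizes:",
      [len(c) for c in communities])
```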
Citations: 0
Camera traps and deep learning enable efficient large-scale density estimation of wildlife in temperate forest ecosystems
IF 5.5, CAS Tier 2 (Environmental Science & Ecology), Q1 ECOLOGY. Pub Date: 2025-09-09. DOI: 10.1002/rse2.70030
Maik Henrich, Christian Fiderer, Alisa Klamm, Anja Schneider, Axel Ballmann, Jürgen Stein, Raffael Kratzer, Rudolf Reiner, Sina Greiner, Sönke Twietmeyer, Tobias Rönitz, Volker Spicher, Simon Chamaillé‐Jammes, Vincent Miele, Gaspard Dussert, Marco Heurich
Automated detectors such as camera traps allow the efficient collection of large amounts of data for the monitoring of animal populations, but data processing and classification are a major bottleneck. Deep learning algorithms have gained increasing attention in this context, as they have the potential to dramatically decrease the time and effort required to obtain population density estimates. However, the robustness of such an approach has not yet been evaluated across a wide range of species and study areas. This study evaluated the application of DeepFaune, an open-source deep learning algorithm for the classification of European animal species, and camera trap distance sampling (CTDS) to a year-round dataset containing 895,019 manually classified photos from 10 protected areas across Germany. For all wild animal species and higher taxonomic groups on which DeepFaune was trained, the algorithm achieved an overall accuracy of 90%. The 95% confidence interval (CI) of the difference between the CTDS estimates based on manual and automated image classification contained zero for all species and seasons with a minimum sample size of 20 independent observations per study area, except for two. Meta-regression revealed an average difference between the classification methods of −0.005 (95% CI: −0.205 to 0.196) animals/km². Classification success correlated with the divergence of the population density estimates, but false negative and false positive detections had complex effects on the density estimates via different CTDS parameters. Therefore, metrics of classification performance alone are insufficient to assess the effect of deep learning classifiers on the population density estimation process, which should instead be followed through entirely for proper validation. In general, however, our results demonstrate that readily available deep learning algorithms can be used in largely unsupervised workflows for estimating population densities from camera trap data.
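For readers unfamiliar with CTDS, the sketch below shows a simplified density calculation in the spirit of the method (detections divided by the effectively surveyed area-time). All input values are hypothetical, and the study's actual effort, truncation distance and detection probabilities are not reported in the abstract.

```python
# Simplified camera-trap distance-sampling (CTDS) point estimate: detections
# divided by the effectively surveyed area-time. All numbers are hypothetical.
import math

n_detections = 420            # animal observations at snapshot moments (hypothetical)
theta = math.radians(42)      # camera field-of-view angle (hypothetical)
w = 12.0                      # truncation distance in metres (hypothetical)
p_detect = 0.35               # estimated detection probability within w (hypothetical)
effort_snapshots = 1.5e7      # total snapshot moments across all cameras (hypothetical)

# Area covered by one camera at one snapshot moment, converted to km^2.
covered_area_km2 = (theta / (2 * math.pi)) * math.pi * w**2 / 1e6
density_per_km2 = n_detections / (p_detect * effort_snapshots * covered_area_km2)
print(f"Estimated density: {density_per_km2:.2f} animals/km^2")
```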
Citations: 0
Inferring camera trap detection zones for rare species using species- and camera-specific traits: a meta-level analysis
IF 5.5, CAS Tier 2 (Environmental Science & Ecology), Q1 ECOLOGY. Pub Date: 2025-09-05. DOI: 10.1002/rse2.70027
Johannes N. Wiegers, Kathryn E. Barry, Marijke van Kuijk
Camera trapping is a vital tool for wildlife monitoring. Accurately estimating a camera's detection zone, the area where animals are detected, is essential, particularly for calculating population densities of unmarked species. However, obtaining enough detection events to estimate detection zones accurately remains difficult, particularly for rare species. Given that detection zones are influenced by species‐ and camera‐specific traits, it may be possible to infer detection zones from these traits when data are scarce. We conducted a meta‐level analysis to assess how the number of detection events, species traits and site‐specific variables influence the estimation of the effective camera trap detection distance and angle. We reviewed published studies on detection zones, performed a power analysis to estimate the sample sizes required for accurate and precise estimates and used mixed‐effects models to test whether detection zones can be predicted from biological and technical traits. Our results show that c. 50 detection events are needed to achieve error rates below 10%. The mixed‐effects models explained 81% and 85% of the variation in effective detection distance and angle, respectively. Key predictors of detection distance included body mass, right‐truncation distance and camera brand, while angle was predicted by camera brand and installation height. Importantly, we demonstrate that combining model‐based predictions with limited empirical data (fewer than 25 detections) can reduce estimation error to below 15% for rare species. This study highlights that detection zones can be predicted not only within, but also across, studies using shared traits and that the right‐truncation distance is a useful metric to account for habitat‐specific visibility. These findings enhance the utility of detection zones in ecological studies and support better study design, especially for rare or understudied species.
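One common way to express a detection zone is as an effective detection distance under a half-normal detection function: the radius of a circle of perfect detection that covers the same area as the real, imperfect zone. The worked example below uses illustrative parameter values; the studies reviewed here may use other detection functions.

```python
# Worked example of an "effective detection distance" (EDD) under a half-normal
# detection function g(r) = exp(-r^2 / (2*sigma^2)) truncated at distance w.
import math

def effective_detection_radius(sigma: float, w: float) -> float:
    """EDD rho such that pi*rho^2 = integral_0^w 2*pi*r*g(r) dr for half-normal g."""
    return math.sqrt(2 * sigma**2 * (1 - math.exp(-w**2 / (2 * sigma**2))))

sigma, w = 4.0, 12.0           # metres; hypothetical shape parameter and truncation
rho = effective_detection_radius(sigma, w)
print(f"Effective detection distance ~{rho:.1f} m within a {w:.0f} m truncation")
```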
Citations: 0