
Remote Sensing in Ecology and Conservation: Latest Publications

Consistent and scalable monitoring of birds and habitats along a coffee production intensity gradient
IF 5.5 | CAS Tier 2 (Environmental Science & Ecology) | Q1 ECOLOGY | Pub Date: 2025-06-21 | DOI: 10.1002/rse2.70015
Marius Somveille, Joe Grainger‐Hull, Nicole Ferguson, Sarab S. Sethi, Fernando González‐García, Valentine Chassagnon, Cansu Oktem, Mathias Disney, Gustavo López Bautista, John Vandermeer, Ivette Perfecto
Land use change associated with agricultural intensification is a leading driver of biodiversity loss in the tropics. To evaluate the habitat–biodiversity relationship in production systems of tropical agricultural commodities, birds are commonly used as indicators. However, a consistent, reliable and scalable methodological approach for monitoring tropical avian communities and habitat quality is largely lacking. In this study, we examined whether the automated analysis of audio data collected by passive acoustic monitoring, together with the analysis of remote sensing data, can be used to efficiently monitor avian biodiversity along the gradient of habitat degradation associated with the intensification of coffee production. Coffee is an important crop produced in tropical forested regions, whose production is expanding and intensifying, and coffee production systems form a gradient of ecological complexity ranging from forest‐like shaded polyculture to dense sun‐exposed monoculture. We used LiDAR technology to survey the habitat, together with autonomous recording units and a vocalization classifier to assess bird community composition in a coffee landscape comprising a shade‐grown coffee farm, a sun coffee farm and a forest remnant, located in southern Mexico. We found that LiDAR can capture relevant variation in vegetation across the habitat gradient in coffee systems, specifically matching the generally observed pattern that the intensification of coffee production is associated with a decrease in vegetation density and complexity. We also found that bioacoustics can capture known functional signatures of avian communities across this habitat degradation gradient. Thus, we show that these technologies can be used in a robust way to monitor how biodiversity responds to land use intensification in the tropics. A major advantage of this approach is that it has the potential to be deployed cost‐effectively at large scales to help design and certify biodiversity‐friendly productive landscapes.
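The habitat side of such a workflow rests on structural metrics derived from LiDAR point clouds. The sketch below (a minimal illustration, not the authors' actual pipeline) computes height percentiles and foliage height diversity, two common proxies for the vegetation density and complexity discussed above, from a height-normalized point cloud; `laspy` and `numpy` are real Python libraries, while the file name and 1-m bin width are hypothetical.

```python
# Minimal sketch: canopy structure metrics from a height-normalized
# LiDAR point cloud. File name and bin width are illustrative.
import numpy as np
import laspy

las = laspy.read("coffee_plot.las")            # hypothetical plot tile
z = np.asarray(las.z)
z = z[z >= 0]                                  # drop below-ground noise

# Height percentiles summarize vegetation density along the gradient
p25, p50, p95 = np.percentile(z, [25, 50, 95])

# Foliage height diversity (FHD): Shannon entropy over 1-m height bins,
# a standard proxy for vertical structural complexity
counts, _ = np.histogram(z, bins=np.arange(0, z.max() + 1, 1.0))
p = counts[counts > 0] / counts.sum()
fhd = -np.sum(p * np.log(p))

print(f"p25/p50/p95 = {p25:.1f}/{p50:.1f}/{p95:.1f} m, FHD = {fhd:.2f}")
```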
Citations: 0
Eigenfeature‐enhanced deep learning: advancing tree species classification in mixed conifer forests with lidar
IF 5.5 | CAS Tier 2 (Environmental Science & Ecology) | Q1 ECOLOGY | Pub Date: 2025-06-09 | DOI: 10.1002/rse2.70014
Ryan C. Blackburn, Robert Buscaglia, Andrew J. Sánchez Meador, Margaret M. Moore, Temuulen Sankey, Steven E. Sesnie
Accurately classifying tree species using remotely sensed data remains a significant challenge, yet it is essential for forest monitoring and understanding ecosystem dynamics over large spatial extents. While light detection and ranging (lidar) has shown promise for species classification, its accuracy typically decreases in complex forests or with lower lidar point densities. Recent advancements in lidar processing and machine learning offer new opportunities to leverage previously unavailable structural information. In this study, we present an automated machine learning pipeline that reduces practitioner burden by utilizing canonical deep learning and improved input layers through the derivation of eigenfeatures. These eigenfeatures were used as inputs for a 2D convolutional neural network (CNN) to classify seven tree species in the Mogollon Rim Ranger District of the Coconino National Forest, AZ, US. We compared eigenfeature images derived from unoccupied aerial vehicle laser scanning (UAV‐LS) and airborne laser scanning (ALS) individual tree segmentation algorithms against raw intensity and colorless control images. Remarkably, mean overall accuracies for classifying seven species reached 94.8% for ALS and 93.4% for UAV‐LS. White image types underperformed for both ALS and UAV‐LS compared to eigenfeature images, while ALS and UAV‐LS image types showed marginal differences in model performance. These results demonstrate that lower point density ALS data can achieve high classification accuracy when paired with eigenfeatures in an automated pipeline. This study advances the field by addressing species classification at scales ranging from individual trees to landscapes, offering a scalable and efficient approach for understanding tree composition in complex forests.
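The eigenfeatures in question are typically derived from the eigenvalues of each point's local neighborhood covariance matrix. The sketch below illustrates the general technique (not the paper's exact feature set or rasterization), computing the standard linearity, planarity and sphericity descriptors with `numpy` and `scipy`; the neighborhood size k and the random point cloud are placeholders.

```python
# Minimal sketch: per-point eigenfeatures (linearity, planarity,
# sphericity) from local neighborhood covariance eigenvalues.
import numpy as np
from scipy.spatial import cKDTree

def eigenfeatures(points, k=20):
    """points: (N, 3) xyz array; returns (N, 3) feature array."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)           # k nearest neighbors per point
    feats = np.empty((len(points), 3))
    for i, nb in enumerate(idx):
        cov = np.cov(points[nb].T)             # 3x3 covariance of neighborhood
        lam = np.sort(np.linalg.eigvalsh(cov))[::-1]   # λ1 >= λ2 >= λ3
        lam = np.maximum(lam, 1e-12)           # guard against degenerate cases
        feats[i] = ((lam[0] - lam[1]) / lam[0],        # linearity
                    (lam[1] - lam[2]) / lam[0],        # planarity
                    lam[2] / lam[0])                   # sphericity
    return feats

pts = np.random.default_rng(0).random((1000, 3)) * 10  # stand-in for a crown
print(eigenfeatures(pts).mean(axis=0))
```

In a pipeline like the one described, such per-point features would then be rasterized into image channels before being passed to the 2D CNN.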
Citations: 0
Hyperspectral imagery, LiDAR point clouds, and environmental DNA to assess land‐water linkage of biodiversity across aquatic functional feeding groups
IF 5.5 | CAS Tier 2 (Environmental Science & Ecology) | Q1 ECOLOGY | Pub Date: 2025-06-02 | DOI: 10.1002/rse2.70010
Heng Zhang, Carmen Meiller, Andreas Hueni, Rosetta C. Blackman, Felix Morsdorf, Isabelle S. Helfenstein, Michael E. Schaepman, Florian Altermatt
Different organismal functional feeding groups (FFGs) are key components of aquatic food webs and are important for sustaining ecosystem functioning in riverine ecosystems. Their distribution and diversity are tightly associated with the surrounding terrestrial landscape through land‐water linkages. Nevertheless, knowledge about the spatial extent and magnitude of these cross‐ecosystem linkages within major FFGs still remains unclear. Here, we conducted an airborne imaging spectroscopy campaign and a systematic environmental DNA (eDNA) field sampling of river water in a 740‐km2 mountainous catchment, combined with light detection and ranging (LiDAR) point clouds, to obtain the spectral and morphological diversity of the terrestrial landscape and the diversity of major FFGs in rivers. We identified the scale of these linkages, ranging from a few hundred meters to more than 10 km, with collectors and filterers, shredders, and small invertebrate predators having local‐scale associations, while invertebrate‐eating fish, grazers, and scrapers have more landscape‐scale associations. Among all major FFGs, shredders, grazers, and scrapers in the streams had the strongest association with surrounding terrestrial vegetation. Our research reveals the reference spatial scales at which major FFGs are linked to the surrounding terrestrial landscape, providing spatially explicit evidence of the cross‐ecosystem linkages needed for conservation design and management.
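The "scale of linkage" analysis amounts to computing a terrestrial diversity metric within buffers of increasing radius around each river sampling point and asking which radius best explains the aquatic response. A minimal, synthetic sketch of that buffering step (not the study's actual metrics or statistical models) is shown below.

```python
# Minimal sketch: a landscape metric (here, mean per-band spectral
# variance) in circular buffers of growing radius around a sample site.
import numpy as np

rng = np.random.default_rng(0)
cube = rng.random((200, 200, 50))     # stand-in (rows, cols, bands) cube
site = (100, 100)                     # river sampling point (row, col)

def spectral_variance(cube, site, radius_px):
    r0, c0 = site
    rows, cols = np.ogrid[:cube.shape[0], :cube.shape[1]]
    mask = (rows - r0) ** 2 + (cols - c0) ** 2 <= radius_px ** 2
    return cube[mask].var(axis=0).mean()   # (n_pixels, bands) -> scalar

for radius_px in [5, 20, 50, 100]:    # pixels standing in for 100 m to 10 km
    print(radius_px, round(spectral_variance(cube, site, radius_px), 4))
```

In an analysis of this shape, the radius whose metric best explains eDNA-derived FFG diversity would define that group's reference spatial scale.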
Citations: 0
Hyperspectral imaging has a limited ability to remotely sense the onset of beech bark disease
IF 5.5 | CAS Tier 2 (Environmental Science & Ecology) | Q1 ECOLOGY | Pub Date: 2025-05-30 | DOI: 10.1002/rse2.70013
Guillaume Tougas, Christine I. B. Wallis, Etienne Laliberté, Mark Vellend
Insect and pathogen outbreaks have a major impact on northern forest ecosystems. Even for pathogens that have been present in a region for decades, such as beech bark disease (BBD), new waves of tree mortality are expected. Hence, there is a need for innovative approaches to monitor disease advancement in real time. Here, we test whether airborne hyperspectral imaging – involving data from 344 wavelengths in the visible, near infrared (NIR) and short‐wave infrared (SWIR) – can be used to assess beech bark disease severity in southern Quebec, Canada. Field data on disease severity were linked to airborne hyperspectral data for individual beech crowns. Partial least‐squares regression (PLSR) models using airborne imaging spectroscopy data predicted a small proportion of the variance in beech bark disease severity: the best model had an R2 of only 0.09. Wavelengths with the strongest contributions were from the red‐edge region (~715 nm) and the SWIR (~1287 nm), which may suggest mediation by canopy greenness, water content, and canopy architecture. Similar models using hyperspectral data taken directly on individual leaves had no explanatory power (R2 = 0). In addition, airborne and leaf‐level hyperspectral datasets were uncorrelated. The failure of leaf‐level models suggests that canopy structure was likely responsible for the limited predictive ability of the airborne model. Somewhat better performance in predicting disease severity was found using common band ratios for canopy greenness assessment (e.g., the Green Normalized Difference Vegetation Index, gNDVI, and the Normalized Phaeophytinization Index, NPQI); these variables explained up to 19% of the variation in disease severity. Overall, we argue that the complexity of hyperspectral data is not necessary for assessing BBD spread and that spectral data in general may not provide an efficient means of improving BBD monitoring on a larger scale.
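For readers wanting to reproduce the two modelling routes compared here, the sketch below pairs a cross-validated PLSR model on full spectra with the two band ratios named above; gNDVI is (NIR − Green)/(NIR + Green) and NPQI is (R415 − R435)/(R415 + R435). The spectra and severity scores are random stand-ins and the wavelength grid is an assumption, so the cross-validated R² will sit near or below zero on pure noise.

```python
# Minimal sketch: PLSR on full spectra versus simple band ratios.
# Spectra and severity scores are synthetic stand-ins.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((120, 344))            # 120 crowns x 344 wavelengths
y = rng.random(120)                   # disease severity scores

pls = PLSRegression(n_components=10)
print("PLSR CV R2:", cross_val_score(pls, X, y, cv=5, scoring="r2").mean())

wl = np.linspace(400, 2450, 344)      # assumed wavelength grid (nm)

def band(spectra, target_nm):
    """Column closest to a target wavelength."""
    return spectra[:, np.argmin(np.abs(wl - target_nm))]

gndvi = (band(X, 800) - band(X, 550)) / (band(X, 800) + band(X, 550))
npqi = (band(X, 415) - band(X, 435)) / (band(X, 415) + band(X, 435))
print("gNDVI mean:", gndvi.mean(), "NPQI mean:", npqi.mean())
```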
Citations: 0
Increasing citizen scientist accuracy with artificial intelligence on UK camera‐trap data
IF 5.5 | CAS Tier 2 (Environmental Science & Ecology) | Q1 ECOLOGY | Pub Date: 2025-05-19 | DOI: 10.1002/rse2.70012
C. R. Sharpe, R. A. Hill, H. M. Chappell, S. E. Green, K. Holden, P. Fergus, C. Chalmers, P. A. Stephens
As camera traps have become more widely used, extracting information from images at the pace they are acquired has become challenging, resulting in backlogs that delay the communication of results and the use of data for conservation and management. To ameliorate this, artificial intelligence (AI), crowdsourcing to citizen scientists and combined approaches have surfaced as solutions. Using data from the UK mammal monitoring initiative MammalWeb, we assess the accuracies of classifications from registered citizen scientists, anonymous participants and a convolutional neural network (CNN). The engagement of anonymous volunteers was facilitated by the strategic placement of MammalWeb interfaces in a natural history museum with high footfall related to the ‘Dippy on Tour’ exhibition. The accuracy of anonymous volunteer classifications gathered through public interfaces has not been reported previously, and here we consider this form of citizen science in the context of alternative forms of data acquisition. While AI models have performed well at species identification in bespoke settings, here we report model performance on a dataset for which the model in question was not explicitly trained. We also consider combining AI output with that of human volunteers to demonstrate combined workflows that produce high accuracy predictions. We find the consensus of registered users has greater overall accuracy (97%) than the consensus from anonymous contributors (71%); AI accuracy lies in between (78%). A combined approach between registered citizen scientists and AI output provides an overall accuracy of 96%. Further, when the contributions of anonymous citizen scientists are concordant with AI output, 98% accuracy can be achieved. The generality of this last finding merits further investigation, given the potential to gather classifications much more rapidly if public displays are placed in areas of high footfall. We suggest that combined approaches to image classification are optimal when the minimisation of classification errors is desired.
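The combined workflow reduces, in essence, to a concordance rule: accept an image's label when the volunteer consensus and the AI agree, and escalate the rest. The sketch below shows one minimal way to express that rule; the vote lists, species names and review queue are illustrative, not MammalWeb's actual data model.

```python
# Minimal sketch: accept volunteer/AI concordant labels, escalate the rest.
from collections import Counter

def consensus(votes):
    """Majority label among volunteer classifications."""
    return Counter(votes).most_common(1)[0][0]

images = {   # illustrative records, not MammalWeb's schema
    "img_001": {"votes": ["roe_deer", "roe_deer", "fox"], "ai": "roe_deer"},
    "img_002": {"votes": ["badger", "fox"], "ai": "fox"},
}

accepted, review_queue = {}, []
for name, rec in images.items():
    label = consensus(rec["votes"])
    if label == rec["ai"]:
        accepted[name] = label        # concordant: ~98% accurate in the study
    else:
        review_queue.append(name)     # discordant: route to expert review

print(accepted, review_queue)
```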
Citations: 0
Night lights from space: potential of SDGSAT‐1 for ecological applications
IF 5.5 | CAS Tier 2 (Environmental Science & Ecology) | Q1 ECOLOGY | Pub Date: 2025-05-16 | DOI: 10.1002/rse2.70011
Dominique Weber, Janine Bolliger, Klaus Ecker, Claude Fischer, Christian Ginzler, Martin M. Gossner, Laurent Huber, Martin K. Obrist, Florian Zellweger, Noam Levin
Light pollution affects biodiversity at all levels, from genes to ecosystems, and improved monitoring and research is needed to better assess its various ecological impacts. Here, we review the current contribution of night‐time satellites to ecological applications and elaborate on the potential value of the Glimmer sensor onboard the Chinese Sustainable Development Goals Science Satellite 1 (SDGSAT‐1), a novel medium‐resolution and multispectral sensor, for quantifying artificial light at night (ALAN). Due to their coarse spatial, spectral or temporal resolution, most of the currently used space‐borne sensors are limited in their contribution to assessments of light pollution at multiple scales and of the ecological and conservation‐relevant effects of ALAN. SDGSAT‐1 now offers new opportunities to map the variability in light intensity and spectra at finer spatial resolution, providing the means to disentangle and characterize different sources of ALAN, and to relate ALAN to local environmental parameters, in situ measurements and surveys. Monitoring direct light emissions at 10–40 m spatial resolution enables scientists to better understand the origins and impacts of light pollution on sensitive species and ecosystems, and assists practitioners in implementing local conservation measures. We demonstrate some key ecological applications of SDGSAT‐1, such as quantifying the exposure of protected areas to light pollution, assessing wildlife corridors and dark refuges in urban areas, and modelling the visibility of light sources to animals. We conclude that SDGSAT‐1, and possibly similar future satellite missions, will significantly advance ecological light pollution research to better understand the environmental impacts of light pollution and to devise strategies to mitigate them.
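One of the applications listed, quantifying protected-area exposure to light pollution, is essentially a zonal-statistics operation over a calibrated radiance mosaic. A hedged sketch using the real `rasterstats` package is below; the file names, nodata value and chosen statistics are hypothetical.

```python
# Minimal sketch: per-polygon night-light statistics over a radiance raster.
from rasterstats import zonal_stats

stats = zonal_stats(
    "protected_areas.shp",        # hypothetical protected-area polygons
    "sdgsat1_radiance.tif",       # hypothetical georeferenced mosaic
    stats=["mean", "max", "percentile_90"],
    nodata=-9999,
)

for area in stats[:5]:
    print(area)                   # {'mean': ..., 'max': ..., 'percentile_90': ...}
```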
Citations: 0
A scalable transfer learning workflow for extracting biological and behavioural insights from forest elephant vocalizations
IF 5.5 | CAS Tier 2 (Environmental Science & Ecology) | Q1 ECOLOGY | Pub Date: 2025-04-25 | DOI: 10.1002/rse2.70008
Alastair Pickering, Santiago Martinez Balvanera, Kate E. Jones, Daniela Hedwig
Animal vocalizations encode rich biological information—such as age, sex, behavioural context and emotional state—making bioacoustic analysis a promising non‐invasive method for assessing welfare and population demography. However, traditional bioacoustic approaches, which rely on manually defined acoustic features, are time‐consuming, require specialized expertise and may introduce subjective bias. These constraints reduce the feasibility of analysing increasingly large datasets generated by passive acoustic monitoring (PAM). Transfer learning with Convolutional Neural Networks (CNNs) offers a scalable alternative by enabling automatic acoustic feature extraction without predefined criteria. Here, we applied four pre‐trained CNNs—two general purpose models (VGGish and YAMNet) and two avian bioacoustic models (Perch and BirdNET)—to African forest elephant (Loxodonta cyclotis) recordings. We used a dimensionality reduction algorithm (UMAP) to represent the extracted acoustic features in two dimensions and evaluated these representations across three key tasks: (1) call‐type classification (rumble, roar and trumpet), (2) rumble sub‐type identification and (3) behavioural and demographic analysis. A Random Forest classifier trained on these features achieved near‐perfect accuracy for rumbles, with Perch attaining the highest average accuracy (0.85) across all call types. Clustering the reduced features identified biologically meaningful rumble sub‐types—such as adult female calls linked to logistics—and provided clearer groupings than manual classification. Statistical analyses showed that factors including age and behavioural context significantly influenced call variation (P < 0.001), with additional comparisons revealing clear differences among contexts (e.g. nursing, competition, separation), sexes and multiple age classes. Perch and BirdNET consistently outperformed general purpose models when dealing with complex or ambiguous calls. These findings demonstrate that transfer learning enables scalable, reproducible bioacoustic workflows capable of detecting biologically meaningful acoustic variation. Integrating this approach into PAM pipelines can enhance the non‐invasive assessment of population dynamics, behaviour and welfare in acoustically active species.
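The workflow's backbone (pretrained embeddings, reduced with UMAP, classified with a Random Forest) can be sketched in a few lines. Random vectors below stand in for the per-call CNN embeddings; `umap-learn` and scikit-learn are real packages, and the three classes mirror the rumble/roar/trumpet task.

```python
# Minimal sketch: embeddings -> UMAP (2D) -> Random Forest call-type model.
import numpy as np
import umap                                    # pip install umap-learn
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
embeddings = rng.random((300, 1280))           # stand-in for CNN embeddings
labels = rng.integers(0, 3, 300)               # rumble / roar / trumpet

coords = umap.UMAP(n_components=2, random_state=42).fit_transform(embeddings)

X_tr, X_te, y_tr, y_te = train_test_split(coords, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```

On random embeddings the accuracy is near chance; the study reports average accuracies up to 0.85 when the same pipeline shape is fed real Perch features.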
Citations: 0
Advancing the mapping of vegetation structure in savannas using Sentinel‐1 imagery
IF 5.5 | CAS Tier 2 (Environmental Science & Ecology) | Q1 ECOLOGY | Pub Date: 2025-04-22 | DOI: 10.1002/rse2.70006
Vera Thijssen, Marianthi Tangili, Ruth A. Howison, Han Olff
Vegetation structure monitoring is important for the understanding and conservation of savanna ecosystems. Optical satellite imagery can be used to estimate canopy cover, but provides limited information about the structure of savannas, and is restricted to daytime and clear‐sky captures. Active remote sensing can potentially overcome this. We explore the utility of C‐band synthetic aperture radar imagery for mapping both grassland and woody vegetation structure in savannas. We calibrated Sentinel‐1 VH (σ⁰VH) and VV (σ⁰VV) backscatter coefficients and their ratio (σ⁰VH/σ⁰VV) to ground‐based estimates of grass biomass, woody canopy volume (<50 000 m³/ha) and tree basal area (<15 m²/ha) in the Greater Serengeti‐Mara Ecosystem, and simultaneously explored their sensitivity to soil moisture. We show that the backscatter ratio in particular can be used to estimate grass biomass (R² = 0.54, RMSE = 630 kg/ha, %range = 20.6), woody canopy volume (R² = 0.69, RMSE = 4188 m³/ha, %range = 11.8) and tree basal area (R² = 0.44, RMSE = 2.03 m²/ha, %range = 18.6) in the dry season, allowing for the extrapolation to regional scale vegetation structure maps. We also introduce new proxies for soil moisture as an option for extending this approach to the wet season, using the 90‐day preceding bounded running averages of the Climate Hazards Group InfraRed Precipitation with Station (CHIRPS) and the Multi‐satellitE Retrievals for Global Precipitation Measurement (IMERG) datasets. We discuss the potential of Sentinel‐1 imagery for better understanding of the spatio‐temporal dynamics of vegetation structure in savannas.
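A minimal sketch of the calibration step, under the assumption (as reconstructed above) that the VH/VV ratio is regressed against field estimates: convert linear-power backscatter to dB, form the ratio as a dB difference and fit a simple regression. All numbers are synthetic stand-ins, not the study's data.

```python
# Minimal sketch: VH/VV ratio (in dB) regressed against grass biomass.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(7)
vh = rng.uniform(1e-3, 5e-2, 80)      # linear-power backscatter, synthetic
vv = rng.uniform(1e-2, 2e-1, 80)
biomass = rng.uniform(200, 4000, 80)  # field-estimated grass biomass (kg/ha)

ratio_db = 10 * np.log10(vh) - 10 * np.log10(vv)   # ratio becomes a difference

model = LinearRegression().fit(ratio_db.reshape(-1, 1), biomass)
pred = model.predict(ratio_db.reshape(-1, 1))
rmse = mean_squared_error(biomass, pred) ** 0.5
print(f"R2 = {r2_score(biomass, pred):.2f}, RMSE = {rmse:.0f} kg/ha")
```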
Citations: 0
Object detection‐assisted workflow facilitates cryptic snake monitoring
IF 5.5 | CAS Tier 2 (Environmental Science & Ecology) | Q1 ECOLOGY | Pub Date: 2025-04-21 | DOI: 10.1002/rse2.70009
Storm Miller, Michael Kirkland, Kristen M. Hart, Robert A. McCleery
Camera traps are an important tool used to study rare and cryptic animals, including snakes. Time‐lapse photography can be particularly useful for studying snakes that often fail to trigger a camera's infrared motion sensor due to their ectothermic nature. However, the large datasets produced by time‐lapse photography require labor‐intensive classification, limiting their use in large‐scale studies. While many artificial intelligence‐based object detection models are effective at identifying mammals in images, their ability to detect snakes is unproven. Here, we used camera data to evaluate the efficacy of an object detection model to rapidly and accurately detect snakes. We classified images manually to the species level and compared this with a hybrid review workflow where the model removed blank images followed by a manual review. Using a ≥0.05 model confidence threshold, our hybrid review workflow correctly identified 94.5% of blank images, completed image classification 6× faster, and detected large (>66 cm) snakes as well as manual review. Conversely, the hybrid review method often failed to detect all instances of a snake in a string of images and detected fewer small (<66 cm) snakes than manual review. However, most relevant ecological information requires only a single detection in a sequence of images, and study design changes could likely improve the detection of smaller snakes. Our findings suggest that an object detection‐assisted hybrid workflow can greatly reduce time spent manually classifying data‐heavy time‐lapse snake studies and facilitate ecological monitoring for large snakes.
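The blank-filtering step at the heart of the hybrid workflow can be expressed compactly: keep any frame whose top detection clears the low confidence threshold, discard the rest before manual review. The sketch below uses a canned stand-in for the detector; in practice that call would be replaced by a real detection model.

```python
# Minimal sketch: blank filtering at a >= 0.05 confidence threshold.
CONF_THRESHOLD = 0.05

FAKE_DETECTIONS = {                   # stand-in for per-image model output
    "frame_0001.jpg": [("snake", 0.62)],
    "frame_0002.jpg": [],             # blank frame
    "frame_0003.jpg": [("snake", 0.07)],  # faint, but above threshold
}

def run_detector(image_path):
    """Stand-in for a real object detection model call."""
    return FAKE_DETECTIONS.get(image_path, [])

def split_blanks(image_paths):
    keep, blanks = [], []
    for path in image_paths:
        if any(conf >= CONF_THRESHOLD for _, conf in run_detector(path)):
            keep.append(path)         # forwarded to manual species review
        else:
            blanks.append(path)       # removed as blank
    return keep, blanks

print(split_blanks(list(FAKE_DETECTIONS)))
```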
Citations: 0
Towards edge processing of images from insect camera traps
IF 5.5 | CAS Tier 2 (Environmental Science & Ecology) | Q1 ECOLOGY | Pub Date: 2025-04-17 | DOI: 10.1002/rse2.70007
Kim Bjerge, Henrik Karstoft, Toke T. Høye
Insects represent nearly half of all known multicellular species, but knowledge about them lags behind that of most vertebrate species. In part for this reason, they are often neglected in biodiversity conservation policies and practice. Computer vision tools for automated monitoring, such as insect camera traps, have the potential to revolutionize insect study and conservation. To further advance insect camera trapping and the analysis of its image data, effective image processing pipelines are needed. In this paper, we present a flexible and fast processing pipeline designed to analyse these recordings by detecting, tracking and classifying nocturnal insects in a broad taxonomy of 15 insect classes and at the resolution of individual moth species. A classifier with anomaly detection is proposed to filter dark, blurred or partially visible insects that would be uncertain to classify correctly. A simple track‐by‐detection algorithm is proposed to track classified insects by incorporating feature embeddings, distance and area cost. We evaluated the computational speed and power performance of different edge computing devices (Raspberry Pi and NVIDIA Jetson Nano) and compared various time‐lapse (TL) strategies with tracking. The smallest difference in detections relative to tracking at 0.5 frames per second was found for 2‐min TL intervals; however, for insects with fewer than one detection per night, the Pearson correlation decreases. Shifting from tracking to TL monitoring would reduce the number of recorded images and would allow for edge processing of images in real time on a Raspberry Pi camera trap. The Jetson Nano is the most energy‐efficient solution, capable of real‐time tracking at nearly 0.5 fps. Our processing pipeline was applied to more than 5.7 million images recorded at 0.5 frames per second by 12 light camera traps located in diverse habitats, including bogs, heaths and forests, during two full seasons. Our results thus show the scalability of insect camera traps.
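The track-by-detection step can be sketched as an assignment problem: build a cost matrix from embedding distance, centre movement and relative area change between consecutive frames, then match detections with the Hungarian algorithm. The weights and synthetic detections below are illustrative, not the pipeline's tuned values.

```python
# Minimal sketch: frame-to-frame matching with a combined cost matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_cost(prev, curr, w_feat=1.0, w_dist=0.01, w_area=0.5):
    cost = np.zeros((len(prev), len(curr)))
    for i, p in enumerate(prev):
        for j, c in enumerate(curr):
            feat = np.linalg.norm(p["emb"] - c["emb"])   # embedding distance
            dist = np.linalg.norm(p["xy"] - c["xy"])     # centre movement
            area = abs(p["area"] - c["area"]) / max(p["area"], c["area"])
            cost[i, j] = w_feat * feat + w_dist * dist + w_area * area
    return cost

rng = np.random.default_rng(3)

def make_detection():
    return {"emb": rng.random(8), "xy": rng.random(2) * 100, "area": 50.0}

frame_t = [make_detection() for _ in range(3)]
frame_t1 = [make_detection() for _ in range(3)]

rows, cols = linear_sum_assignment(match_cost(frame_t, frame_t1))
print(list(zip(rows, cols)))   # detection i in frame t matched to j in t+1
```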
Citations: 0