
Latest publications in Remote Sensing in Ecology and Conservation

Night lights from space: potential of SDGSAT‐1 for ecological applications
IF 5.5, CAS Tier 2 (Environmental Science & Ecology), Q1 ECOLOGY. Pub Date: 2025-05-16. DOI: 10.1002/rse2.70011
Dominique Weber, Janine Bolliger, Klaus Ecker, Claude Fischer, Christian Ginzler, Martin M. Gossner, Laurent Huber, Martin K. Obrist, Florian Zellweger, Noam Levin
Light pollution affects biodiversity at all levels, from genes to ecosystems, and improved monitoring and research are needed to better assess its various ecological impacts. Here, we review the current contribution of night‐time satellites to ecological applications and elaborate on the potential value of the Glimmer sensor onboard the Chinese Sustainable Development Goals Science Satellite 1 (SDGSAT‐1), a novel medium‐resolution and multispectral sensor, for quantifying artificial light at night (ALAN). Due to their coarse spatial, spectral or temporal resolution, most of the currently used space‐borne sensors are limited in their contribution to assessments of light pollution at multiple scales and of the ecological and conservation‐relevant effects of ALAN. SDGSAT‐1 now offers new opportunities to map the variability in light intensity and spectra at finer spatial resolution, providing the means to disentangle and characterize different sources of ALAN, and to relate ALAN to local environmental parameters, in situ measurements and surveys. Monitoring direct light emissions at 10–40 m spatial resolution enables scientists to better understand the origins and impacts of light pollution on sensitive species and ecosystems, and assists practitioners in implementing local conservation measures. We demonstrate some key ecological applications of SDGSAT‐1, such as quantifying the exposure of protected areas to light pollution, assessing wildlife corridors and dark refuges in urban areas, and modelling the visibility of light sources to animals. We conclude that SDGSAT‐1, and possibly similar future satellite missions, will significantly advance ecological light pollution research to better understand the environmental impacts of light pollution and to devise strategies to mitigate them.
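One application named above, quantifying the exposure of protected areas to light pollution, reduces to a zonal statistic over a night-time radiance raster. A minimal sketch with made-up values (not data from the paper; the 0.15 "dark sky" threshold is an arbitrary illustration):

```python
import numpy as np

# Hypothetical 5x5 night-time radiance grid (SDGSAT-1 Glimmer-style values,
# arbitrary units) and a boolean mask marking a protected area.
radiance = np.array([
    [0.1, 0.2, 0.1, 3.0, 5.0],
    [0.1, 0.1, 0.2, 2.5, 4.0],
    [0.0, 0.1, 0.1, 1.0, 2.0],
    [0.0, 0.0, 0.1, 0.5, 1.0],
    [0.0, 0.0, 0.0, 0.2, 0.5],
])
protected = np.zeros_like(radiance, dtype=bool)
protected[:, :3] = True  # the western block is the protected area

def light_exposure(rad: np.ndarray, mask: np.ndarray) -> dict:
    """Summarise ALAN exposure inside a zone: mean radiance and the
    fraction of pixels above a 'dark sky' threshold."""
    zone = rad[mask]
    return {
        "mean_radiance": float(zone.mean()),
        "lit_fraction": float((zone > 0.15).mean()),
    }

stats = light_exposure(radiance, protected)
```

In a real workflow the mask would come from rasterizing protected-area polygons onto the sensor grid; the summary statistic is the same.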
Citations: 0
A scalable transfer learning workflow for extracting biological and behavioural insights from forest elephant vocalizations
IF 5.5, CAS Tier 2 (Environmental Science & Ecology), Q1 ECOLOGY. Pub Date: 2025-04-25. DOI: 10.1002/rse2.70008
Alastair Pickering, Santiago Martinez Balvanera, Kate E. Jones, Daniela Hedwig
Animal vocalizations encode rich biological information—such as age, sex, behavioural context and emotional state—making bioacoustic analysis a promising non‐invasive method for assessing welfare and population demography. However, traditional bioacoustic approaches, which rely on manually defined acoustic features, are time‐consuming, require specialized expertise and may introduce subjective bias. These constraints reduce the feasibility of analysing increasingly large datasets generated by passive acoustic monitoring (PAM). Transfer learning with Convolutional Neural Networks (CNNs) offers a scalable alternative by enabling automatic acoustic feature extraction without predefined criteria. Here, we applied four pre‐trained CNNs—two general purpose models (VGGish and YAMNet) and two avian bioacoustic models (Perch and BirdNET)—to African forest elephant (Loxodonta cyclotis) recordings. We used a dimensionality reduction algorithm (UMAP) to represent the extracted acoustic features in two dimensions and evaluated these representations across three key tasks: (1) call‐type classification (rumble, roar and trumpet), (2) rumble sub‐type identification and (3) behavioural and demographic analysis. A Random Forest classifier trained on these features achieved near‐perfect accuracy for rumbles, with Perch attaining the highest average accuracy (0.85) across all call types. Clustering the reduced features identified biologically meaningful rumble sub‐types—such as adult female calls linked to logistics—and provided clearer groupings than manual classification. Statistical analyses showed that factors including age and behavioural context significantly influenced call variation (P < 0.001), with additional comparisons revealing clear differences among contexts (e.g. nursing, competition, separation), sexes and multiple age classes. Perch and BirdNET consistently outperformed general purpose models when dealing with complex or ambiguous calls. 
These findings demonstrate that transfer learning enables scalable, reproducible bioacoustic workflows capable of detecting biologically meaningful acoustic variation. Integrating this approach into PAM pipelines can enhance the non‐invasive assessment of population dynamics, behaviour and welfare in acoustically active species.
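The pipeline described above (pre-trained embeddings, 2-D reduction, then a classifier) can be sketched end-to-end on synthetic data. Here PCA stands in for UMAP and a nearest-centroid rule stands in for the Random Forest, purely to keep the sketch dependency-light; the cluster structure is simulated, not elephant recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for CNN feature embeddings: 3 call types ('rumble', 'roar',
# 'trumpet') as well-separated Gaussian clusters in a 128-d feature space.
# Real embeddings would come from a pre-trained model such as VGGish or Perch.
centres = rng.normal(size=(3, 128)) * 5
X = np.vstack([c + rng.normal(size=(50, 128)) for c in centres])
y = np.repeat([0, 1, 2], 50)

# 2-D representation via PCA (a dependency-light stand-in for UMAP).
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
X2 = Xc @ vt[:2].T

# Nearest-centroid classification in the reduced space (stand-in for the
# Random Forest classifier used in the study).
cents2 = np.array([X2[y == k].mean(axis=0) for k in range(3)])
pred = np.argmin(((X2[:, None, :] - cents2[None]) ** 2).sum(-1), axis=1)
accuracy = float((pred == y).mean())
```

With well-separated clusters the reduced space preserves class structure and the simple classifier scores highly, which is the property the workflow relies on.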
Citations: 0
Advancing the mapping of vegetation structure in savannas using Sentinel‐1 imagery
IF 5.5, CAS Tier 2 (Environmental Science & Ecology), Q1 ECOLOGY. Pub Date: 2025-04-22. DOI: 10.1002/rse2.70006
Vera Thijssen, Marianthi Tangili, Ruth A. Howison, Han Olff
Vegetation structure monitoring is important for the understanding and conservation of savanna ecosystems. Optical satellite imagery can be used to estimate canopy cover, but provides limited information about the structure of savannas, and is restricted to daytime and clear‐sky captures. Active remote sensing can potentially overcome this. We explore the utility of C‐band synthetic aperture radar imagery for mapping both grassland and woody vegetation structure in savannas. We calibrated Sentinel‐1 VH and VV backscatter coefficients and their ratio to ground‐based estimates of grass biomass, woody canopy volume (<50 000 m³/ha) and tree basal area (<15 m²/ha) in the Greater Serengeti‐Mara Ecosystem, and simultaneously explored their sensitivity to soil moisture. We show that this ratio in particular can be used to estimate grass biomass (R² = 0.54, RMSE = 630 kg/ha, %range = 20.6), woody canopy volume (R² = 0.69, RMSE = 4188 m³/ha, %range = 11.8) and tree basal area (R² = 0.44, RMSE = 2.03 m²/ha, %range = 18.6) in the dry season, allowing for extrapolation to regional‐scale vegetation structure maps. We also introduce new proxies for soil moisture as an option for extending this approach to the wet season, using the preceding 90‐day bounded running averages of the Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS) and the Multi‐satellitE Retrievals for Global Precipitation Measurement (IMERG) datasets. We discuss the potential of Sentinel‐1 imagery for better understanding the spatio‐temporal dynamics of vegetation structure in savannas.
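The calibration step, regressing ground-based biomass against backscatter and reporting R² and RMSE, can be illustrated on synthetic numbers (all values below are invented, not the Serengeti-Mara data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration data: Sentinel-1 backscatter (dB) at 40 plots
# versus ground-based grass biomass (kg/ha). Values are illustrative only.
backscatter = rng.uniform(-12, -4, 40)
biomass = 300 * (backscatter + 14) + rng.normal(0, 400, 40)

# Ordinary least-squares fit, then the goodness-of-fit metrics the
# abstract reports (R^2 and RMSE).
slope, intercept = np.polyfit(backscatter, biomass, 1)
pred = slope * backscatter + intercept
resid = biomass - pred
rmse = float(np.sqrt(np.mean(resid ** 2)))
r2 = float(1 - resid.var() / biomass.var())
```

Extrapolating the fitted model over a backscatter raster is what turns plot-level calibration into the regional-scale structure maps discussed above.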
Citations: 0
Object detection‐assisted workflow facilitates cryptic snake monitoring
IF 5.5, CAS Tier 2 (Environmental Science & Ecology), Q1 ECOLOGY. Pub Date: 2025-04-21. DOI: 10.1002/rse2.70009
Storm Miller, Michael Kirkland, Kristen M. Hart, Robert A. McCleery
Camera traps are an important tool used to study rare and cryptic animals, including snakes. Time‐lapse photography can be particularly useful for studying snakes that often fail to trigger a camera's infrared motion sensor due to their ectothermic nature. However, the large datasets produced by time‐lapse photography require labor‐intensive classification, limiting their use in large‐scale studies. While many artificial intelligence‐based object detection models are effective at identifying mammals in images, their ability to detect snakes is unproven. Here, we used camera data to evaluate the efficacy of an object detection model to rapidly and accurately detect snakes. We classified images manually to the species level and compared this with a hybrid review workflow where the model removed blank images followed by a manual review. Using a ≥0.05 model confidence threshold, our hybrid review workflow correctly identified 94.5% of blank images, completed image classification 6× faster, and detected large (>66 cm) snakes as well as manual review. Conversely, the hybrid review method often failed to detect all instances of a snake in a string of images and detected fewer small (<66 cm) snakes than manual review. However, most relevant ecological information requires only a single detection in a sequence of images, and study design changes could likely improve the detection of smaller snakes. Our findings suggest that an object detection‐assisted hybrid workflow can greatly reduce time spent manually classifying data‐heavy time‐lapse snake studies and facilitate ecological monitoring for large snakes.
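The triage step of the hybrid workflow, discarding images whose best detection confidence falls below the 0.05 threshold before human review, is simple to express. A sketch with hypothetical detector output (image IDs and scores are invented):

```python
# Minimal sketch of the hybrid-review triage step. Each image carries the
# detector's highest confidence score for an animal detection.
CONF_THRESHOLD = 0.05  # threshold reported in the abstract

images = [
    {"id": "img_001", "conf": 0.00},  # blank frame
    {"id": "img_002", "conf": 0.02},  # blank (below threshold)
    {"id": "img_003", "conf": 0.91},  # likely snake
    {"id": "img_004", "conf": 0.44},
    {"id": "img_005", "conf": 0.01},
]

def triage(batch, threshold=CONF_THRESHOLD):
    """Split a time-lapse batch into model-flagged images (forwarded to a
    human reviewer) and discarded blanks."""
    flagged = [im for im in batch if im["conf"] >= threshold]
    blanks = [im for im in batch if im["conf"] < threshold]
    return flagged, blanks

flagged, blanks = triage(images)
```

Because most ecological questions need only one detection per sequence, missed frames within a flagged sequence cost little, which is why the low 0.05 threshold works for triage.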
Citations: 0
Towards edge processing of images from insect camera traps
IF 5.5, CAS Tier 2 (Environmental Science & Ecology), Q1 ECOLOGY. Pub Date: 2025-04-17. DOI: 10.1002/rse2.70007
Kim Bjerge, Henrik Karstoft, Toke T. Høye
Insects represent nearly half of all known multicellular species, but knowledge about them lags behind that of most vertebrate species. In part for this reason, they are often neglected in biodiversity conservation policies and practice. Computer vision tools for automated monitoring, such as insect camera traps, have the potential to revolutionize insect study and conservation. To further advance insect camera trapping and the analysis of their image data, effective image processing pipelines are needed. In this paper, we present a flexible and fast processing pipeline designed to analyse these recordings by detecting, tracking and classifying nocturnal insects across a broad taxonomy of 15 insect classes, with resolution of individual moth species. A classifier with anomaly detection is proposed to filter dark, blurred or partially visible insects that would be uncertain to classify correctly. A simple track‐by‐detection algorithm is proposed to track classified insects by incorporating feature embeddings, distance and area cost. We evaluated the computational speed and power performance of different edge computing devices (Raspberry Pis and the NVIDIA Jetson Nano) and compared various time‐lapse (TL) strategies with tracking. The smallest difference in detections was found for 2‐min TL intervals compared with tracking at 0.5 frames per second; however, for insects with fewer than one detection per night, the Pearson correlation decreases. Shifting from tracking to TL monitoring would reduce the number of recorded images and would allow real‐time edge processing of images on a Raspberry Pi camera trap. The Jetson Nano is the most energy‐efficient solution, capable of real‐time tracking at nearly 0.5 fps. Our processing pipeline was applied to more than 5.7 million images recorded at 0.5 frames per second from 12 light camera traps during two full seasons, located in diverse habitats including bogs, heaths and forests.
Our results thus show the scalability of insect camera traps.
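The track-by-detection idea above, linking detections across frames by a cost that combines feature embeddings, centroid distance and area, can be sketched as follows. The weights and detections are invented for illustration, and real pipelines typically solve the assignment with the Hungarian algorithm rather than greedily:

```python
import numpy as np

def match_cost(det_a, det_b, w_feat=1.0, w_dist=0.01, w_area=0.5):
    """Combined linking cost for two detections in a simple
    track-by-detection scheme: cosine distance between feature embeddings,
    centroid distance (pixels) and relative area difference.
    The weights are illustrative, not the paper's values."""
    ea, eb = det_a["emb"], det_b["emb"]
    feat = 1 - ea @ eb / (np.linalg.norm(ea) * np.linalg.norm(eb))
    dist = np.linalg.norm(np.subtract(det_a["xy"], det_b["xy"]))
    area = abs(det_a["area"] - det_b["area"]) / max(det_a["area"], det_b["area"])
    return w_feat * feat + w_dist * dist + w_area * area

# Two insects detected in frame t and their candidates in frame t+1.
prev = [
    {"emb": np.array([1.0, 0.0]), "xy": (10, 10), "area": 40},
    {"emb": np.array([0.0, 1.0]), "xy": (80, 80), "area": 90},
]
curr = [
    {"emb": np.array([0.1, 1.0]), "xy": (82, 79), "area": 85},
    {"emb": np.array([1.0, 0.1]), "xy": (12, 11), "area": 42},
]

# Greedy nearest-cost assignment: each previous detection links to the
# current detection with the lowest combined cost.
links = [int(np.argmin([match_cost(p, c) for c in curr])) for p in prev]
```

Here the first insect correctly links to the second candidate and vice versa, because both the embedding and the position agree even though the detections are unordered.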
Citations: 0
Application of computer vision for off‐highway vehicle route detection: A case study in Mojave desert tortoise habitat
IF 5.5, CAS Tier 2 (Environmental Science & Ecology), Q1 ECOLOGY. Pub Date: 2025-04-07. DOI: 10.1002/rse2.70004
Alexander J. Robillard, Madeline Standen, Noah Giebink, Mark Spangler, Amy C. Collins, Brian Folt, Andrew Maguire, Elissa M. Olimpi, Brett G. Dickson
Driving off‐highway vehicles (OHVs), which contributes to habitat degradation and fragmentation, is a common recreational activity in the United States and other parts of the world, particularly in desert environments with fragile ecosystems. Although habitat degradation and mortality from the expansion of OHV networks are thought to have major impacts on desert species, comprehensive maps of OHV route networks and their changes are poorly understood. To better understand how OHV route networks have evolved in the Mojave Desert ecoregion, we developed a computer vision approach to estimate OHV route location and density across the range of the Mojave desert tortoise (Gopherus agassizii). We defined OHV routes as non‐paved, linear features, including designated routes and washes in the presence of non‐paved routes. Using contemporary (n = 1499) and historical (n = 1148) aerial images, we trained and validated three convolutional neural network (CNN) models. We cross‐examined each model on sets of independently curated data and selected the highest performing model to generate predictions across the tortoise's range. When evaluated against a ‘hybrid’ test set (n = 1807 images), the final hybrid model achieved an accuracy of 77%. We then applied our model to remotely sensed imagery from across the tortoise's range and generated spatial layers of OHV route density for the 1970s, 1980s, 2010s, and 2020s. We examined OHV route density within tortoise conservation areas (TCA) and recovery units (RU) within the range of the species. Results showed an increase in the OHV route density in both TCAs (8.45%) and RUs (7.85%) from 1980 to 2020. Ordinal logistic regression indicated a strong correlation (OR = 1.01, P < 0.001) between model outputs and ground‐truthed OHV maps from the study region. Our computer vision approach and mapped results can inform conservation strategies and management aimed at mitigating the adverse impacts of OHV activity on sensitive ecosystems.
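The decade-to-decade comparison above rests on route-density layers derived from the binary CNN route predictions. A toy sketch (fractional pixel density on invented rasters; the paper's actual units and moving-window definition may differ):

```python
import numpy as np

# Hypothetical binary route rasters (1 = predicted OHV route pixel)
# for two eras of aerial imagery over the same 6x6 tile.
routes_1980 = np.zeros((6, 6), dtype=int)
routes_1980[2, :4] = 1               # one east-west route
routes_2020 = routes_1980.copy()
routes_2020[:, 4] = 1                # a new north-south route appears

def route_density(raster: np.ndarray) -> float:
    """Route density as the fraction of route pixels in the tile; real
    workflows would report km of route per km^2 in a moving window."""
    return float(raster.mean())

change = route_density(routes_2020) - route_density(routes_1980)
```

Summarising `change` within conservation-area polygons gives the kind of TCA/RU trend statistics the abstract reports.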
Citations: 0
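The hybrid model's reported 77% test-set accuracy boils down to standard confusion-matrix arithmetic over paired reference and predicted labels. A minimal sketch of that kind of evaluation, with invented class names and toy data rather than the study's actual test set:

```python
from collections import Counter

def classification_report(y_true, y_pred, classes):
    """Overall accuracy plus per-class precision/recall from paired labels."""
    counts = Counter(zip(y_true, y_pred))  # (reference, predicted) pairs
    accuracy = sum(counts[(c, c)] for c in classes) / len(y_true)
    per_class = {}
    for c in classes:
        tp = counts[(c, c)]
        fp = sum(counts[(t, c)] for t in classes if t != c)  # predicted c, truly t
        fn = sum(counts[(c, p)] for p in classes if p != c)  # truly c, predicted p
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        per_class[c] = (precision, recall)
    return accuracy, per_class

# Illustrative only: two hypothetical classes, four test tiles
acc, per = classification_report(
    ["route", "route", "none", "none"],
    ["route", "none", "none", "none"],
    ["route", "none"],
)
```

Accuracy alone can hide class imbalance (routes are rare relative to background), which is why per-class precision and recall are worth reporting alongside it.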
Woody cover and geology as regional‐scale determinants of semi‐arid savanna stability
IF 5.5 CAS Tier 2 (Environmental Science & Ecology) Q1 ECOLOGY Pub Date: 2025-03-28 DOI: 10.1002/rse2.70005
Liezl Mari Vermeulen, Koenraad Van Meerbeek, Paulo Negri Bernardino, Jasper Slingsby, Bruno Verbist, Ben Somers
Savannas, defined by a balance of woody and herbaceous vegetation, are vital for global biodiversity and carbon sequestration. Yet, their stability is increasingly at risk due to climate change and human impacts. The responses of these ecosystems to extreme drought events remain poorly understood, especially in relation to the regional variations in soil, terrain, climate history and disturbance legacy. This study analysed time series of a vegetation index, derived from remote sensing data, to quantify ecosystem stability metrics, i.e., resistance and resilience, in response to a major drought event in the semi‐arid savanna of the Kruger National Park, South Africa. Using Bayesian Generalized Linear Models, we assessed the influence of ecosystem traits, past extreme climate events, fire history and herbivory on regional patterns of drought resistance and resilience. Our results show that sandier granite soils dominated by trees have higher drought resistance, supported by the ability of deep‐rooted water access. In contrast, grassier savanna landscapes on basalt soils proved more drought resilient, with rapid vegetation recovery post‐drought. The effects of woody cover on ecosystem drought response are mediated by differences in historical fire regimes, elephant presence and climate legacy, underscoring the complex, context‐dependent nature of savanna landscape response to drought. This research deepens our understanding of savanna stability by clarifying the role of regional drivers, like fire and climate, alongside long‐term factors, like soil composition and woody cover. With droughts projected to increase in frequency and severity in arid and semi‐arid savannas, it also highlights remote sensing as a robust tool for regional‐scale analysis of drought responses, offering a valuable complement to field‐based experiments that can guide effective management and adaptive strategies.
Citations: 0
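The resistance and resilience metrics derived above from a vegetation-index time series can be illustrated with one common formulation: resistance as the drought-minimum index relative to the pre-drought baseline, and resilience as the fraction of the drought-induced loss recovered afterwards. The paper's exact definitions may differ; the index values and window boundaries below are made up:

```python
from statistics import mean

def drought_stability(vi, pre, drought, post):
    """Resistance and resilience of a vegetation-index (VI) time series.

    pre, drought and post are (start, stop) index windows into vi.
    Resistance of 1 means no drought impact; resilience of 1 means the
    post-drought mean returned all the way to the baseline.
    """
    baseline = mean(vi[slice(*pre)])
    minimum = min(vi[slice(*drought)])
    recovery = mean(vi[slice(*post)])
    resistance = minimum / baseline
    resilience = (recovery - minimum) / (baseline - minimum)
    return resistance, resilience

# Hypothetical NDVI-like series: stable, drought dip, partial recovery
vi = [0.60, 0.60, 0.60, 0.40, 0.30, 0.35, 0.45, 0.45]
res, resil = drought_stability(vi, pre=(0, 3), drought=(3, 6), post=(6, 8))
```

Applied per pixel across a remote-sensing time series, the two metrics can then be regressed against soil, fire history and herbivory covariates, as the study does with Bayesian GLMs.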
How to achieve accurate wildlife detection by using vehicle‐mounted mobile monitoring images and deep learning?
IF 5.5 CAS Tier 2 (Environmental Science & Ecology) Q1 ECOLOGY Pub Date: 2025-03-14 DOI: 10.1002/rse2.70003
Leilei Shi, Jixi Gao, Fei Cao, Wenming Shen, Yue Wu, Kai Liu, Zheng Zhang
With the advancement of artificial intelligence (AI) technologies, vehicle‐mounted mobile monitoring systems have become increasingly integrated into wildlife monitoring practices. However, images captured through these systems often present challenges such as low resolution, small target sizes, and partial occlusions. Consequently, detecting animal targets using conventional deep‐learning networks is challenging. To address these challenges, this paper presents an enhanced YOLOv7 model, referred to as YOLOv7(sr‐sm), which incorporates a super‐resolution (SR) reconstruction module and a small object optimization module. The YOLOv7(sr‐sm) model introduces a super‐resolution reconstruction module that leverages generative adversarial networks (GANs) to reconstruct high‐resolution details from blurry animal images. Additionally, an attention mechanism is integrated into the Neck and Head of YOLOv7 to form a small object optimization module, which enhances the model's ability to detect and locate densely packed small targets. Using a vehicle‐mounted mobile monitoring system, images of four wildlife taxa—sheep, birds, deer, and antelope—were captured on the Tibetan Plateau. These images were combined with publicly available high‐resolution wildlife photographs to create a wildlife test dataset. Experiments were conducted on this dataset, comparing the YOLOv7(sr‐sm) model with eight popular object detection models. The results demonstrate significant improvements in precision, recall, and mean Average Precision (mAP), with YOLOv7(sr‐sm) achieving 93.9%, 92.1%, and 92.3%, respectively. Furthermore, compared to the newly released YOLOv8l model, YOLOv7(sr‐sm) outperforms it by 9.3%, 2.1%, and 4.5% in these three metrics while also exhibiting superior parameter efficiency and higher inference speeds.
The YOLOv7(sr‐sm) model architecture can accurately locate and identify blurry animal targets in vehicle‐mounted monitoring images, serving as a reliable tool for animal identification and counting in mobile monitoring systems. These findings provide significant technological support for the application of intelligent monitoring techniques in biodiversity conservation efforts.
Citations: 0
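Detection precision and recall of the kind reported above come from matching predicted boxes to ground-truth boxes at an IoU threshold (mAP additionally averages precision over recall levels and classes, which is omitted here). A minimal, framework-free sketch of that matching, with toy boxes, assuming predictions arrive sorted by descending confidence:

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def precision_recall(preds, gts, thr=0.5):
    """Greedy one-to-one matching of confidence-sorted predictions to GT."""
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, thr
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= best_iou:
                best, best_iou = i, iou(p, g)
        if best is not None:
            matched.add(best)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gts) if gts else 0.0
    return precision, recall

# Toy example: one correct detection, one false positive, one missed animal
gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(1, 1, 10, 10), (50, 50, 60, 60)]
p, r = precision_recall(preds, gts)
```

The one-to-one constraint matters for the densely packed small targets the abstract mentions: without it, several predictions could all claim the same animal and inflate precision.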
Bridging the gap in deep seafloor management: Ultra fine‐scale ecological habitat characterization of large seascapes
IF 5.5 CAS Tier 2 (Environmental Science & Ecology) Q1 ECOLOGY Pub Date: 2025-03-12 DOI: 10.1002/rse2.70002
Ole Johannes Ringnander Sørensen, Itai van Rijn, Shai Einbinder, Hagai Nativ, Aviad Scheinin, Ziv Zemah‐Shamir, Eyal Bigal, Leigh Livne, Anat Tsemel, Or M. Bialik, Gleb Papeer, Dan Tchernov, Yizhaq Makovsky
The United Nations' sustainable development goal to designate 30% of the oceans as marine protected areas by 2030 requires practical management tools, and in turn ecologically meaningful mapping of the seafloor. Particularly challenging is the mesophotic zone, a critical component of the marine system, a biodiversity hotspot, and a potential refuge. Here, we introduce a novel seafloor habitat management workflow, integrating cm‐scale synthetic aperture sonar (SAS) and multibeam bathymetry surveying with efficient ecotope characterization. In merely 6 h, we mapped ~5 km2 of a complex mesophotic reef at sub‐metric resolution. Applying a deep learning classifier on the SAS imagery, we classified four habitats with an accuracy of 84% and defined relevant fine‐scale ecotones. Visual census with precise in situ sampling guided by SAS images for navigation were utilized for ecological characterization of mapped units. Our preliminary fish surveys indicate the ecological importance of highly complex areas and rock/sand ecotones. These less abundant habitats would be largely underrepresented if surveying the area without prior consideration. Thus, our approach is demonstrated to generate scalable habitat maps at resolutions pertinent to relevant biotas, previously inaccessible in the mesophotic, advancing ecological modeling and management of large seascapes.
Citations: 0
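Accuracy assessments of habitat classifications like the 84% reported above are conventionally derived from a confusion matrix, and Cohen's kappa is often reported alongside overall accuracy to discount chance agreement. A small sketch under that convention; the two-class matrix below is invented, not the study's four-habitat result:

```python
def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows = reference habitat, columns = predicted habitat)."""
    n = sum(sum(row) for row in cm)
    observed = sum(cm[i][i] for i in range(len(cm))) / n
    expected = sum(
        (sum(cm[i]) / n) * (sum(row[i] for row in cm) / n)
        for i in range(len(cm))
    )
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

# Invented two-habitat example (e.g. rock vs. sand)
oa, kappa = accuracy_and_kappa([[40, 10],
                                [10, 40]])
```

Kappa is useful here because seafloor habitat classes are rarely balanced: a classifier that labels everything as the dominant substrate can score high overall accuracy while kappa stays near zero.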
Automated extraction of right whale morphometric data from drone aerial photographs
IF 5.5 CAS Tier 2 (Environmental Science & Ecology) Q1 ECOLOGY Pub Date: 2025-03-12 DOI: 10.1002/rse2.70001
Chhandak Bagchi, Josh Medina, Duncan J. Irschick, Subhransu Maji, Fredrik Christiansen
Aerial photogrammetry is a popular non‐invasive tool to measure the size, body morphometrics and body condition of wild animals. While the method can generate large datasets quickly, the lack of efficient processing tools can create bottlenecks that delay management actions. We developed a machine learning algorithm to automatically measure body morphometrics (body length and widths) of southern right whales (Eubalaena australis, SRWs) from aerial photographs (n = 8,958) collected by unmanned aerial vehicles in Australia. Our approach utilizes two Mask R‐CNN detection models to: (i) generate masks for each whale and (ii) estimate points along the whale's axis. We annotated a dataset of 468 images containing 638 whales to train our models. To evaluate the accuracy of our machine learning approach, we compared the model‐generated body morphometrics to manual measurements. The influence of picture quality (whale posture and water clarity) was also assessed. The model‐generated body length estimates were slightly negatively biased (median error of −1.3%), whereas the body volume estimates had a small (median error of 6.5%) positive bias. After correcting both biases, the resulting model‐generated body length and volume estimates had mean absolute errors of 0.85% (SD = 0.75) and 6.88% (SD = 6.57), respectively. The magnitude of the errors decreased as picture quality increased. When using the model‐generated data to quantify intra‐seasonal changes in body condition of SRW females, we obtained a similar slope parameter (−0.001843, SE = 0.000095) as derived from manual measurements (−0.001565, SE = 0.000079). This indicates that our approach was able to accurately capture temporal trends in body condition at a population level.
This indicates that our approach was able to accurately capture temporal trends in body condition at a population level.
Citations: 0
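Body volume estimates of the kind validated above are commonly obtained by integrating elliptical cross-sections along the measured body axis. The frustum formulation and the fixed height-to-width ratio below are standard assumptions in cetacean photogrammetry, not necessarily this paper's exact model:

```python
import math

def body_volume(length, widths, hw_ratio=1.0):
    """Volume from body length and widths measured at equal intervals
    along the body axis, assuming elliptical cross-sections whose dorsal
    height is hw_ratio times the photographed width."""
    dx = length / (len(widths) - 1)  # spacing between width measurements
    volume = 0.0
    for w0, w1 in zip(widths, widths[1:]):
        s0 = math.pi * (w0 / 2) * (w0 * hw_ratio / 2)  # cross-section areas
        s1 = math.pi * (w1 / 2) * (w1 * hw_ratio / 2)
        volume += dx / 3 * (s0 + s1 + math.sqrt(s0 * s1))  # conical frustum
    return volume

# Sanity check: constant width reduces to a cylinder, V = pi * r^2 * L
v = body_volume(length=10.0, widths=[2.0] * 5)
```

The systematic biases the study reports (about -1.3% in length, +6.5% in volume) could then be corrected by rescaling the outputs of a function like this against a manually measured calibration subset.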