Remote Sensing in Ecology and Conservation: Latest Publications

From snapshots to continuous estimates: Augmenting citizen science with computer vision for fish monitoring
IF 5.5 | CAS Tier 2: Environmental Science & Ecology | Q1 ECOLOGY | Pub Date: 2026-02-05 | DOI: 10.1002/rse2.70055
Zhongqi Chen, Timm Haucke, Sara Beery, Keven Bennett, Austin Powell, Lydia Zuehsow, Robert Vincent, Linda Deegan
Monitoring fish movement is essential for understanding population dynamics, informing conservation efforts and supporting fisheries management. Traditional methods, such as visual observations by volunteers, are constrained by time limitations, environmental conditions and labour intensity. Recent advancements in computer vision (CV) and deep learning offer promising solutions for automating fish counting from underwater videos, improving efficiency and data resolution. In this study, we developed and applied a deep learning‐based CV system to monitor river herring (Alosa spp.) migration, covering all essential steps, from field camera deployment and video annotation to model training and in‐season population counting. We assessed the labelling and training efforts required to achieve good model performance and explored the use of importance sampling to correct biases in CV‐based fish counts. Our results demonstrated that CV models trained on a single site and year showed limited generalization to sites or years unseen during training, while models trained on more diverse labelled data generalized better. We also found that the amount of annotation required is related to dataset complexity. When applied for in‐season fish counting, CV efficiently processed season‐long datasets and produced counts consistent with human review, with some moderate differences under migration pulses that can be adjusted by importance sampling. By providing continuous, high‐resolution monitoring throughout the entire migration season, CV counts offer more reliable run size estimates and greater insight into the spawning migration of river herring. This study demonstrates a scalable, cost‐effective and efficient approach with significant potential for addressing complex ecological questions and supporting conservation strategies and resource management.
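The importance-sampling correction described above can be sketched in a few lines. The example below is a minimal illustration rather than the authors' pipeline: the per-clip CV counts, the target of roughly 200 reviewed clips, and the simulated human review are all placeholder assumptions, and the correction is implemented as a Horvitz-Thompson estimator over clips sampled with probability proportional to CV-detected activity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-clip CV counts for a season of short video clips.
cv_counts = rng.poisson(lam=3.0, size=5000).astype(float)

# Importance sampling: review clips with probability proportional to CV activity,
# so busy migration pulses (where CV error tends to be largest) get reviewed more often.
weights = cv_counts + 1.0                      # +1 keeps zero-count clips reviewable
incl_prob = np.clip(200 * weights / weights.sum(), 0, 1)   # ~200 clips expected

sampled = rng.random(cv_counts.size) < incl_prob

# Stand-in for human review of the sampled clips (here: CV count plus small noise).
human_counts = np.maximum(0.0, cv_counts[sampled] + rng.normal(0, 0.5, sampled.sum()))

# Horvitz-Thompson estimate of the season total from the reviewed subset.
ht_total = np.sum(human_counts / incl_prob[sampled])
print(f"raw CV total: {cv_counts.sum():.0f}, importance-sampling estimate: {ht_total:.0f}")
```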
{"title":"From snapshots to continuous estimates: Augmenting citizen science with computer vision for fish monitoring","authors":"Zhongqi Chen, Timm Haucke, Sara Beery, Keven Bennett, Austin Powell, Lydia Zuehsow, Robert Vincent, Linda Deegan","doi":"10.1002/rse2.70055","DOIUrl":"https://doi.org/10.1002/rse2.70055","url":null,"abstract":"Monitoring fish movement is essential for understanding population dynamics, informing conservation efforts and supporting fisheries management. Traditional methods, such as visual observations by volunteers, are constrained by time limitations, environmental conditions and labour intensity. Recent advancements in computer vision (CV) and deep learning offer promising solutions for automating fish counting from underwater videos, improving efficiency and data resolution. In this study, we developed and applied a deep learning‐based CV system to monitor river herring ( <jats:italic>Alosa</jats:italic> spp.) migration, covering all essential steps from field camera deployment, video annotation to model training and in‐season population counting. We assessed the labelling and training efforts required to achieve good model performance and explored the use of importance sampling to correct biases in CV‐based fish counts. Our results demonstrated that CV models trained on a single site and year showed limited generalization to sites or years unseen during training, while models trained on more diverse labelled data generalized better. We also found that the amount of annotations required is related to dataset complexity. When applied for in‐season fish counting, CV efficiently processed season‐long datasets and produced counts consistent with human review, with some moderate differences under migration pulses that can be adjusted by importance sampling. By providing continuous, high‐resolution monitoring throughout the entire migration season, CV counts offer more reliable run size estimates and greater insight into the spawning migration of river herring. This study demonstrates a scalable, cost‐effective and efficient approach with significant potential for addressing complex ecological questions and supporting conservation strategies and resource management.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"51 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2026-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146122092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Incorporating environmental DNA metabarcoding for improved benthic biodiversity and habitat mapping
IF 5.5 | CAS Tier 2: Environmental Science & Ecology | Q1 ECOLOGY | Pub Date: 2026-02-05 | DOI: 10.1002/rse2.70048
Rylan J. Command, Shreya Nemani, Benjamin Misiuk, Mehrdad Hajibabaei, Nicole Fahner, Emily Porter, Greg Singer, Beverly McClenaghan, Katleen Robert
Complex coastal seascapes harbor high marine biodiversity from which humans derive numerous ecosystem services. Maps of benthic habitats are important tools used to inform coastal development and conservation efforts. Seafloor imagery is commonly used to collect information about the distribution of benthic organisms, but these data are often limited to low taxonomic resolutions and may systematically underrepresent local biodiversity. Recent advances in genomics enable rapid and accurate detection of taxa with high taxonomic resolution from environmental DNA (eDNA) extracted from water samples, but there are few examples of broad‐spectrum eDNA biodiversity data in nearshore benthic habitat mapping. We combined an eDNA‐based biodiversity assessment with concurrently collected high‐resolution video ground‐truth data to assess the benefit of metabarcoding data for improving benthic habitat mapping in the sub‐Arctic coastal embayment of Mortier Bay, Newfoundland and Labrador, Canada. Features derived from acoustic bathymetry and backscatter data were used to develop full‐coverage habitat and biodiversity maps using a joint species distribution‐modeling framework. The predicted taxonomic richness spatial patterns were similar between video‐only, eDNA‐only and combined datasets, suggesting diversity patterns were accurately represented by both methods. However, 226 additional taxa (72 species, 109 genera) were identified using eDNA compared to the 46 detected by video ground‐truthing. Averaged over all taxa, the video‐only model performed best in terms of discriminating presences from absences; however, we found that most sessile taxa were better predicted by the combined dataset compared to video data alone. These results highlight the limitations of imagery‐only datasets for biodiversity surveys and demonstrate the utility and limitations of metabarcoding data to improve benthic habitat and diversity maps in complex coastal habitats. This study highlights opportunities to fill gaps that could improve spatial modeling of seafloor assemblages derived from metabarcoding data, including sources and sinks of DNA in the environment and water column properties that control its dispersal.
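The mapping workflow, acoustic seafloor predictors plus presence/absence detections fed into a joint species distribution model, can be approximated with stacked per-taxon classifiers. The sketch below is a simplified stand-in under invented data: the site counts, predictor set, and random forests are assumptions rather than the study's modeling framework; it only shows how per-taxon presence probabilities sum to a predicted richness surface.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Hypothetical ground-truth stations: bathymetry/backscatter-derived predictors
# and a presence/absence matrix of taxa detected by video, eDNA, or both.
n_sites, n_taxa = 120, 30
X = rng.normal(size=(n_sites, 4))                    # e.g. depth, slope, backscatter, rugosity
Y = (rng.random((n_sites, n_taxa)) < 0.3).astype(int)

# Stacked per-taxon classifiers as a simplified stand-in for a joint SDM.
grid = rng.normal(size=(1000, 4))                    # full-coverage prediction grid
richness = np.zeros(len(grid))
for j in range(n_taxa):
    if Y[:, j].min() == Y[:, j].max():               # skip taxa with no variation
        continue
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, Y[:, j])
    richness += clf.predict_proba(grid)[:, 1]        # expected presences per grid cell

print("predicted taxonomic richness, first 5 cells:", np.round(richness[:5], 1))
```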
{"title":"Incorporating environmental DNA metabarcoding for improved benthic biodiversity and habitat mapping","authors":"Rylan J. Command, Shreya Nemani, Benjamin Misiuk, Mehrdad Hajibabaei, Nicole Fahner, Emily Porter, Greg Singer, Beverly McClenaghan, Katleen Robert","doi":"10.1002/rse2.70048","DOIUrl":"https://doi.org/10.1002/rse2.70048","url":null,"abstract":"Complex coastal seascapes harbor high marine biodiversity from which humans derive numerous ecosystem services. Maps of benthic habitats are important tools used to inform coastal development and conservation efforts. Seafloor imagery is commonly used to collect information about the distribution of benthic organisms, but these data are often limited to low taxonomic resolutions and may systematically underrepresent local biodiversity. Recent advances in genomics enable rapid and accurate detection of taxa with high taxonomic resolution from environmental DNA (eDNA) extracted from water samples, but there are few examples of broad‐spectrum eDNA biodiversity data in nearshore benthic habitat mapping. We combined an eDNA‐based biodiversity assessment with concurrently collected high‐resolution video ground‐truth data to assess the benefit of metabarcoding data for improving benthic habitat mapping in the sub‐Arctic coastal embayment of Mortier Bay, Newfoundland and Labrador, Canada. Features derived from acoustic bathymetry and backscatter data were used to develop full‐coverage habitat and biodiversity maps using a joint species distribution‐modeling framework. The predicted taxonomic richness spatial patterns were similar between video‐only, eDNA‐only and combined datasets, suggesting diversity patterns were accurately represented by both methods. However, 226 additional taxa (72 species, 109 genera) were identified using eDNA compared to the 46 detected by video ground‐truthing. Averaged over all taxa, the video‐only model performed best in terms of discriminating presences from absences; however, we found that most sessile taxa were better predicted by the combined dataset compared to video data alone. These results highlight the limitations of imagery‐only datasets for biodiversity surveys and demonstrate the utility and limitations of metabarcoding data to improve benthic habitat and diversity maps in complex coastal habitats. This study highlights opportunities to fill gaps that could improve spatial modeling of seafloor assemblages derived from metabarcoding data, including sources and sinks of DNA in the environment and water column properties that control its dispersal.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"28 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2026-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146122094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Big Bird: A global dataset of birds in drone imagery annotated to species level
IF 5.5 | CAS Tier 2: Environmental Science & Ecology | Q1 ECOLOGY | Pub Date: 2026-02-05 | DOI: 10.1002/rse2.70059
Joshua P. Wilson, Tatsuya Amano, Thomas Bregnballe, Alejandro Corregidor‐Castro, Roxane Francis, Diego Gallego‐García, Jarrod C. Hodgson, Landon R. Jones, César R. Luque‐Fernández, Dominik Marchowski, John McEvoy, Ann E. McKellar, W. Chris Oosthuizen, Christian Pfeifer, Martin Renner, José Hernán Sarasola, Mateo Sokač, Roberto Valle, Adam Zbyryt, Richard A. Fuller
Drones are a valuable tool for surveying birds. However, surveys are hampered by the costs of manually detecting birds in the resulting images. Researchers are using computer vision to automate this process, but efforts to date generally target a narrow context, such as a single habitat, and do not identify key attributes such as species. To address this, we collected a diverse dataset of drone‐based bird images from existing studies and our own fieldwork. We labelled the birds in these images, detailing their location, species, posture (resting, flying, or other), age (chick, juvenile, or adult), and sex (male, female, or monomorphic). To demonstrate the usefulness of this dataset, we trained a bird detection and identification computer vision model, compared its performance with manual methods, and identified the main predictors of performance. Thirty‐three researchers contributed 23 865 images, captured using 21 different cameras across 11 countries and all 7 continents. We labelled 4824 of these images, containing 49 990 birds from 101 species. Our model processed images 85 times faster than manual processing and achieved a mean average precision (mAP) of 0.91 ± 0.25 for detection and 0.65 ± 0.33 for classification of species, age, and sex. Performance was predicted by the similarity between test and train images (Estimate = 1.3248, P = 0.00021), the number of similar classes (Estimate = −0.0742, P = 0.0033), the number of train instances (Estimate = 0.0034, P = 0.1019), and the number of pixels on the bird (Estimate = 0.0002, P = 0.0462). Our drone‐based bird dataset is the most accurately labelled and biologically, environmentally, and digitally diverse to date, laying the foundation for future research. We provide it and the trained model open‐access and urge researchers to continue to work together to assemble datasets that cover broad contexts and are labelled with key conservation metrics.
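The reported relationship between class-wise performance and dataset properties is, in essence, a regression of per-class scores on predictors. A hedged sketch of that analysis step is shown below with fabricated numbers; the column names and the OLS specification are illustrative assumptions, not the authors' exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Hypothetical per-class evaluation table mirroring the predictors named in the abstract.
df = pd.DataFrame({
    "ap": rng.beta(5, 2, 101),                     # average precision per species class
    "train_test_similarity": rng.random(101),
    "n_similar_classes": rng.integers(0, 10, 101),
    "n_train_instances": rng.integers(10, 2000, 101),
    "pixels_on_bird": rng.integers(100, 5000, 101),
})

# Linear model relating class-wise AP to dataset properties, analogous to the
# estimates and P-values reported in the abstract.
model = smf.ols(
    "ap ~ train_test_similarity + n_similar_classes + n_train_instances + pixels_on_bird",
    data=df,
).fit()
print(model.summary().tables[1])
```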
{"title":"Big Bird: A global dataset of birds in drone imagery annotated to species level","authors":"Joshua P. Wilson, Tatsuya Amano, Thomas Bregnballe, Alejandro Corregidor‐Castro, Roxane Francis, Diego Gallego‐García, Jarrod C. Hodgson, Landon R. Jones, César R. Luque‐Fernández, Dominik Marchowski, John McEvoy, Ann E. McKellar, W. Chris Oosthuizen, Christian Pfeifer, Martin Renner, José Hernán Sarasola, Mateo Sokač, Roberto Valle, Adam Zbyryt, Richard A. Fuller","doi":"10.1002/rse2.70059","DOIUrl":"https://doi.org/10.1002/rse2.70059","url":null,"abstract":"Drones are a valuable tool for surveying birds. However, surveys are hampered by the costs of manually detecting birds in the resulting images. Researchers are using computer vision to automate this process, but efforts to date generally target a narrow context, such as a single habitat, and do not identify key attributes such as species. To address this, we collected a diverse dataset of drone‐based bird images from existing studies and our own fieldwork. We labelled the birds in these images, detailing their location, species, posture (resting, flying, or other), age (chick, juvenile, or adult), and sex (male, female, or monomorphic). To demonstrate the usefulness of this dataset, we trained a bird detection and identification computer vision model, compared its performance with manual methods, and identified the main predictors of performance. Thirty‐three researchers contributed 23 865 images, captured using 21 different cameras across 11 countries and all 7 continents. We labelled 4824 of these images, containing 49 990 birds from 101 species. Our model processed images 85 times faster than manual processing and achieved a mean average precision (mAP) of 0.91 ± 0.25 for detection and 0.65 ± 0.33 for classification of species, age, and sex. Performance was predicted by the similarity between test and train images (Estimate = 1.3248, <jats:italic>P</jats:italic> = 0.00021), the number of similar classes (Estimate = −0.0742, <jats:italic>P</jats:italic> = 0.0033), the number of train instances (Estimate = 0.0034, <jats:italic>P</jats:italic> = 0.1019), and the number of pixels on the bird (Estimate = 0.0002, <jats:italic>P</jats:italic> = 0.0462). Our drone‐based bird dataset is the most accurately labelled and biologically, environmentally, and digitally diverse to date, laying the foundation for future research. We provide it and the trained model open‐access and urge researchers to continue to work together to assemble datasets that cover broad contexts and are labelled with key conservation metrics.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"112 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2026-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146122093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An autonomous network of acoustic detectors to map tiger risk by eavesdropping on prey alarm calls
IF 5.5 | CAS Tier 2: Environmental Science & Ecology | Q1 ECOLOGY | Pub Date: 2026-02-04 | DOI: 10.1002/rse2.70061
Arik Kershenbaum, Andrew Markham, Holly Root‐Gutteridge, Bethany Smith, Casey Anderson, Riley McClaughry, Ramjan Chaudhary, Amogh Vishwakarma, Stephen Cummins, Angela Dassow
Tiger (Panthera tigris) attacks are a frequent source of injuries and fatalities among villagers in Nepal, where many communities make extensive use of dense forests for foraging and grazing of livestock. As conservation efforts have boosted the tiger population in the country, a conflict exists between maintaining traditional practices and ensuring human safety while protecting endangered predators. Hence, there is a need for cost‐effective management strategies that do not reduce habitat use by humans or wildlife. Passive acoustic monitoring (PAM) offers a promising approach to mapping tiger presence in real‐time and providing a warning system for villagers. Although tigers vocalize infrequently, their presence triggers alarm calls from prey species, meaning these alarm calls could potentially act as a proxy for detecting tigers. To explore the potential for tracking tigers and other dangerous predators such as leopards using these alarm calls, we designed and tested a PAM system in the Terai region of southern Nepal. We implemented a TinyML low‐memory convolutional neural network (~1000 parameters) for automatic detection of chital deer (Axis axis)—a species that reliably produces loud predator‐specific alarm calls—and deployed a distributed network of 10 autonomous interconnected sensors for continuous operation over 3 months. The network transmits chital deer alarm call events via a cellular‐connected gateway to a remote base station to generate a heatmap of predator risk. Incidences of high predator risk can be used to alert local forest rangers, who can then inform nearby villagers of areas with a higher likelihood of predator presence. The neural net achieved an F1 score of 0.91 in training and 0.72 in the field. We suggest that this proof of concept indicates that automated PAM could be an effective tool for detecting and tracking tigers and other predators and a potentially valuable tool for facilitating human‐wildlife co‐existence.
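A network of the scale described, roughly a thousand parameters, is small enough to write out directly. The sketch below is an assumed architecture in the same spirit, not the deployed model: the spectrogram input shape (64 mel bands by 32 frames), layer sizes, and training configuration are placeholders chosen only to keep the parameter count in the TinyML range.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Compact spectrogram classifier in the spirit of the ~1000-parameter TinyML model;
# the input shape and layer widths here are assumptions, not the published design.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 32, 1)),
    layers.Conv2D(4, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(8, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # probability that a clip contains a chital alarm call
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()   # parameter count stays in the low hundreds, small enough for a microcontroller
```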
{"title":"An autonomous network of acoustic detectors to map tiger risk by eavesdropping on prey alarm calls","authors":"Arik Kershenbaum, Andrew Markham, Holly Root‐Gutteridge, Bethany Smith, Casey Anderson, Riley McClaughry, Ramjan Chaudhary, Amogh Vishwakarma, Stephen Cummins, Angela Dassow","doi":"10.1002/rse2.70061","DOIUrl":"https://doi.org/10.1002/rse2.70061","url":null,"abstract":"Tiger ( <jats:italic>Panthera tigris</jats:italic> ) attacks are a frequent source of injuries and fatalities among villagers in Nepal, where many communities make extensive use of dense forests for foraging and grazing of livestock. As conservation efforts have boosted the tiger population in the country, a conflict exists between maintaining traditional practises whilst ensuring human safety and protecting endangered predators. Hence, there is a need for cost‐effective management strategies that do not reduce habitat use by humans or wildlife. Passive acoustic monitoring (PAM) offers a promising approach to mapping tiger presence in real‐time and providing a warning system for villagers. Although tigers vocalize infrequently, their presence triggers alarm calls from prey species, meaning these alarm calls could potentially act as a proxy for detecting tigers. To explore the potential for tracking tigers and other dangerous predators such as leopards using these alarm calls, we designed and tested a PAM system in the Terai region of southern Nepal. We implemented a TinyML low‐memory convolutional neural network (~1000 parameters) for chital deer ( <jats:italic>Axis axis</jats:italic> ) automatic detection—a species that reliably produce loud predator‐specific alarm calls—and deployed a distributed network of 10 autonomous interconnected sensors for continuous operation over 3 months. The network transmits chital deer alarm call events via a cellular‐connected gateway to a remote base station to generate a heatmap of predator risk. Incidences of high predator risk can be used to alert local forest rangers, who can then inform nearby villagers of areas with a higher likelihood of predator presence. The neural net achieved an F1 score of 0.91 in training and 0.72 in the field. We suggest that this proof of concept indicates that automated PAM could be an effective tool for detecting and tracking tigers and other predators and a potentially valuable tool for facilitating human‐wildlife co‐existence.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"23 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2026-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146122095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Large‐scale characterization of horizontal forest structure from remote sensing optical images
IF 5.5 | CAS Tier 2: Environmental Science & Ecology | Q1 ECOLOGY | Pub Date: 2026-01-16 | DOI: 10.1002/rse2.70058
Xin Xu, Martin Brandt, Xiaowei Tong, Maurice Mugabowindekwe, Yuemin Yue, Sizhuo Li, Qiue Xu, Siyu Liu, Florian Reiner, Kelin Wang, Zhengchao Chen, Yongqing Bai, Rasmus Fensholt
Forest structure is an essential variable in forest management and conservation, as it has a direct impact on ecosystem processes and functions. Previous remote sensing studies have primarily focused on the vertical structure of forests, which requires laser point data and may not always be suited to distinguishing plantations from old forests. Sub‐meter resolution remote sensing data and tree crown segmentation techniques hold promise in offering detailed information that can support the characterization of forest structure from a horizontal perspective, offering new insights into tree crown structure at scale. In this study, we generated a dataset with over 5 billion tree crowns and developed a Horizontal Structure Index (HSI) by analyzing spatial relationships among neighboring trees from remote sensing optical images. We first extracted the location and crown size of overstory trees from optical satellite and aerial imagery at sub‐meter resolution. We subsequently calculated the distance between tree crown centers, their angles, the crown size and crown spacing, and linked this information with individual trees. We then used principal component analysis (PCA) to condense the structural information into the HSI and tested it in China, Rwanda and Denmark. Our results showed that the HSI has the potential to distinguish monoculture plantations from other forest types, which provides insights that extend beyond metrics derived from vertical forest structure. The proposed HSI is derived directly from tree‐level attributes and supports a deeper understanding of forest structure from a horizontal perspective, complementing existing remote sensing‐based metrics.
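The HSI construction, neighbour distances, crown sizes and spacing condensed with PCA, can be illustrated compactly. The following sketch uses simulated crown centres and radii; the choice of five nearest neighbours and the exact feature set are assumptions for illustration, not the published metric definitions.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Hypothetical crown centres (x, y in metres) and crown radii for one tile of segmented trees.
xy = rng.uniform(0, 500, size=(2000, 2))
crown_r = rng.gamma(4.0, 0.8, size=2000)

# Per-tree neighbourhood metrics: mean distance to the 5 nearest crowns and crown spacing.
tree = cKDTree(xy)
dist, idx = tree.query(xy, k=6)                       # self + 5 nearest neighbours
mean_dist = dist[:, 1:].mean(axis=1)
spacing = mean_dist - (crown_r + crown_r[idx[:, 1:]].mean(axis=1))

features = np.column_stack([crown_r, mean_dist, spacing])

# Condense the structural metrics into a single index with PCA, as the HSI does.
hsi = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(features)).ravel()
print("HSI range for this tile:", hsi.min().round(2), "to", hsi.max().round(2))
```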
Citations: 0
Deep learning‐based ecological analysis of camera trap images is impacted by training data quality and quantity
IF 5.5 | CAS Tier 2: Environmental Science & Ecology | Q1 ECOLOGY | Pub Date: 2026-01-12 | DOI: 10.1002/rse2.70052
Peggy A. Bevan, Omiros Pantazis, Holly A.I. Pringle, Guilherme Braga Ferreira, Daniel J. Ingram, Emily K. Madsen, Liam Thomas, Dol Raj Thanet, Thakur Silwal, Santosh Rayamajhi, Gabriel J. Brostow, Oisin Mac Aodha, Kate E. Jones
Large image collections generated from camera traps offer valuable insights into species richness, occupancy, and activity patterns, significantly aiding biodiversity monitoring. However, the manual processing of these data sets is time‐consuming, hindering analytical processes. To address this, deep neural networks have been widely adopted to automate image labelling, but the impact of classification error on key ecological metrics remains unclear. Here, we analyze data from camera trap collections in an African savannah (82,300 labelled images, 47 species) and an Asian sub‐tropical dry forest (40,308 labelled images, 29 species) to compare ecological metrics derived from expert‐generated species identifications with those generated by deep‐learning classification models. We specifically assess the impact of deep‐learning model architecture, the proportion of label noise in the training data, and the size of the training data set on three key ecological metrics: species richness, occupancy, and activity patterns. We found that predictions of species richness derived from deep neural networks closely match those calculated from expert labels and remained resilient to up to 10% noise in the training data set (mis‐labelled images) and a 50% reduction in the training data set size. We found that our choice of deep‐learning model architecture (ResNet vs. ConvNext‐T) or depth (ResNet18, 50, 101) did not impact predicted ecological metrics. In contrast, species‐specific metrics were more sensitive; less common and visually similar species were disproportionately affected by a reduction in deep neural network accuracy, with consequences for occupancy and diel activity pattern estimates. To ensure the reliability of their findings, practitioners should prioritize creating large, clean training sets and account for class imbalance across species over exploring numerous deep‐learning model architectures.
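The core experiment, injecting label noise into the training set and checking how a community-level metric responds, is easy to emulate. The sketch below does this on synthetic features with a random forest rather than the deep networks used in the study; the noise levels and the richness proxy (number of unique predicted classes) are illustrative choices.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Hypothetical image features and species labels standing in for camera-trap data.
X = rng.normal(size=(6000, 20))
y = rng.integers(0, 15, size=6000)
X += y[:, None] * 0.3                        # make the 15 classes partly separable
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for noise in (0.0, 0.1, 0.3):
    y_noisy = y_tr.copy()
    flip = rng.random(len(y_tr)) < noise     # mislabel a fraction of training images
    y_noisy[flip] = rng.integers(0, 15, size=flip.sum())
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_noisy)
    pred = clf.predict(X_te)
    richness = len(np.unique(pred))          # community-level metric derived from predictions
    acc = (pred == y_te).mean()
    print(f"label noise {noise:.0%}: accuracy {acc:.2f}, predicted richness {richness}")
```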
Citations: 0
Evaluating land–sea linkages using land cover change and coral reef monitoring data: A case study from northeastern Puerto Rico
IF 5.5 | CAS Tier 2: Environmental Science & Ecology | Q1 ECOLOGY | Pub Date: 2026-01-06 | DOI: 10.1002/rse2.70054
Pirta Palola, Sasha Hills, Simon J. Pittman, Edwin A. Hernández‐Delgado, Antoine Collin, Lisa M. Wedding
Land cover change that leads to increased nutrient and sediment runoff is an important driver of change in coral reef ecosystems. Linking landscape change to seascape change is necessary for integrated land–sea management of coral reefs. This study explored the use of freely available satellite products to examine long‐term patterns of change across the land–sea continuum. We focused on northeastern Puerto Rico, where a widespread decline in live coral cover has occurred despite concomitant watershed reforestation that was expected to reduce land‐based threats. The aims of this study were (1) to examine whether these land–sea trends continued in 2000–2015 and (2) to assess the opportunities and limitations associated with using satellite data to inform land–sea management. We applied a Random Forest classifier on Landsat‐7 satellite imagery to assess changes in land cover and landscape development intensity, a spatial index to estimate land‐based pressure on nearshore marine ecosystems. We used field monitoring data to quantify benthic community change. We found that reforestation continued in 2000–2015 (+11%), suggesting reduced land‐based pressure on adjacent reefs in both northern (Luquillo) and eastern (Ceiba‐Fajardo) watersheds. Concomitantly, coral cover continued to decline, and a new aggressive expansion of peyssonnelid algal crust was recorded. Clustering analysis indicated that benthic monitoring sites in the same geographic regions (nearshore/offshore, north/east) followed similar community composition trajectories over time. Our results suggest that continued reforestation and the expected reduction in land‐based pressure have not been sufficient to halt coral cover decline in northeastern Puerto Rico. To improve the characterization and monitoring of the full causal chain from changes in land cover to water quality to benthic communities, advances in satellite‐based water quality mapping in optically shallow waters are needed. A strategic combination of remote sensing and targeted field surveys is required to monitor and mitigate land‐based stressors on coral reefs.
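The classification and pressure-index steps can be outlined as follows. This is a schematic sketch, not the study's processing chain: the band values are simulated rather than Landsat‐7 reflectance, the four land cover classes are invented, and the landscape development intensity coefficients are placeholders rather than the published LDI table.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)

# Hypothetical 6-band pixel spectra with training labels 0=forest, 1=developed, 2=pasture, 3=bare.
X_train = rng.normal(size=(3000, 6))
y_train = rng.integers(0, 4, size=3000)
X_train += y_train[:, None] * 0.5                     # give each class a spectral offset

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Two simulated scene dates, classified with the same model.
scene_2000 = rng.normal(size=(50_000, 6)) + rng.integers(0, 4, 50_000)[:, None] * 0.5
scene_2015 = rng.normal(size=(50_000, 6)) + rng.integers(0, 4, 50_000)[:, None] * 0.5
lc_2000, lc_2015 = clf.predict(scene_2000), clf.predict(scene_2015)

# Landscape development intensity as an area-weighted mean of per-class coefficients
# (coefficient values below are placeholders, not the published LDI table).
ldi_coeff = np.array([1.0, 8.0, 3.0, 2.0])
for year, lc in (("2000", lc_2000), ("2015", lc_2015)):
    shares = np.bincount(lc, minlength=4) / lc.size
    print(year, "class shares:", shares.round(2), "LDI:", (shares * ldi_coeff).sum().round(2))
```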
Citations: 0
Using time‐series remote sensing to identify and track individual bird nests at large scales
IF 5.5 | CAS Tier 2: Environmental Science & Ecology | Q1 ECOLOGY | Pub Date: 2025-12-22 | DOI: 10.1002/rse2.70046
S. K. Morgan Ernest, Lindsey A. Garner, Ben G. Weinstein, Peter Frederick, Henry Senyondo, Glenda M. Yenni, Ethan P. White
The challenges of monitoring wildlife often limit the scales and intensity of the data that can be collected. New technologies—such as remote sensing using unoccupied aircraft systems (UASs)—can collect information more quickly, over larger areas, and more frequently than is feasible using ground‐based methods. While airborne imaging is increasingly used to produce data on the location and counts of individuals, its ability to produce individual‐based demographic information is less explored. Repeat airborne imagery to generate an imagery time series provides the potential to track individuals over time to collect information beyond one‐off counts, but doing so necessitates automated approaches to handle the resulting high‐frequency large‐spatial scale imagery. We developed an automated time‐series remote sensing approach to identifying wading bird nests in the Everglades ecosystem of Florida, USA to explore the feasibility and challenges of conducting time‐series based remote sensing on mobile animals at large spatial scales. We combine a computer vision model for detecting birds in weekly UAS imagery of colonies with biology‐informed algorithmic rules to generate an automated approach that identifies likely nests. Comparing the performance of these automated approaches to human review of the same imagery shows that our primary approach identifies nests with comparable performance to human review, and that a secondary approach designed to find quick‐fail nests resulted in high false‐positive rates. We also assessed the ability of both human review and our primary algorithm to find ground‐verified nests in UAS imagery and again found comparable performance, with the exception of nests that fail quickly. Our results showed that automating nest detection, a key first step toward estimating nest success, is possible in complex environments like the Everglades and we discuss a number of challenges and possible uses for these types of approaches.
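The "biology‐informed algorithmic rules" step can be sketched as a persistence filter over weekly detections. The example below is an assumed simplification: detection coordinates are simulated, and the 1 m matching radius and four-week persistence threshold are illustrative values, not the rules used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(6)

# Hypothetical weekly bird detections (x, y in metres) from a CV model run on weekly
# UAS orthomosaics of one colony; nest-attending birds are re-detected near the same spot.
nests = rng.uniform(0, 200, size=(15, 2))
weeks = []
for w in range(8):
    revisits = nests + rng.normal(0, 0.5, nests.shape)        # birds attending nests
    transients = rng.uniform(0, 200, size=(40, 2))            # loafing/foraging birds
    weeks.append(np.vstack([revisits, transients]))

# Biology-informed rule (illustrative): a week-1 detection is a likely nest if some
# detection falls within 1 m of it in at least 4 of the 8 weekly surveys.
candidates = weeks[0]
persistence = np.zeros(len(candidates), dtype=int)
for pts in weeks:
    nearest_dist = cKDTree(pts).query(candidates, k=1)[0]
    persistence += (nearest_dist < 1.0).astype(int)

likely_nests = candidates[persistence >= 4]
print(f"{len(likely_nests)} likely nests recovered out of {len(nests)} simulated")
```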
Citations: 0
On the compatibility of single‐scan terrestrial LiDAR with digital photogrammetry and field inventory metrics of vegetation structure in forest and agroforestry landscapes
IF 5.5 | CAS Tier 2: Environmental Science & Ecology | Q1 ECOLOGY | Pub Date: 2025-12-13 | DOI: 10.1002/rse2.70047
Magnus Onyiriagwu, Nereoh Leley, Caleb W. T. Ngaba, Anthony Macharia, Henry Muchiri, Abdalla Kisiwa, Martin Ehbrecht, Delphine Clara Zemp
In tropical ecosystems, accurately quantifying vegetation structure is crucial to determining their capacity to deliver ecosystem services. Terrestrial laser scanning (TLS) and UAV‐based digital aerial photogrammetry (DAP) are remote sensing tools used to assess vegetation structure, but are challenging to use with conventional methods. Single‐Scan TLS and DTM‐independent DAPs are alternative scanning approaches used to describe vegetation structure; however, it remains unclear to what extent they relate to each other and how accurately they can distinguish forest structural characteristics, including vertical structure, horizontal structure, vegetation density, and structural heterogeneity. First, we quantified bivariate and multivariate correlations between equivalent/analogous structural metrics from these data sources using principal component and Procrustes analysis. We then evaluated their ability to characterize the forest and agroforestry landscapes. DAP, TLS, and Field metrics were moderately aligned for vegetation density, canopy top height, and gap dynamics, but differed in height variability and surface heterogeneity, reflecting differences in data structure. DAP and TLS achieved the highest accuracy in classifying forests and agroforestry plots, with overall accuracies of 89% and 78%, respectively. Though the field metrics were unable to resolve 3D characteristics related to heterogeneity, their capacity to distinguish the stand structure at 69% accuracy was driven by the relative pattern of its suite of metrics. The results indicate that the single‐scan TLS and DTM‐independent DAP yield meaningful descriptors of vegetation structure, which, when combined, can provide a comprehensive representation of the structure in these tropical landscapes.
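The concordance test between data sources is a Procrustes comparison of ordinations. A minimal sketch is given below with simulated plot-level metrics; the number of plots, the metric columns, and the two-component PCA are assumptions made only to show the shape of the analysis.

```python
import numpy as np
from scipy.spatial import procrustes
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# Hypothetical plot-level structural metrics from TLS and from UAV photogrammetry (DAP)
# for the same 60 plots (columns: e.g. canopy height, density, gap fraction, ...).
tls = rng.normal(size=(60, 6))
dap = 0.7 * tls[:, :5] + rng.normal(scale=0.5, size=(60, 5))   # partially correlated metrics

# Ordinate each data set, then test concordance of the ordinations with Procrustes analysis.
tls_pc = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(tls))
dap_pc = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(dap))
m1, m2, disparity = procrustes(tls_pc, dap_pc)

print(f"Procrustes disparity (0 = identical configurations): {disparity:.3f}")
```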
Citations: 0
Using phenology to improve invasive plant detection in fine‐scale hyperspectral drone‐based images
IF 5.5 | CAS Tier 2: Environmental Science & Ecology | Q1 ECOLOGY | Pub Date: 2025-12-09 | DOI: 10.1002/rse2.70049
Kelsey S. Huelsman, Howard E. Epstein, Xi Yang, Roderick Walker
Mapping and managing invasive plants are top priorities for land managers, but traditional approaches are time and labor‐intensive. To improve detection efforts, we explored the effectiveness of hyperspectral, drone‐based detection algorithms that incorporate phenology. We collected fine‐resolution (3 cm) hyperspectral images using a drone equipped with a Nano‐Hyperspec imager on seven dates from April to November, 2020 and then used a subsample of pixels from the images to develop multitemporal detection algorithms for three invasive plant species within heterogeneous vegetation communities. The three species are invasive in much of the U.S. and in Virginia, where the data were collected: Ailanthus altissima (tree of heaven), Elaeagnus umbellata (autumn olive), and Rhamnus davurica (Dahurian buckthorn). We determined when each species could be accurately detected, what spectral features allowed for detection, and the consistency of those features over a growing season. All three species could be detected in June. Only E. umbellata had consistently accurate algorithms and used consistent features in the visible and red edge across the growing season. Its most accurate detection algorithms in the summer included features in the yellow‐orange spectral region. A. altissima and R. davurica were both detectable in the mid‐ and late‐growing seasons, with little overlap in key spectral features across dates. Our results indicate that even a small subset of data from hyperspectral imagery can be used to accurately detect invasive plants in heterogeneous plant communities, and that incorporating species‐specific phenological traits into detection algorithms improves detection, laying methodological and theoretical groundwork for the future of invasive species management.
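The multitemporal detection idea, training a separate classifier per acquisition date and comparing accuracies across the season, can be sketched as below. The spectra are simulated and the per-date separability values are invented to mimic a mid-season peak; the band indices and the random forest classifier are placeholders, not the algorithms developed in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)

# Hypothetical pixel spectra (270 bands) sampled from drone imagery on several dates,
# labelled 1 = target invasive species, 0 = other vegetation.
dates = ["April", "June", "August", "November"]
separability = {"April": 0.2, "June": 1.0, "August": 0.7, "November": 0.3}

for date in dates:
    X = rng.normal(size=(400, 270))
    y = rng.integers(0, 2, size=400)
    X[y == 1, 60:90] += separability[date]    # stronger spectral signal in mid-season
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{date:>9}: cross-validated detection accuracy {acc:.2f}")
```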
Citations: 0