Zhongqi Chen, Timm Haucke, Sara Beery, Keven Bennett, Austin Powell, Lydia Zuehsow, Robert Vincent, Linda Deegan
Monitoring fish movement is essential for understanding population dynamics, informing conservation efforts and supporting fisheries management. Traditional methods, such as visual observations by volunteers, are constrained by time limitations, environmental conditions and labour intensity. Recent advancements in computer vision (CV) and deep learning offer promising solutions for automating fish counting from underwater videos, improving efficiency and data resolution. In this study, we developed and applied a deep learning‐based CV system to monitor river herring (Alosa spp.) migration, covering all essential steps, from field camera deployment and video annotation to model training and in‐season population counting. We assessed the labelling and training effort required to achieve good model performance and explored the use of importance sampling to correct biases in CV‐based fish counts. Our results demonstrated that CV models trained on a single site and year showed limited generalization to sites or years unseen during training, while models trained on more diverse labelled data generalized better. We also found that the amount of annotation required is related to dataset complexity. When applied to in‐season fish counting, CV efficiently processed season‐long datasets and produced counts consistent with human review, with moderate differences during migration pulses that can be corrected by importance sampling. By providing continuous, high‐resolution monitoring throughout the entire migration season, CV counts offer more reliable run size estimates and greater insight into the spawning migration of river herring. This study demonstrates a scalable, cost‐effective and efficient approach with significant potential for addressing complex ecological questions and supporting conservation strategies and resource management.
{"title":"From snapshots to continuous estimates: Augmenting citizen science with computer vision for fish monitoring","authors":"Zhongqi Chen, Timm Haucke, Sara Beery, Keven Bennett, Austin Powell, Lydia Zuehsow, Robert Vincent, Linda Deegan","doi":"10.1002/rse2.70055","DOIUrl":"https://doi.org/10.1002/rse2.70055","url":null,"abstract":"Monitoring fish movement is essential for understanding population dynamics, informing conservation efforts and supporting fisheries management. Traditional methods, such as visual observations by volunteers, are constrained by time limitations, environmental conditions and labour intensity. Recent advancements in computer vision (CV) and deep learning offer promising solutions for automating fish counting from underwater videos, improving efficiency and data resolution. In this study, we developed and applied a deep learning‐based CV system to monitor river herring ( <jats:italic>Alosa</jats:italic> spp.) migration, covering all essential steps from field camera deployment, video annotation to model training and in‐season population counting. We assessed the labelling and training efforts required to achieve good model performance and explored the use of importance sampling to correct biases in CV‐based fish counts. Our results demonstrated that CV models trained on a single site and year showed limited generalization to sites or years unseen during training, while models trained on more diverse labelled data generalized better. We also found that the amount of annotations required is related to dataset complexity. When applied for in‐season fish counting, CV efficiently processed season‐long datasets and produced counts consistent with human review, with some moderate differences under migration pulses that can be adjusted by importance sampling. By providing continuous, high‐resolution monitoring throughout the entire migration season, CV counts offer more reliable run size estimates and greater insight into the spawning migration of river herring. This study demonstrates a scalable, cost‐effective and efficient approach with significant potential for addressing complex ecological questions and supporting conservation strategies and resource management.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"51 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2026-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146122092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rylan J. Command, Shreya Nemani, Benjamin Misiuk, Mehrdad Hajibabaei, Nicole Fahner, Emily Porter, Greg Singer, Beverly McClenaghan, Katleen Robert
Complex coastal seascapes harbor high marine biodiversity from which humans derive numerous ecosystem services. Maps of benthic habitats are important tools used to inform coastal development and conservation efforts. Seafloor imagery is commonly used to collect information about the distribution of benthic organisms, but these data are often limited to coarse taxonomic resolution and may systematically underrepresent local biodiversity. Recent advances in genomics enable rapid and accurate detection of taxa at high taxonomic resolution from environmental DNA (eDNA) extracted from water samples, but there are few examples of broad‐spectrum eDNA biodiversity data in nearshore benthic habitat mapping. We combined an eDNA‐based biodiversity assessment with concurrently collected high‐resolution video ground‐truth data to assess the benefit of metabarcoding data for improving benthic habitat mapping in the sub‐Arctic coastal embayment of Mortier Bay, Newfoundland and Labrador, Canada. Features derived from acoustic bathymetry and backscatter data were used to develop full‐coverage habitat and biodiversity maps using a joint species distribution‐modeling framework. Predicted spatial patterns of taxonomic richness were similar between the video‐only, eDNA‐only and combined datasets, suggesting diversity patterns were accurately represented by both methods. However, 226 additional taxa (72 species, 109 genera) were identified using eDNA compared to the 46 detected by video ground‐truthing. Averaged over all taxa, the video‐only model performed best in terms of discriminating presences from absences; however, we found that most sessile taxa were better predicted by the combined dataset than by video data alone. These results highlight the limitations of imagery‐only datasets for biodiversity surveys and demonstrate the utility and limitations of metabarcoding data for improving benthic habitat and diversity maps in complex coastal habitats. This study highlights opportunities to fill gaps that could improve spatial modeling of seafloor assemblages derived from metabarcoding data, including sources and sinks of DNA in the environment and the water column properties that control its dispersal.
{"title":"Incorporating environmental DNA metabarcoding for improved benthic biodiversity and habitat mapping","authors":"Rylan J. Command, Shreya Nemani, Benjamin Misiuk, Mehrdad Hajibabaei, Nicole Fahner, Emily Porter, Greg Singer, Beverly McClenaghan, Katleen Robert","doi":"10.1002/rse2.70048","DOIUrl":"https://doi.org/10.1002/rse2.70048","url":null,"abstract":"Complex coastal seascapes harbor high marine biodiversity from which humans derive numerous ecosystem services. Maps of benthic habitats are important tools used to inform coastal development and conservation efforts. Seafloor imagery is commonly used to collect information about the distribution of benthic organisms, but these data are often limited to low taxonomic resolutions and may systematically underrepresent local biodiversity. Recent advances in genomics enable rapid and accurate detection of taxa with high taxonomic resolution from environmental DNA (eDNA) extracted from water samples, but there are few examples of broad‐spectrum eDNA biodiversity data in nearshore benthic habitat mapping. We combined an eDNA‐based biodiversity assessment with concurrently collected high‐resolution video ground‐truth data to assess the benefit of metabarcoding data for improving benthic habitat mapping in the sub‐Arctic coastal embayment of Mortier Bay, Newfoundland and Labrador, Canada. Features derived from acoustic bathymetry and backscatter data were used to develop full‐coverage habitat and biodiversity maps using a joint species distribution‐modeling framework. The predicted taxonomic richness spatial patterns were similar between video‐only, eDNA‐only and combined datasets, suggesting diversity patterns were accurately represented by both methods. However, 226 additional taxa (72 species, 109 genera) were identified using eDNA compared to the 46 detected by video ground‐truthing. Averaged over all taxa, the video‐only model performed best in terms of discriminating presences from absences; however, we found that most sessile taxa were better predicted by the combined dataset compared to video data alone. These results highlight the limitations of imagery‐only datasets for biodiversity surveys and demonstrate the utility and limitations of metabarcoding data to improve benthic habitat and diversity maps in complex coastal habitats. This study highlights opportunities to fill gaps that could improve spatial modeling of seafloor assemblages derived from metabarcoding data, including sources and sinks of DNA in the environment and water column properties that control its dispersal.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"28 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2026-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146122094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Joshua P. Wilson, Tatsuya Amano, Thomas Bregnballe, Alejandro Corregidor‐Castro, Roxane Francis, Diego Gallego‐García, Jarrod C. Hodgson, Landon R. Jones, César R. Luque‐Fernández, Dominik Marchowski, John McEvoy, Ann E. McKellar, W. Chris Oosthuizen, Christian Pfeifer, Martin Renner, José Hernán Sarasola, Mateo Sokač, Roberto Valle, Adam Zbyryt, Richard A. Fuller
Drones are a valuable tool for surveying birds. However, surveys are hampered by the costs of manually detecting birds in the resulting images. Researchers are using computer vision to automate this process, but efforts to date generally target a narrow context, such as a single habitat, and do not identify key attributes such as species. To address this, we collected a diverse dataset of drone‐based bird images from existing studies and our own fieldwork. We labelled the birds in these images, detailing their location, species, posture (resting, flying, or other), age (chick, juvenile, or adult), and sex (male, female, or monomorphic). To demonstrate the usefulness of this dataset, we trained a bird detection and identification computer vision model, compared its performance with manual methods, and identified the main predictors of performance. Thirty‐three researchers contributed 23 865 images, captured using 21 different cameras across 11 countries and all 7 continents. We labelled 4824 of these images, containing 49 990 birds from 101 species. Our model processed images 85 times faster than manual processing and achieved a mean average precision (mAP) of 0.91 ± 0.25 for detection and 0.65 ± 0.33 for classification of species, age, and sex. Performance was predicted by the similarity between test and training images (Estimate = 1.3248, P = 0.00021), the number of similar classes (Estimate = −0.0742, P = 0.0033), the number of training instances (Estimate = 0.0034, P = 0.1019), and the number of pixels on the bird (Estimate = 0.0002, P = 0.0462). Our drone‐based bird dataset is the most accurately labelled and biologically, environmentally, and digitally diverse to date, laying the foundation for future research. We provide the dataset and the trained model open access and urge researchers to continue to work together to assemble datasets that cover broad contexts and are labelled with key conservation metrics.
{"title":"Big Bird: A global dataset of birds in drone imagery annotated to species level","authors":"Joshua P. Wilson, Tatsuya Amano, Thomas Bregnballe, Alejandro Corregidor‐Castro, Roxane Francis, Diego Gallego‐García, Jarrod C. Hodgson, Landon R. Jones, César R. Luque‐Fernández, Dominik Marchowski, John McEvoy, Ann E. McKellar, W. Chris Oosthuizen, Christian Pfeifer, Martin Renner, José Hernán Sarasola, Mateo Sokač, Roberto Valle, Adam Zbyryt, Richard A. Fuller","doi":"10.1002/rse2.70059","DOIUrl":"https://doi.org/10.1002/rse2.70059","url":null,"abstract":"Drones are a valuable tool for surveying birds. However, surveys are hampered by the costs of manually detecting birds in the resulting images. Researchers are using computer vision to automate this process, but efforts to date generally target a narrow context, such as a single habitat, and do not identify key attributes such as species. To address this, we collected a diverse dataset of drone‐based bird images from existing studies and our own fieldwork. We labelled the birds in these images, detailing their location, species, posture (resting, flying, or other), age (chick, juvenile, or adult), and sex (male, female, or monomorphic). To demonstrate the usefulness of this dataset, we trained a bird detection and identification computer vision model, compared its performance with manual methods, and identified the main predictors of performance. Thirty‐three researchers contributed 23 865 images, captured using 21 different cameras across 11 countries and all 7 continents. We labelled 4824 of these images, containing 49 990 birds from 101 species. Our model processed images 85 times faster than manual processing and achieved a mean average precision (mAP) of 0.91 ± 0.25 for detection and 0.65 ± 0.33 for classification of species, age, and sex. Performance was predicted by the similarity between test and train images (Estimate = 1.3248, <jats:italic>P</jats:italic> = 0.00021), the number of similar classes (Estimate = −0.0742, <jats:italic>P</jats:italic> = 0.0033), the number of train instances (Estimate = 0.0034, <jats:italic>P</jats:italic> = 0.1019), and the number of pixels on the bird (Estimate = 0.0002, <jats:italic>P</jats:italic> = 0.0462). Our drone‐based bird dataset is the most accurately labelled and biologically, environmentally, and digitally diverse to date, laying the foundation for future research. We provide it and the trained model open‐access and urge researchers to continue to work together to assemble datasets that cover broad contexts and are labelled with key conservation metrics.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"112 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2026-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146122093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Arik Kershenbaum, Andrew Markham, Holly Root‐Gutteridge, Bethany Smith, Casey Anderson, Riley McClaughry, Ramjan Chaudhary, Amogh Vishwakarma, Stephen Cummins, Angela Dassow
Tiger (Panthera tigris) attacks are a frequent source of injuries and fatalities among villagers in Nepal, where many communities make extensive use of dense forests for foraging and grazing of livestock. As conservation efforts have boosted the tiger population in the country, a conflict exists between maintaining traditional practices and ensuring both human safety and the protection of endangered predators. Hence, there is a need for cost‐effective management strategies that do not reduce habitat use by humans or wildlife. Passive acoustic monitoring (PAM) offers a promising approach to mapping tiger presence in real time and providing a warning system for villagers. Although tigers vocalize infrequently, their presence triggers alarm calls from prey species, meaning these alarm calls could potentially act as a proxy for detecting tigers. To explore the potential for tracking tigers and other dangerous predators such as leopards using these alarm calls, we designed and tested a PAM system in the Terai region of southern Nepal. We implemented a TinyML low‐memory convolutional neural network (~1000 parameters) for automatic detection of chital deer (Axis axis)—a species that reliably produces loud, predator‐specific alarm calls—and deployed a distributed network of 10 autonomous interconnected sensors for continuous operation over 3 months. The network transmits chital deer alarm call events via a cellular‐connected gateway to a remote base station to generate a heatmap of predator risk. Incidences of high predator risk can be used to alert local forest rangers, who can then inform nearby villagers of areas with a higher likelihood of predator presence. The neural network achieved an F1 score of 0.91 in training and 0.72 in the field. We suggest that this proof of concept indicates that automated PAM could be an effective tool for detecting and tracking tigers and other predators and a potentially valuable tool for facilitating human‐wildlife co‐existence.
{"title":"An autonomous network of acoustic detectors to map tiger risk by eavesdropping on prey alarm calls","authors":"Arik Kershenbaum, Andrew Markham, Holly Root‐Gutteridge, Bethany Smith, Casey Anderson, Riley McClaughry, Ramjan Chaudhary, Amogh Vishwakarma, Stephen Cummins, Angela Dassow","doi":"10.1002/rse2.70061","DOIUrl":"https://doi.org/10.1002/rse2.70061","url":null,"abstract":"Tiger ( <jats:italic>Panthera tigris</jats:italic> ) attacks are a frequent source of injuries and fatalities among villagers in Nepal, where many communities make extensive use of dense forests for foraging and grazing of livestock. As conservation efforts have boosted the tiger population in the country, a conflict exists between maintaining traditional practises whilst ensuring human safety and protecting endangered predators. Hence, there is a need for cost‐effective management strategies that do not reduce habitat use by humans or wildlife. Passive acoustic monitoring (PAM) offers a promising approach to mapping tiger presence in real‐time and providing a warning system for villagers. Although tigers vocalize infrequently, their presence triggers alarm calls from prey species, meaning these alarm calls could potentially act as a proxy for detecting tigers. To explore the potential for tracking tigers and other dangerous predators such as leopards using these alarm calls, we designed and tested a PAM system in the Terai region of southern Nepal. We implemented a TinyML low‐memory convolutional neural network (~1000 parameters) for chital deer ( <jats:italic>Axis axis</jats:italic> ) automatic detection—a species that reliably produce loud predator‐specific alarm calls—and deployed a distributed network of 10 autonomous interconnected sensors for continuous operation over 3 months. The network transmits chital deer alarm call events via a cellular‐connected gateway to a remote base station to generate a heatmap of predator risk. Incidences of high predator risk can be used to alert local forest rangers, who can then inform nearby villagers of areas with a higher likelihood of predator presence. The neural net achieved an F1 score of 0.91 in training and 0.72 in the field. We suggest that this proof of concept indicates that automated PAM could be an effective tool for detecting and tracking tigers and other predators and a potentially valuable tool for facilitating human‐wildlife co‐existence.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"23 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2026-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146122095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xin Xu, Martin Brandt, Xiaowei Tong, Maurice Mugabowindekwe, Yuemin Yue, Sizhuo Li, Qiue Xu, Siyu Liu, Florian Reiner, Kelin Wang, Zhengchao Chen, Yongqing Bai, Rasmus Fensholt
Forest structure is an essential variable in forest management and conservation, as it has a direct impact on ecosystem processes and functions. Previous remote sensing studies have primarily focused on the vertical structure of forests, which requires LiDAR point data and may not always be suited to distinguishing plantations from old forests. Sub‐meter resolution remote sensing data and tree crown segmentation techniques hold promise in offering detailed information that can support the characterization of forest structure from a horizontal perspective, offering new insights into tree crown structure at scale. In this study, we generated a dataset with over 5 billion tree crowns and developed a Horizontal Structure Index (HSI) by analyzing spatial relationships among neighboring trees in optical remote sensing images. We first extracted the location and crown size of overstory trees from optical satellite and aerial imagery at sub‐meter resolution. We subsequently calculated the distance between tree crown centers, their angles, the crown size and crown spacing, and linked this information with individual trees. We then used principal component analysis (PCA) to condense the structural information into the HSI and tested it in China, Rwanda and Denmark. Our results showed that the HSI has the potential to distinguish monoculture plantations from other forest types, providing insights that extend beyond metrics derived from vertical forest structure. The proposed HSI is derived directly from tree‐level attributes and supports a deeper understanding of forest structure from a horizontal perspective, complementing existing remote sensing‐based metrics.
{"title":"Large‐scale characterization of horizontal forest structure from remote sensing optical images","authors":"Xin Xu, Martin Brandt, Xiaowei Tong, Maurice Mugabowindekwe, Yuemin Yue, Sizhuo Li, Qiue Xu, Siyu Liu, Florian Reiner, Kelin Wang, Zhengchao Chen, Yongqing Bai, Rasmus Fensholt","doi":"10.1002/rse2.70058","DOIUrl":"https://doi.org/10.1002/rse2.70058","url":null,"abstract":"Forest structure is an essential variable in forest management and conservation, as it has a direct impact on ecosystem processes and functions. Previous remote sensing studies have primarily focused on the vertical structure of forests, which requires laser point data and may not always be suited to distinguish plantations from old forests. Sub‐meter resolution remote sensing data and tree crown segmentation techniques hold promise in offering detailed information that can support the characterization of forest structure from a horizontal perspective, offering new insights in the tree crown structure at scale. In this study, we generated a dataset with over 5 billion tree crowns and developed a Horizontal Structure Index (HSI) by analyzing spatial relationships among neighboring trees from remote sensing optical images. We first extracted the location and crown size of overstory trees from optical satellite and aerial imagery at sub‐meter resolution. We subsequently calculated the distance between tree crown centers, their angles, the crown size and crown spacing, and linked this information with individual trees. We then used principal component analysis (PCA) to condense the structural information into the HSI and tested it in China, Rwanda and Denmark. Our result showed that the HSI has the potential to distinguish monoculture plantations from other forest types, which provides insights that extend beyond metrics derived from vertical forest structure. The proposed HSI is derived directly from tree‐level attributes and supports a deeper understanding of forest structure from a horizontal perspective, complementing existing remote sensing‐based metrics.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"45 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2026-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145993210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Peggy A. Bevan, Omiros Pantazis, Holly A.I. Pringle, Guilherme Braga Ferreira, Daniel J. Ingram, Emily K. Madsen, Liam Thomas, Dol Raj Thanet, Thakur Silwal, Santosh Rayamajhi, Gabriel J. Brostow, Oisin Mac Aodha, Kate E. Jones
Large image collections generated from camera traps offer valuable insights into species richness, occupancy, and activity patterns, significantly aiding biodiversity monitoring. However, the manual processing of these data sets is time‐consuming, hindering analytical processes. To address this, deep neural networks have been widely adopted to automate image labelling, but the impact of classification error on key ecological metrics remains unclear. Here, we analyze data from camera trap collections in an African savannah (82,300 labelled images, 47 species) and an Asian sub‐tropical dry forest (40,308 labelled images, 29 species) to compare ecological metrics derived from expert‐generated species identifications with those generated by deep‐learning classification models. We specifically assess the impact of deep‐learning model architecture, the proportion of label noise in the training data, and the size of the training data set on three key ecological metrics: species richness, occupancy, and activity patterns. We found that predictions of species richness derived from deep neural networks closely match those calculated from expert labels and remained resilient to up to 10% noise in the training data set (mis‐labelled images) and a 50% reduction in the training data set size. We found that the choice of deep‐learning model architecture (ResNet vs. ConvNeXt‐T) or depth (ResNet‐18, 50, 101) did not impact predicted ecological metrics. In contrast, species‐specific metrics were more sensitive; less common and visually similar species were disproportionately affected by a reduction in deep neural network accuracy, with consequences for occupancy and diel activity pattern estimates. To ensure the reliability of their findings, practitioners should prioritize creating large, clean training sets and accounting for class imbalance across species over exploring numerous deep‐learning model architectures.
{"title":"Deep learning‐based ecological analysis of camera trap images is impacted by training data quality and quantity","authors":"Peggy A. Bevan, Omiros Pantazis, Holly A.I. Pringle, Guilherme Braga Ferreira, Daniel J. Ingram, Emily K. Madsen, Liam Thomas, Dol Raj Thanet, Thakur Silwal, Santosh Rayamajhi, Gabriel J. Brostow, Oisin Mac Aodha, Kate E. Jones","doi":"10.1002/rse2.70052","DOIUrl":"https://doi.org/10.1002/rse2.70052","url":null,"abstract":"Large image collections generated from camera traps offer valuable insights into species richness, occupancy, and activity patterns, significantly aiding biodiversity monitoring. However, the manual processing of these data sets is time‐consuming, hindering analytical processes. To address this, deep neural networks have been widely adopted to automate image labelling, but the impact of classification error on key ecological metrics remains unclear. Here, we analyze data from camera trap collections in an African savannah (82,300 labelled images, 47 species) and an Asian sub‐tropical dry forest (40,308 labelled images, 29 species) to compare ecological metrics derived from expert‐generated species identifications with those generated by deep‐learning classification models. We specifically assess the impact of deep‐ learning model architecture, the proportion of label noise in the training data, and the size of the training data set on three key ecological metrics: species richness, occupancy, and activity patterns. We found that predictions of species richness derived from deep neural networks closely match those calculated from expert labels and remained resilient to up to 10% noise in the training data set (mis‐labelled images) and a 50% reduction in the training data set size. We found that our choice of deep‐learning model architecture (ResNet vs. ConvNext‐T) or depth (ResNet18, 50, 101) did not impact predicted ecological metrics. In contrast, species‐specific metrics were more sensitive; less common and visually similar species were disproportionately affected by a reduction in deep neural network accuracy, with consequences for occupancy and diel activity pattern estimates. To ensure the reliability of their findings, practitioners should prioritize creating large, clean training sets and account for class imbalance across species over exploring numerous deep‐learning model architectures.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"88 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145955812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pirta Palola, Sasha Hills, Simon J. Pittman, Edwin A. Hernández‐Delgado, Antoine Collin, Lisa M. Wedding
Land cover change that leads to increased nutrient and sediment runoff is an important driver of change in coral reef ecosystems. Linking landscape change to seascape change is necessary for integrated land–sea management of coral reefs. This study explored the use of freely available satellite products to examine long‐term patterns of change across the land–sea continuum. We focused on northeastern Puerto Rico, where a widespread decline in live coral cover has occurred despite concomitant watershed reforestation that was expected to reduce land‐based threats. The aims of this study were (1) to examine whether these land–sea trends continued in 2000–2015 and (2) to assess the opportunities and limitations associated with using satellite data to inform land–sea management. We applied a Random Forest classifier to Landsat‐7 satellite imagery to assess changes in land cover and landscape development intensity, a spatial index that estimates land‐based pressure on nearshore marine ecosystems. We used field monitoring data to quantify benthic community change. We found that reforestation continued in 2000–2015 (+11%), suggesting reduced land‐based pressure on adjacent reefs in both northern (Luquillo) and eastern (Ceiba‐Fajardo) watersheds. Concomitantly, coral cover continued to decline, and a new, aggressive expansion of peyssonnelid algal crust was recorded. Clustering analysis indicated that benthic monitoring sites in the same geographic regions (nearshore/offshore, north/east) followed similar community composition trajectories over time. Our results suggest that continued reforestation and the expected reduction in land‐based pressure have not been sufficient to halt coral cover decline in northeastern Puerto Rico. To improve the characterization and monitoring of the full causal chain from changes in land cover to water quality to benthic communities, advances in satellite‐based water quality mapping in optically shallow waters are needed. A strategic combination of remote sensing and targeted field surveys is required to monitor and mitigate land‐based stressors on coral reefs.
{"title":"Evaluating land–sea linkages using land cover change and coral reef monitoring data: A case study from northeastern Puerto Rico","authors":"Pirta Palola, Sasha Hills, Simon J. Pittman, Edwin A. Hernández‐Delgado, Antoine Collin, Lisa M. Wedding","doi":"10.1002/rse2.70054","DOIUrl":"https://doi.org/10.1002/rse2.70054","url":null,"abstract":"Land cover change that leads to increased nutrient and sediment runoff is an important driver of change in coral reef ecosystems. Linking landscape change to seascape change is necessary for integrated land–sea management of coral reefs. This study explored the use of freely available satellite products to examine long‐term patterns of change across the land–sea continuum. We focused on northeastern Puerto Rico, where a widespread decline in live coral cover has occurred despite concomitant watershed reforestation that was expected to reduce land‐based threats. The aims of this study were (1) to examine whether these land–sea trends continued in 2000–2015 and (2) to assess the opportunities and limitations associated with using satellite data to inform land–sea management. We applied a Random Forest classifier on Landsat‐7 satellite imagery to assess changes in land cover and landscape development intensity, a spatial index to estimate land‐based pressure on nearshore marine ecosystems. We used field monitoring data to quantify benthic community change. We found that reforestation continued in 2000–2015 (+11%), suggesting reduced land‐based pressure on adjacent reefs in both northern (Luquillo) and eastern (Ceiba‐Fajardo) watersheds. Concomitantly, coral cover continued to decline, and a new aggressive expansion of peyssonnelid algal crust was recorded. Clustering analysis indicated that benthic monitoring sites in the same geographic regions (nearshore/offshore, north/east) followed similar community composition trajectories over time. Our results suggest that continued reforestation and the expected reduction in land‐based pressure have not been sufficient to halt coral cover decline in northeastern Puerto Rico. To improve the characterization and monitoring of the full causal chain from changes in land cover to water quality to benthic communities, advances in satellite‐based water quality mapping in optically shallow waters are needed. A strategic combination of remote sensing and targeted field surveys is required to monitor and mitigate land‐based stressors on coral reefs.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"18 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2026-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145902907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. K. Morgan Ernest, Lindsey A. Garner, Ben G. Weinstein, Peter Frederick, Henry Senyondo, Glenda M. Yenni, Ethan P. White
The challenges of monitoring wildlife often limit the scales and intensity of the data that can be collected. New technologies—such as remote sensing using unoccupied aircraft systems (UASs)—can collect information more quickly, over larger areas, and more frequently than is feasible using ground‐based methods. While airborne imaging is increasingly used to produce data on the location and counts of individuals, its ability to produce individual‐based demographic information is less explored. Repeat airborne imagery that generates an imagery time series provides the potential to track individuals over time and collect information beyond one‐off counts, but doing so necessitates automated approaches to handle the resulting high‐frequency, large‐spatial‐scale imagery. We developed an automated time‐series remote sensing approach to identifying wading bird nests in the Everglades ecosystem of Florida, USA, to explore the feasibility and challenges of conducting time‐series‐based remote sensing of mobile animals at large spatial scales. We combine a computer vision model for detecting birds in weekly UAS imagery of colonies with biology‐informed algorithmic rules to generate an automated approach that identifies likely nests. Comparing the performance of these automated approaches to human review of the same imagery shows that our primary approach identifies nests with performance comparable to human review, and that a secondary approach designed to find quick‐fail nests resulted in high false‐positive rates. We also assessed the ability of both human review and our primary algorithm to find ground‐verified nests in UAS imagery and again found comparable performance, with the exception of nests that fail quickly. Our results showed that automating nest detection, a key first step toward estimating nest success, is possible in complex environments like the Everglades, and we discuss a number of challenges and possible uses for these types of approaches.
{"title":"Using time‐series remote sensing to identify and track individual bird nests at large scales","authors":"S. K. Morgan Ernest, Lindsey A. Garner, Ben G. Weinstein, Peter Frederick, Henry Senyondo, Glenda M. Yenni, Ethan P. White","doi":"10.1002/rse2.70046","DOIUrl":"https://doi.org/10.1002/rse2.70046","url":null,"abstract":"The challenges of monitoring wildlife often limit the scales and intensity of the data that can be collected. New technologies—such as remote sensing using unoccupied aircraft systems (UASs)—can collect information more quickly, over larger areas, and more frequently than is feasible using ground‐based methods. While airborne imaging is increasingly used to produce data on the location and counts of individuals, its ability to produce individual‐based demographic information is less explored. Repeat airborne imagery to generate an imagery time series provides the potential to track individuals over time to collect information beyond one‐off counts, but doing so necessitates automated approaches to handle the resulting high‐frequency large‐spatial scale imagery. We developed an automated time‐series remote sensing approach to identifying wading bird nests in the Everglades ecosystem of Florida, USA to explore the feasibility and challenges of conducting time‐series based remote sensing on mobile animals at large spatial scales. We combine a computer vision model for detecting birds in weekly UAS imagery of colonies with biology‐informed algorithmic rules to generate an automated approach that identifies likely nests. Comparing the performance of these automated approaches to human review of the same imagery shows that our primary approach identifies nests with comparable performance to human review, and that a secondary approach designed to find quick‐fail nests resulted in high false‐positive rates. We also assessed the ability of both human review and our primary algorithm to find ground‐verified nests in UAS imagery and again found comparable performance, with the exception of nests that fail quickly. Our results showed that automating nest detection, a key first step toward estimating nest success, is possible in complex environments like the Everglades and we discuss a number of challenges and possible uses for these types of approaches.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"16 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145801312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Magnus Onyiriagwu, Nereoh Leley, Caleb W. T. Ngaba, Anthony Macharia, Henry Muchiri, Abdalla Kisiwa, Martin Ehbrecht, Delphine Clara Zemp
In tropical ecosystems, accurately quantifying vegetation structure is crucial to determining their capacity to deliver ecosystem services. Terrestrial laser scanning (TLS) and UAV‐based digital aerial photogrammetry (DAP) are remote sensing tools used to assess vegetation structure, but they are challenging to deploy using conventional acquisition methods. Single‐scan TLS and DTM‐independent DAP are alternative scanning approaches used to describe vegetation structure; however, it remains unclear to what extent they relate to each other and how accurately they can distinguish forest structural characteristics, including vertical structure, horizontal structure, vegetation density, and structural heterogeneity. First, we quantified bivariate and multivariate correlations between equivalent or analogous structural metrics from these data sources using principal component and Procrustes analyses. We then evaluated their ability to characterize forest and agroforestry landscapes. DAP, TLS, and field metrics were moderately aligned for vegetation density, canopy top height, and gap dynamics, but differed in height variability and surface heterogeneity, reflecting differences in data structure. DAP and TLS achieved the highest accuracy in classifying forest and agroforestry plots, with overall accuracies of 89% and 78%, respectively. Though the field metrics were unable to resolve 3D characteristics related to heterogeneity, their capacity to distinguish stand structure at 69% accuracy was driven by the relative pattern of the suite of metrics. The results indicate that single‐scan TLS and DTM‐independent DAP yield meaningful descriptors of vegetation structure, which, when combined, can provide a comprehensive representation of structure in these tropical landscapes.
{"title":"On the compatibility of single‐scan terrestrial LiDAR with digital photogrammetry and field inventory metrics of vegetation structure in forest and agroforestry landscapes","authors":"Magnus Onyiriagwu, Nereoh Leley, Caleb W. T. Ngaba, Anthony Macharia, Henry Muchiri, Abdalla Kisiwa, Martin Ehbrecht, Delphine Clara Zemp","doi":"10.1002/rse2.70047","DOIUrl":"https://doi.org/10.1002/rse2.70047","url":null,"abstract":"In tropical ecosystems, accurately quantifying vegetation structure is crucial to determining their capacity to deliver ecosystem services. Terrestrial laser scanning (TLS) and UAV‐based digital aerial photogrammetry (DAP) are remote sensing tools used to assess vegetation structure, but are challenging to use with conventional methods. Single‐Scan TLS and DTM‐independent DAPs are alternative scanning approaches used to describe vegetation structure; however, it remains unclear to what extent they relate to each other and how accurately they can distinguish forest structural characteristics, including vertical structure, horizontal structure, vegetation density, and structural heterogeneity. First, we quantified bivariate and multivariate correlations between equivalent/analogous structural metrics from these data sources using principal component and Procrustes analysis. We then evaluated their ability to characterize the forest and agroforestry landscapes. DAP, TLS, and Field metrics were moderately aligned for vegetation density, canopy top height, and gap dynamics, but differed in height variability and surface heterogeneity, reflecting differences in data structure. DAP and TLS achieved the highest accuracy in classifying forests and agroforestry plots, with overall accuracies of 89% and 78%, respectively. Though the field metrics were unable to resolve 3D characteristics related to heterogeneity, their capacity to distinguish the stand structure at 69% accuracy was driven by the relative pattern of its suite of metrics. The results indicate that the single‐scan TLS and DTM‐independent DAP yield meaningful descriptors of vegetation structure, which, when combined, can provide a comprehensive representation of the structure in these tropical landscapes.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"93 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2025-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145746672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kelsey S. Huelsman, Howard E. Epstein, Xi Yang, Roderick Walker
Mapping and managing invasive plants are top priorities for land managers, but traditional approaches are time‐ and labor‐intensive. To improve detection efforts, we explored the effectiveness of hyperspectral, drone‐based detection algorithms that incorporate phenology. We collected fine‐resolution (3 cm) hyperspectral images using a drone equipped with a Nano‐Hyperspec imager on seven dates from April to November 2020, and then used a subsample of pixels from the images to develop multitemporal detection algorithms for three invasive plant species within heterogeneous vegetation communities. The three species are invasive in much of the U.S. and in Virginia, where the data were collected: Ailanthus altissima (tree of heaven), Elaeagnus umbellata (autumn olive), and Rhamnus davurica (Dahurian buckthorn). We determined when each species could be accurately detected, which spectral features allowed for detection, and the consistency of those features over a growing season. All three species could be detected in June. Only E. umbellata had consistently accurate algorithms and used consistent features in the visible and red edge across the growing season. Its most accurate detection algorithms in the summer included features in the yellow‐orange spectral region. A. altissima and R. davurica were both detectable in the mid‐ and late‐growing seasons, with little overlap in key spectral features across dates. Our results indicate that even a small subset of data from hyperspectral imagery can be used to accurately detect invasive plants in heterogeneous plant communities, and that incorporating species‐specific phenological traits into detection algorithms improves detection, laying methodological and theoretical groundwork for the future of invasive species management.
{"title":"Using phenology to improve invasive plant detection in fine‐scale hyperspectral drone‐based images","authors":"Kelsey S. Huelsman, Howard E. Epstein, Xi Yang, Roderick Walker","doi":"10.1002/rse2.70049","DOIUrl":"https://doi.org/10.1002/rse2.70049","url":null,"abstract":"Mapping and managing invasive plants are top priorities for land managers, but traditional approaches are time and labor‐intensive. To improve detection efforts, we explored the effectiveness of hyperspectral, drone‐based detection algorithms that incorporate phenology. We collected fine‐resolution (3 cm) hyperspectral images using a drone equipped with a Nano‐Hyperspec imager on seven dates from April to November, 2020 and then used a subsample of pixels from the images to develop multitemporal detection algorithms for three invasive plant species within heterogeneous vegetation communities. The three species are invasive in much of the U.S. and in Virginia, where the data were collected: <jats:italic>Ailanthus altissima</jats:italic> (tree of heaven), <jats:italic>Elaeagnus umbellata</jats:italic> (autumn olive), and <jats:italic>Rhamnus davurica</jats:italic> (Dahurian buckthorn). We determined when each species could be accurately detected, what spectral features allowed for detection, and the consistency of those features over a growing season. All three species could be detected in June. Only <jats:italic>E. umbellata</jats:italic> had consistently accurate algorithms and used consistent features in the visible and red edge across the growing season. Its most accurate detection algorithms in the summer included features in the yellow‐orange spectral region. <jats:italic>A. altissima</jats:italic> and <jats:italic>R. davurica</jats:italic> were both detectable in the mid‐ and late‐growing seasons, with little overlap in key spectral features across dates. Our results indicate that even a small subset of data from hyperspectral imagery can be used to accurately detect invasive plants in heterogeneous plant communities, and that incorporating species‐specific phenological traits into detection algorithms improves detection, laying methodological and theoretical groundwork for the future of invasive species management.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"17 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2025-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145704601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}