"Global monitoring of soil multifunctionality in drylands using satellite imagery and field data"
R. Hernández-Clemente, A. Hornero, V. González-Dugo, M. Berdugo, J. Quero, J. Jiménez, F. Maestre
Remote Sensing in Ecology and Conservation, published 2023-05-24. DOI: 10.1002/rse2.340 (https://doi.org/10.1002/rse2.340)
Abstract not available.
"Accurate delineation of individual tree crowns in tropical forests from aerial RGB imagery using Mask R‐CNN"
James G. C. Ball, Sebastian H. M. Hickman, Tobias D. Jackson, Xian Jing Koay, James Hirst, William Jay, Matthew Archer, Mélaine Aubry‐Kientz, Grégoire Vincent, David A. Coomes
Remote Sensing in Ecology and Conservation, published 2023-05-13. DOI: 10.1002/rse2.332 (https://doi.org/10.1002/rse2.332)
Abstract: Tropical forests are a major component of the global carbon cycle and home to two‐thirds of terrestrial species. Upper‐canopy trees store the majority of forest carbon and can be vulnerable to drought events and storms. Monitoring their growth and mortality is essential to understanding forest resilience to climate change, but large trees are underrepresented in traditional field surveys, so estimates of forest carbon storage are poorly constrained. Aerial photographs provide spectral and textural information to discriminate between tree crowns in diverse, complex tropical canopies, potentially opening the door to landscape-scale monitoring of large trees. Here we describe a new deep convolutional neural network method, Detectree2, which builds on the Mask R‐CNN computer vision framework to recognize the irregular edges of individual tree crowns in airborne RGB imagery. We trained and evaluated this model with 3797 manually delineated tree crowns at three sites in Malaysian Borneo and one site in French Guiana. As an example application, we combined the delineations with repeat lidar surveys (taken 3 to 6 years apart) of the four sites to estimate the growth and mortality of upper‐canopy trees. Detectree2 delineated 65 000 upper‐canopy trees across 14 km² of aerial imagery. The skill of the automatic method in delineating unseen test trees was good (F1 score = 0.64) and for the tallest category of trees was excellent (F1 score = 0.74). As predicted from previous field studies, we found that growth rate declined with tree height and that tall trees had higher mortality rates than intermediate‐size trees. Our approach demonstrates that deep learning methods can automatically segment trees in widely accessible RGB imagery. This tool, provided as an open-source Python package, has many potential applications in forest ecology and conservation, from estimating carbon stocks to monitoring forest phenology and restoration. The Python package is available at https://github.com/PatBall1/Detectree2.
{"title":"Accurate delineation of individual tree crowns in tropical forests from aerial <scp>RGB</scp> imagery using Mask <scp>R‐CNN</scp>","authors":"James G. C. Ball, Sebastian H. M. Hickman, Tobias D. Jackson, Xian Jing Koay, James Hirst, William Jay, Matthew Archer, Mélaine Aubry‐Kientz, Grégoire Vincent, David A. Coomes","doi":"10.1002/rse2.332","DOIUrl":"https://doi.org/10.1002/rse2.332","url":null,"abstract":"Abstract Tropical forests are a major component of the global carbon cycle and home to two‐thirds of terrestrial species. Upper‐canopy trees store the majority of forest carbon and can be vulnerable to drought events and storms. Monitoring their growth and mortality is essential to understanding forest resilience to climate change, but in the context of forest carbon storage, large trees are underrepresented in traditional field surveys, so estimates are poorly constrained. Aerial photographs provide spectral and textural information to discriminate between tree crowns in diverse, complex tropical canopies, potentially opening the door to landscape monitoring of large trees. Here we describe a new deep convolutional neural network method, Detectree2 , which builds on the Mask R‐CNN computer vision framework to recognize the irregular edges of individual tree crowns from airborne RGB imagery. We trained and evaluated this model with 3797 manually delineated tree crowns at three sites in Malaysian Borneo and one site in French Guiana. As an example application, we combined the delineations with repeat lidar surveys (taken between 3 and 6 years apart) of the four sites to estimate the growth and mortality of upper‐canopy trees. Detectree2 delineated 65 000 upper‐canopy trees across 14 km 2 of aerial images. The skill of the automatic method in delineating unseen test trees was good ( F 1 score = 0.64) and for the tallest category of trees was excellent ( F 1 score = 0.74). As predicted from previous field studies, we found that growth rate declined with tree height and tall trees had higher mortality rates than intermediate‐size trees. Our approach demonstrates that deep learning methods can automatically segment trees in widely accessible RGB imagery. This tool (provided as an open‐source Python package) has many potential applications in forest ecology and conservation, from estimating carbon stocks to monitoring forest phenology and restoration. Python package available to install at https://github.com/PatBall1/Detectree2 .","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"222 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135239414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Capturing long‐tailed individual tree diversity using an airborne imaging and a multi‐temporal hierarchical model"
Ben. G. Weinstein, S. Marconi, Sarah J. Graves, Alina Zare, Aditya Singh, Stephanie A. Bohlman, L. Magee, Daniel J. Johnson, P. Townsend, E. White
Remote Sensing in Ecology and Conservation, published 2023-05-10. DOI: 10.1002/rse2.335 (https://doi.org/10.1002/rse2.335)
Abstract: Measuring forest biodiversity using terrestrial surveys is expensive and can only capture common species abundance in large heterogeneous landscapes. In contrast, combining airborne imagery with computer vision can generate individual tree data at the scale of hundreds of thousands of trees. To train computer vision models, ground‐based species labels are combined with airborne reflectance data. Because rare species are difficult to find in a large landscape, many classification models include only the most abundant species, leading to biased predictions at broad scales: training on common species alone implicitly assumes those samples are representative of the entire landscape. Extending classification models to include rare species requires targeted data collection and algorithmic improvements to overcome the large data imbalance between dominant and rare taxa. We apply a targeted sampling workflow at the Ordway-Swisher Biological Station within the US National Ecological Observatory Network (NEON), where traditional forestry plots had identified six canopy tree species with more than 10 individuals at the site. Combining iterative model development with rare species sampling, we extend the training dataset to include 14 species. Using a multi‐temporal hierarchical model, we demonstrate the ability to include species predicted at <1% frequency in the landscape without losing performance on the dominant species. The final model has over 75% accuracy for 14 species, with improved rare species classification compared with the 61% accuracy of a baseline deep learning model. After filtering out dead trees, we generate landscape-scale species maps of individual crowns for over 670 000 trees. We find distinct patches of forest composed of rarer species at the full-site scale, highlighting the importance of capturing species diversity in training data. We estimate the relative abundance of the 14 species within the landscape and provide three measures of uncertainty to generate a range of counts for each species. For example, we estimate that the dominant species, Pinus palustris, accounts for c. 28% of predicted stems, with models predicting between 160 000 and 210 000 individuals. These maps provide the first estimates of canopy tree diversity within a NEON site to include rare species and a blueprint for capturing tree diversity using airborne computer vision at broad scales.
{"title":"Capturing long‐tailed individual tree diversity using an airborne imaging and a multi‐temporal hierarchical model","authors":"Ben. G. Weinstein, S. Marconi, Sarah J. Graves, Alina Zare, Aditya Singh, Stephanie A. Bohlman, L. Magee, Daniel J. Johnson, P. Townsend, E. White","doi":"10.1002/rse2.335","DOIUrl":"https://doi.org/10.1002/rse2.335","url":null,"abstract":"Measuring forest biodiversity using terrestrial surveys is expensive and can only capture common species abundance in large heterogeneous landscapes. In contrast, combining airborne imagery with computer vision can generate individual tree data at the scales of hundreds of thousands of trees. To train computer vision models, ground‐based species labels are combined with airborne reflectance data. Due to the difficulty of finding rare species in a large landscape, many classification models only include the most abundant species, leading to biased predictions at broad scales. For example, if only common species are used to train the model, this assumes that these samples are representative across the entire landscape. Extending classification models to include rare species requires targeted data collection and algorithmic improvements to overcome large data imbalances between dominant and rare taxa. We use a targeted sampling workflow to the Ordway Swisher Biological Station within the US National Ecological Observatory Network (NEON), where traditional forestry plots had identified six canopy tree species with more than 10 individuals at the site. Combining iterative model development with rare species sampling, we extend a training dataset to include 14 species. Using a multi‐temporal hierarchical model, we demonstrate the ability to include species predicted at <1% frequency in landscape without losing performance on the dominant species. The final model has over 75% accuracy for 14 species with improved rare species classification compared to 61% accuracy of a baseline deep learning model. After filtering out dead trees, we generate landscape species maps of individual crowns for over 670 000 individual trees. We find distinct patches of forest composed of rarer species at the full‐site scale, highlighting the importance of capturing species diversity in training data. We estimate the relative abundance of 14 species within the landscape and provide three measures of uncertainty to generate a range of counts for each species. For example, we estimate that the dominant species, Pinus palustris accounts for c. 28% of predicted stems, with models predicting a range of counts between 160 000 and 210 000 individuals. These maps provide the first estimates of canopy tree diversity within a NEON site to include rare species and provide a blueprint for capturing tree diversity using airborne computer vision at broad scales.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":" ","pages":""},"PeriodicalIF":5.5,"publicationDate":"2023-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42181383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"BatNet: a deep learning‐based tool for automated bat species identification from camera trap images"
G. Krivek, Alexander Gillert, Martin Harder, M. Fritze, Karina Frankowski, Luisa Timm, Liska Meyer‐Olbersleben, Uwe Freiherr von Lukas, G. Kerth, J. van Schaik
Remote Sensing in Ecology and Conservation, published 2023-05-09. DOI: 10.1002/rse2.339 (https://doi.org/10.1002/rse2.339)
Abstract not available.