Rodrigo V. Leite, Cibele Amaral, Christopher S. R. Neigh, Diogo N. Cosenza, Carine Klauberg, Andrew T. Hudak, Luiz Aragão, Douglas C. Morton, Shane Coffield, Tempest McCabe, Carlos A. Silva
Managing fuels is a key strategy for mitigating the negative impacts of wildfires on people and the environment. The use of satellite‐based Earth observation data has become an important tool for managers to optimize fuel treatment planning at regional scales. Fortunately, several new sensors have been launched in the last few years, providing novel opportunities to enhance fuel characterization. Herein, we summarize the potential improvements in fuel characterization at large scale (i.e., hundreds to thousands of km²) with high spatial and spectral resolution arising from the use of new spaceborne instruments with near‐global, freely‐available data. We identified sensors at spatial resolutions suitable for fuel treatment planning, featuring: lidar data for characterizing vegetation structure; hyperspectral sensors for retrieving chemical compounds and species composition; and dense time series derived from multispectral and synthetic aperture radar sensors for mapping phenology and moisture dynamics. We also highlight future hyperspectral and radar missions that will deliver valuable and complementary information for a new era of fuel load characterization from space. The data volume that is being generated may still challenge usability for a diverse group of stakeholders. Seamless cyberinfrastructure and community engagement are paramount to guarantee the use of these cutting‐edge datasets for fuel monitoring and wildland fire management across the world.
Leveraging the next generation of spaceborne Earth observations for fuel monitoring and wildland fire management. Remote Sensing in Ecology and Conservation, DOI: 10.1002/rse2.416, published 17 August 2024.
Jenny Bueno, Sarah E. Lester, Joshua L. Breithaupt, Sandra Brooke
The eastern oyster (Crassostrea virginica) is a coastal foundation species currently under threat from anthropogenic activities both globally and in the Apalachicola Bay region of north Florida. Oysters provide numerous ecosystem services, and it is important to establish efficient and reliable methods for their effective monitoring and management. Traditional monitoring techniques, such as quadrat density sampling, can be labor‐intensive, destructive of both oysters and reefs, and may be spatially limited. In this study, we demonstrate how unoccupied aerial systems (UAS) can be used to efficiently generate high‐resolution geospatial oyster reef condition data over large areas. These data, with appropriate ground truthing and minimal destructive sampling, can be used to effectively monitor the size and abundance of oyster clusters on intertidal reefs. Utilizing structure‐from‐motion photogrammetry techniques to create three‐dimensional topographic models, we reconstructed the distribution, spatial density and size of oyster clusters on intertidal reefs in Apalachicola Bay. Ground truthing revealed 97% accuracy for cluster presence detection by UAS products, and we confirmed that live oysters are predominately located within clusters, supporting the use of cluster features to estimate oyster population status. We found a significant positive relationship between cluster size and live oyster counts. These findings allowed us to extract clusters from geospatial products and predict live oyster abundance and spatial density on 138 reefs covering 138 382 m² over two locations. Oyster densities varied between sites, with higher live oyster densities occurring at one site within the Apalachicola Bay bounds, and lower oyster densities in areas adjacent to Apalachicola Bay. Repeated monitoring at one site in 2022 and 2023 revealed a relatively stable oyster density over time.
This study demonstrated the successful application of high‐resolution drone imagery combined with cluster sampling, providing a repeatable method for mapping and monitoring to inform conservation, restoration and management strategies for intertidal oyster populations.
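The workflow above calibrates a relationship between UAS-measured cluster size and hand-counted live oysters, then scales it to whole reefs. A minimal sketch of that calibrate-then-predict step, with wholly hypothetical calibration numbers (the published fit is not reproduced here):

```python
import numpy as np

# Hypothetical calibration data: cluster planform area (m^2) measured in the
# UAS orthomosaic vs. live oysters counted by hand on a subset of clusters.
cluster_area = np.array([0.05, 0.10, 0.18, 0.25, 0.40, 0.55])
live_oysters = np.array([4, 9, 15, 22, 35, 48])

# Fit the positive linear relationship reported in the study
# (slope and intercept here are illustrative, not the published fit).
slope, intercept = np.polyfit(cluster_area, live_oysters, 1)

def predict_abundance(cluster_areas):
    """Predict total live oysters on a reef from its extracted cluster areas."""
    counts = slope * np.asarray(cluster_areas, dtype=float) + intercept
    return float(np.clip(counts, 0, None).sum())

# Apply to clusters segmented from one reef's geospatial products.
reef_clusters = [0.08, 0.12, 0.30, 0.21]
total = predict_abundance(reef_clusters)
```

Summing per-cluster predictions over every reef in the orthomosaic is what turns the minimal destructive sampling into reef-scale abundance estimates.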
The application of unoccupied aerial systems (UAS) for monitoring intertidal oyster density and abundance. Remote Sensing in Ecology and Conservation, DOI: 10.1002/rse2.417, published 13 August 2024.
Chiara Aquino, Edward T. A. Mitchard, Iain M. McNicol, Harry Carstairs, Andrew Burt, Beisit L. P. Vilca, Sylvia Mayta, Mathias Disney
Selective logging is known to be widespread in the tropics, but is currently very poorly mapped, in part because there is little quantitative data on which satellite sensor characteristics and analysis methods are best at detecting it. To improve this, we used data from the Tropical Forest Degradation Experiment (FODEX) plots in the southern Peruvian Amazon, where different numbers of trees had been removed from four plots of 1 ha each, carefully inventoried by hand and terrestrial laser scanning before and after the logging to give a range of biomass loss (∆AGB) values. We conducted a comparative study of six multispectral optical satellite sensors at 0.3–30 m spatial resolution, to find the best combination of sensor and remote sensing indicator for change detection. Spectral reflectance, the normalised difference vegetation index (NDVI) and texture parameters were extracted after radiometric calibration and image preprocessing. The strength of the relationships between the change in these values and field‐measured ∆AGB (computed in % ha−1) was analysed. The results demonstrate that: (a) texture measures correlate more strongly with ∆AGB than simple spectral parameters; (b) the strongest correlations are achieved by sensors with spatial resolutions in the intermediate range (1.5–10 m), with finer or coarser resolutions producing worse results; and (c) correlations peak when texture is computed using a moving square window between 9 and 14 m in length. Maps predicting ∆AGB showed very promising results using a NIR‐derived texture parameter for 3 m resolution PlanetScope (R2 = 0.97 and root mean square error (RMSE) = 1.91% ha−1), followed by 1.5 m SPOT‐7 (R2 = 0.76 and RMSE = 5.06% ha−1) and 10 m Sentinel‐2 (R2 = 0.79 and RMSE = 4.77% ha−1). Our findings imply that, at least for lowland Peru, low‐medium intensity disturbance can be detected best in optical wavelengths using a texture measure derived from 3 m PlanetScope data.
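The core texture step — a statistic computed in a moving square window over a single band, then differenced before and after logging — can be sketched as follows. Local variance stands in for the paper's texture measures, and the reflectance values are simulated, not FODEX data; at 3 m resolution, a 3–5 pixel window approximates the 9–14 m windows reported above.

```python
import numpy as np
from scipy.ndimage import generic_filter

def texture_variance(band, window_px):
    """Local variance of a reflectance band in a moving square window.

    A simple stand-in for the grey-level texture measures in the paper.
    """
    return generic_filter(band.astype(float), np.var, size=window_px)

# Hypothetical NIR reflectance patch (e.g. PlanetScope-like, 3 m pixels).
rng = np.random.default_rng(0)
nir = rng.uniform(0.2, 0.5, size=(20, 20))
tex_before = texture_variance(nir, 3)

# Simulate a canopy gap from selective logging: a block of low reflectance.
nir_after = nir.copy()
nir_after[8:12, 8:12] = 0.05
tex_after = texture_variance(nir_after, 3)

# Texture change concentrates around the gap edge, flagging the disturbance.
delta = tex_after - tex_before
```

Regressing such per-pixel texture changes against field-measured ∆AGB is what yields the R² values quoted in the abstract.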
Detecting selective logging in tropical forests with optical satellite data: an experiment in Peru shows texture at 3 m gives the best results. Remote Sensing in Ecology and Conservation, DOI: 10.1002/rse2.414, published 31 July 2024.
Cate Ryan, Hannah L. Buckley, Craig D. Bishop, Graham Hinchliffe, Bradley C. Case
Coastal active dunes provide vital biodiversity, habitat, and ecosystem services, yet they are one of the most endangered and understudied ecosystems worldwide. Therefore, monitoring the status of these systems is essential, but field vegetation surveys are time‐consuming and expensive. Remotely sensed aerial imagery offers spatially continuous, low‐cost, high‐resolution coverage, allowing for vegetation mapping across larger areas than traditional field surveys. Taking Aotearoa New Zealand as a case study, we used a nationally representative sample of coastal active dunes to classify vegetation from red‐green‐blue (RGB) high‐resolution (0.075–0.75 m) aerial imagery with object‐based image analysis. The mean overall accuracy was 0.76 across 21 beaches for aggregated classes, and key cover classes, such as sand, sandbinders, and woody vegetation, were discerned. However, differentiation among woody vegetation species on semi‐stable and stable dunes posed a challenge. We developed a national cover typology from the classification, comprising seven vegetation types. Classification tree models showed that where human activity was higher, it was more important than geomorphic factors in influencing the relative percent cover of the different active dune cover classes. Our methods provide a quantitative approach to characterizing the cover classes on active dunes at a national scale, which are relevant for conservation management, including habitat mapping, determining species occupancy, indigenous dominance, and the representativeness of remaining active dunes.
Quantifying vegetation cover on coastal active dunes using nationwide aerial image analysis. Remote Sensing in Ecology and Conservation, DOI: 10.1002/rse2.410, published 16 July 2024.
Mimi Arandjelovic, Colleen R. Stephens, Paula Dieguez, Nuria Maldonado, Gaëlle Bocksberger, Marie‐Lyne Després‐Einspenner, Benjamin Debetencourt, Vittoria Estienne, Ammie K. Kalan, Maureen S. McCarthy, Anne‐Céline Granjon, Veronika Städele, Briana Harder, Lucia Hacker, Anja Landsmann, Laura K. Lynn, Heidi Pfund, Zuzana Ročkaiová, Kristeena Sigler, Jane Widness, Heike Wilken, Antonio Buzharevski, Adeelia S. Goffe, Kristin Havercamp, Lydia L. Luncz, Giulia Sirianni, Erin G. Wessling, Roman M. Wittig, Christophe Boesch, Hjalmar S. Kühl
As camera trapping grows in popularity and application, some analytical limitations persist, including processing time and accuracy of data annotation. Camera traps typically record still images, although videos are increasingly being collected even though they require much more time to annotate. To overcome limitations with image annotation, camera trap studies are increasingly linked to community science (CS) platforms. Here, we extend previous work on CS image annotations to camera trap videos from a challenging environment: a dense tropical forest with low visibility and high occlusion due to thick canopy cover and bushy undergrowth at the camera level. Using the CS platform Chimp&See, established for classification of 599 956 video clips from tropical Africa, we assess annotation precision and accuracy by comparing classification of 13 531 1‐min video clips by a professional ecologist (PE) with output from 1744 registered, as well as unregistered, Chimp&See community scientists. We considered 29 classification categories, including 17 species and 12 higher‐level categories, in which phenotypically similar species were grouped. Overall, annotation precision was 95.4%, which increased to 98.2% when aggregating similar species groups together. Our findings demonstrate the competence of community scientists working with camera trap videos from even challenging environments and hold great promise for future studies on animal behaviour, species interaction dynamics and population monitoring.
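The precision comparison — community labels scored against the professional ecologist at species level, then again after pooling phenotypically similar species into higher-level categories — can be sketched with toy labels (the categories and counts here are illustrative, not the study's data):

```python
from collections import defaultdict

# Hypothetical (clip_id -> label) annotations: professional ecologist (PE)
# reference vs. community-science consensus labels.
expert    = {1: "chimpanzee", 2: "red duiker", 3: "blue duiker", 4: "leopard"}
community = {1: "chimpanzee", 2: "blue duiker", 3: "blue duiker", 4: "leopard"}

# Higher-level groups pooling phenotypically similar species.
group = defaultdict(lambda: None, {"red duiker": "duiker", "blue duiker": "duiker"})

def precision(ref, ann, aggregate=False):
    """Fraction of clips where the annotation matches the reference,
    optionally after mapping species into their higher-level group."""
    key = (lambda label: group[label] or label) if aggregate else (lambda label: label)
    hits = sum(key(ann[clip]) == key(ref[clip]) for clip in ref)
    return hits / len(ref)

species_precision = precision(expert, community)        # strict species match
grouped_precision = precision(expert, community, True)  # similar species pooled
```

The red/blue duiker confusion in clip 2 is forgiven once both map to "duiker", mirroring how aggregation lifted precision from 95.4% to 98.2% in the study.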
Highly precise community science annotations of video camera‐trapped fauna in challenging environments. Remote Sensing in Ecology and Conservation, DOI: 10.1002/rse2.402, published 25 June 2024.
Daire Carroll, Eduardo Infantes, Eva V. Pagan, Karin C. Harding
Body mass is a fundamental indicator of animal health closely linked to survival and reproductive success. Systematic assessment of body mass for a large proportion of a population can allow early detection of changes likely to impact population growth, facilitating responsive management and a mechanistic understanding of ecological trends. One challenge with integrating body mass assessment into monitoring is sampling enough animals to detect trends and account for individual variation. Harbour seals (Phoca vitulina) are philopatric marine mammals responsive to regional environmental changes, resulting in their use as an indicator species. We present a novel method for the non‐invasive and semi‐automatic assessment of harbour seal body condition, using unoccupied aerial vehicles (UAVs/drones). Morphological parameters are automatically measured in georeferenced images and used to estimate volume, which is then translated to estimated mass. Remote observations of known individuals are utilized to calibrate the method. We achieve a high level of accuracy (mean absolute error of 4.5 kg or 10.5% for all seals and 3.2 kg or 12.7% for pups‐of‐the‐year). We systematically apply the method to wild seals during the Spring pupping season and Autumn over 2 years, achieving a near‐population‐level assessment for pups on land (82.5% measured). With reference to previous mark‐recapture work linking Autumn pup weights to survival, we estimate mean expected probability of over‐winter survival (mean = 0.89, standard deviation = 0.08). This work marks a significant step forward for the non‐invasive assessment of body condition in pinnipeds and could provide daily estimates of body mass for thousands of individuals. It can act as an early warning for deteriorating environmental conditions and be utilized as an integrative tool for wildlife monitoring. 
It also enables estimation of yearly variation in demographic rates which can be utilized in parameterizing models of population growth with relevance for conservation and evolutionary biology.
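A simple version of the measure-volume-then-calibrate chain described above can be sketched with a prolate-spheroid body model; both the shape model and the volume-to-mass coefficient are illustrative assumptions, not the paper's calibrated method, which was tuned with remote observations of known individuals.

```python
import math

def seal_volume(length_m, max_width_m):
    """Approximate body volume from drone-measured planform dimensions.

    Treats the seal as a prolate spheroid: body length is the long axis and
    the maximum width is assumed for both short axes. Illustrative only.
    """
    a = length_m / 2.0
    b = max_width_m / 2.0
    return (4.0 / 3.0) * math.pi * a * b * b

# Hypothetical volume-to-mass coefficient, as if calibrated against seals
# of known mass (close to the density of seawater; illustrative only).
K_KG_PER_M3 = 1020.0

def estimate_mass(length_m, max_width_m):
    """Translate estimated volume into estimated body mass."""
    return K_KG_PER_M3 * seal_volume(length_m, max_width_m)

mass = estimate_mass(0.95, 0.38)  # measurements from one georeferenced image
```

In practice the calibration step absorbs the geometric error of any simple shape model, which is why remote observations of known-mass individuals are essential.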
Approaching a population‐level assessment of body size in pinnipeds using drones, an early warning of environmental degradation. Remote Sensing in Ecology and Conservation, DOI: 10.1002/rse2.413, published 25 June 2024.
Silvia Giuntini, Juha Saari, Adriano Martinoli, Damiano G. Preatoni, Birgen Haest, Baptiste Schmid, Nadja Weisshaupt
Studying nocturnal bird migration is challenging because direct visual observations are difficult during darkness. Radar has been the means of choice to study nocturnal bird migration for several decades, but provides limited taxonomic information. Here, to ascertain the feasibility of enhancing the taxonomic resolution of radar data, we combined acoustic data with vertical‐looking radar measurements to quantify thrush (Family: Turdidae) migration. Acoustic recordings, collected in Helsinki between August and October of 2021–2022, were used to identify likely nights of high and low thrush migration. Then, we built a random forest classifier that used recorded radar signals from those nights to separate all migrating passerines across the autumn migration season into thrushes and non‐thrushes. The classifier had a high overall accuracy (≈0.82), with wingbeat frequency and bird size being key for separation. The overall estimated thrush autumn migration phenology was in line with known migratory patterns and strongly correlated (Pearson correlation coefficient ≈0.65) with the phenology of the acoustic data. These results confirm how the joint application of acoustic and vertical‐looking radar data can, under certain migratory conditions and locations, be used to quantify ‘family‐level’ bird migration.
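The classification step — a random forest separating thrushes from other passerines using wingbeat frequency and a size proxy from the radar signal — can be sketched with simulated features; the feature distributions below are invented for illustration, not measured values, and scikit-learn stands in for whatever implementation the authors used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Hypothetical radar features for single birds on acoustically confirmed
# high-thrush vs. low-thrush nights: wingbeat frequency (Hz) and a size
# proxy (e.g. radar cross-section). Values are illustrative only.
n = 200
thrush     = np.column_stack([rng.normal(12, 1.5, n), rng.normal(25, 4, n)])
non_thrush = np.column_stack([rng.normal(18, 2.5, n), rng.normal(12, 4, n)])
X = np.vstack([thrush, non_thrush])
y = np.array([1] * n + [0] * n)  # 1 = thrush, 0 = non-thrush

# Shuffle, then hold out a test split to mimic the accuracy assessment.
idx = rng.permutation(2 * n)
X, y = X[idx], y[idx]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:300], y[:300])
accuracy = clf.score(X[300:], y[300:])
```

Feature importances from such a model are what identify wingbeat frequency and bird size as the key separators, as the abstract reports.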
Quantifying nocturnal thrush migration using sensor data fusion between acoustics and vertical‐looking radar. Remote Sensing in Ecology and Conservation, DOI: 10.1002/rse2.397, published 20 June 2024.
Amy Stone, Sharyn Hickey, Ben Radford, Mary Wakeford
Although emergent coral reefs represent a significant proportion of overall reef habitat, they are often excluded from monitoring projects because their shallow, exposed setting makes them challenging to access. Using drones to survey emergent reefs overcomes issues around access to this habitat type; however, methods for deriving robust monitoring metrics, such as coral cover, are not well developed for drone imagery. To address this knowledge gap, we compare the effectiveness of two remote sensing methods, a pixel‐based (PB) model and an object‐based (OB) model, in quantifying broad substrate groups, such as coral cover, on a lagoon bommie. For the OB model, two segmentation methods were considered: an optimized mean shift segmentation and the fully automated Segment Anything Model (SAM). Mean shift segmentation was assessed as the preferred method and applied in the final OB model (SAM exhibited poor identification of coral patches on the bommie). While good cross‐validation accuracies were achieved for both models, the PB model had generally higher overall accuracy (mean accuracy PB = 75%, OB = 70%) and kappa (mean kappa PB = 0.69, OB = 0.63), making it the preferred method for monitoring coral cover. Both models were limited by the low contrast between coral features and the bommie substrate in the drone imagery, causing indistinct segment boundaries in the OB model that increased misclassification. For both models, the inclusion of a drone‐derived digital surface model and multiscale derivatives was critical to predicting coral habitat. Our success in creating emergent reef habitat models with high accuracy demonstrates the niche role drones could play in monitoring these habitat types, which are particularly vulnerable to rising sea surface and air temperatures, as well as sea level rise, which is predicted to outpace reef vertical accretion rates.
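The overall accuracy and kappa values compared in the coral-mapping abstract (PB = 75% / 0.69 vs. OB = 70% / 0.63) both derive from a confusion matrix. A minimal sketch of the two metrics follows; the two-class matrix here is invented for illustration, not taken from the study:

```python
def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    n = sum(sum(row) for row in cm)
    # Observed agreement: fraction of samples on the diagonal
    po = sum(cm[i][i] for i in range(len(cm))) / n
    # Chance agreement: product of row and column marginals, summed per class
    pe = sum(
        sum(cm[i]) * sum(row[i] for row in cm) for i in range(len(cm))
    ) / n ** 2
    return po, (po - pe) / (1 - pe)

# Hypothetical 2-class matrix (coral vs. non-coral), 100 validation samples
cm = [[45, 5],
      [10, 40]]
acc, kappa = accuracy_and_kappa(cm)  # acc = 0.85, kappa = 0.70
```

Kappa discounts agreement expected by chance, which is why it is reported alongside overall accuracy when class prevalences are unbalanced.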
{"title":"Mapping emergent coral reefs: a comparison of pixel‐ and object‐based methods","authors":"Amy Stone, Sharyn Hickey, Ben Radford, Mary Wakeford","doi":"10.1002/rse2.401","DOIUrl":"https://doi.org/10.1002/rse2.401","url":null,"abstract":"Although emergent coral reefs represent a significant proportion of overall reef habitat, they are often excluded from monitoring projects due to their shallow and exposed setting that makes them challenging to access. Using drones to survey emergent reefs overcomes issues around access to this habitat type; however, methods for deriving robust monitoring metrics, such as coral cover, are not well developed for drone imagery. To address this knowledge gap, we compare the effectiveness of two remote sensing methods in quantifying broad substrate groups, such as coral cover, on a lagoon bommie, namely a pixel‐based (PB) model versus an object‐based (OB) model. For the OB model, two segmentation methods were considered: an optimized mean shift segmentation and the fully automated Segment Anything Model (SAM). Mean shift segmentation was assessed as the preferred method and applied in the final OB model (SAM exhibited poor identification of coral patches on the bommie). While good cross‐validation accuracies were achieved for both models, the PB model had generally higher overall accuracy (mean accuracy PB = 75%, OB = 70%) and kappa (mean kappa PB = 0.69, OB = 0.63), making it the preferred method for monitoring coral cover. Both models were limited by the low contrast between coral features and the bommie substrate in the drone imagery, causing indistinct segment boundaries in the OB model that increased misclassification. For both models, the inclusion of a drone‐derived digital surface model and multiscale derivatives was critical to predicting coral habitat. 
Our success in creating emergent reef habitat models with high accuracy demonstrates the niche role drones could play in monitoring these habitat types, which are particularly vulnerable to rising sea surface and air temperatures, as well as sea level rise which is predicted to outpace reef vertical accretion rates.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"31 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141177384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cheryl L. Doughty, Kyle C. Cavanaugh, Samantha Chapman, Lola Fatoyinbo
Mangroves are important ecosystems for coastal biodiversity, resilience and carbon dynamics that are being threatened globally by human pressures and the impacts of climate change. Yet, at several geographic range limits in tropical–temperate transition zones, mangrove ecosystems are expanding poleward in response to changing macroclimatic drivers. Mangroves near range limits often grow to smaller statures and form dynamic, patchy distributions with other coastal habitats, which are difficult to map using moderate‐resolution (30‐m) satellite imagery. As a result, many of these mangrove areas are missing from global distribution maps. To better map small, scrub mangroves, we tested Landsat (30‐m) and Sentinel (10‐m) against very high resolution (VHR) Planet (3‐m) and WorldView (1.8‐m) imagery and assessed the accuracy of machine learning classification approaches in discerning current (2022) mangrove and saltmarsh from other coastal habitats in a rapidly changing ecotone along the east coast of Florida, USA. Our aim is (1) to quantify the mappable differences in landscape composition and complexity, class dominance and spatial properties of mangrove and saltmarsh patches due to image resolution and (2) to resolve mapping uncertainties in the region. We found that the ability of Landsat to map mangrove distributions at the leading range edge was hampered by the size and extent of mangrove stands being too small for detection (50% accuracy). WorldView was the most successful in discerning mangroves from other wetland habitats (84% accuracy), closely followed by Planet (82%) and Sentinel (81%). With WorldView, we detected 800 ha of mangroves within the Florida range‐limit study area, 35% more mangroves than were detected with Planet, 114% more than Sentinel and 537% more than Landsat. 
Higher‐resolution imagery helped reveal additional variability in landscape metrics quantifying diversity, spatial configuration and connectedness among mangrove and saltmarsh habitats at the landscape, class and patch scales. Overall, VHR satellite imagery improved our ability to map mangroves at range limits and can help supplement moderate‐resolution global distributions and outdated regional maps.
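The percentage gains reported for the mangrove study (35%, 114% and 537% more mangrove area than Planet, Sentinel and Landsat) imply per-sensor baseline areas that can be back-calculated from the 800 ha WorldView total. A quick arithmetic check; the implied hectare values are illustrative back-calculations, not figures from the paper:

```python
def pct_more(a, b):
    """Percentage by which quantity a exceeds quantity b."""
    return (a - b) / b * 100.0

# Back-calculated areas (ha) implied by the stated percentage differences
worldview = 800.0
planet = worldview / 1.35    # ~593 ha (35% less than WorldView's total)
sentinel = worldview / 2.14  # ~374 ha (114% less)
landsat = worldview / 6.37   # ~126 ha (537% less)
```

Note the asymmetry of percent differences: WorldView detecting 537% more than Landsat means Landsat recovered only about one-sixth of the WorldView-mapped extent.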
{"title":"Uncovering mangrove range limits using very high resolution satellite imagery to detect fine‐scale mangrove and saltmarsh habitats in dynamic coastal ecotones","authors":"Cheryl L. Doughty, Kyle C. Cavanaugh, Samantha Chapman, Lola Fatoyinbo","doi":"10.1002/rse2.394","DOIUrl":"https://doi.org/10.1002/rse2.394","url":null,"abstract":"Mangroves are important ecosystems for coastal biodiversity, resilience and carbon dynamics that are being threatened globally by human pressures and the impacts of climate change. Yet, at several geographic range limits in tropical–temperate transition zones, mangrove ecosystems are expanding poleward in response to changing macroclimatic drivers. Mangroves near range limits often grow to smaller statures and form dynamic, patchy distributions with other coastal habitats, which are difficult to map using moderate‐resolution (30‐m) satellite imagery. As a result, many of these mangrove areas are missing in global distribution maps. To better map small, scrub mangroves, we tested Landsat (30‐m) and Sentinel (10‐m) against very high resolution (VHR) Planet (3‐m) and WorldView (1.8‐m) imagery and assessed the accuracy of machine learning classification approaches in discerning current (2022) mangrove and saltmarsh from other coastal habitats in a rapidly changing ecotone along the east coast of Florida, USA. Our aim is to (1) quantify the mappable differences in landscape composition and complexity, class dominance and spatial properties of mangrove and saltmarsh patches due to image resolution; and (2) to resolve mapping uncertainties in the region. We found that the ability of Landsat to map mangrove distributions at the leading range edge was hampered by the size and extent of mangrove stands being too small for detection (50% accuracy). WorldView was the most successful in discerning mangroves from other wetland habitats (84% accuracy), closely followed by Planet (82%) and Sentinel (81%). 
With WorldView, we detected 800 ha of mangroves within the Florida range‐limit study area, 35% more mangroves than were detected with Planet, 114% more than Sentinel and 537% more than Landsat. Higher‐resolution imagery helped reveal additional variability in landscape metrics quantifying diversity, spatial configuration and connectedness among mangrove and saltmarsh habitats at the landscape, class and patch scales. Overall, VHR satellite imagery improved our ability to map mangroves at range limits and can help supplement moderate‐resolution global distributions and outdated regional maps.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"26 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141096665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hannah C. Cubaynes, Jaume Forcada, Kit M. Kovacs, Christian Lydersen, Rod Downie, Peter T. Fretwell
Regular counts of walruses (Odobenus rosmarus) across their pan‐Arctic range are necessary to determine accurate population trends and, in turn, understand how current rapid changes in their habitat, such as sea ice loss, are impacting them. However, surveying a region as vast and remote as the Arctic with vessels or aircraft is a formidable logistical challenge, limiting the frequency and spatial coverage of field surveys. An alternative methodology involving very high‐resolution (VHR) satellite imagery has proven to be a useful tool to detect walruses, but the feasibility of accurately counting individuals has not been addressed. Here, we compare walrus counts obtained from a VHR WorldView‐3 satellite image with a simultaneous ground count obtained using a remotely piloted aircraft system (RPAS). We estimated the accuracy of the walrus counts depending on (1) the spatial resolution of the VHR satellite imagery, providing the same WorldView‐3 image to assessors at three different spatial resolutions (i.e., 50, 30 and 15 cm per pixel), and (2) the level of expertise of the assessors (experts vs. a mixed level of experience, representative of citizen scientists). This latter aspect of the study is important to the efficiency and outcomes of the global assessment programme because there are citizen science campaigns inviting the public to count walruses in VHR satellite imagery. There were 73 walruses in our RPAS ‘control’ image. Our results show that walruses were under‐counted in VHR satellite imagery at all spatial resolutions and across all levels of assessor expertise. Counts from the VHR satellite imagery with 30 cm spatial resolution were the most accurate and least variable across levels of expertise. This was a successful first attempt at validating VHR counts with near‐simultaneous in situ data, but further assessments are required for walrus aggregations with different densities and configurations, on different substrates.
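Under-counting relative to the RPAS control of 73 walruses can be summarized as a per-resolution detection rate. A minimal sketch follows; the assessor counts per resolution are hypothetical, chosen only to mirror the reported pattern that 30 cm imagery was most accurate:

```python
def detection_rate(satellite_count, control_count=73):
    """Fraction of the RPAS control count recovered in a satellite count."""
    return satellite_count / control_count

# Hypothetical mean assessor counts at the three tested resolutions (cm/pixel)
counts = {50: 48, 30: 66, 15: 58}
rates = {res: detection_rate(c) for res, c in counts.items()}
```

A rate below 1.0 at every resolution would reproduce the study's headline finding that satellite counts systematically under-estimate the true number present.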
{"title":"Walruses from space: walrus counts in simultaneous remotely piloted aircraft system versus very high‐resolution satellite imagery","authors":"Hannah C. Cubaynes, Jaume Forcada, Kit M. Kovacs, Christian Lydersen, Rod Downie, Peter T. Fretwell","doi":"10.1002/rse2.391","DOIUrl":"https://doi.org/10.1002/rse2.391","url":null,"abstract":"Regular counts of walruses (<jats:italic>Odobenus rosmarus</jats:italic>) across their pan‐Arctic range are necessary to determine accurate population trends and in turn understand how current rapid changes in their habitat, such as sea ice loss, are impacting them. However, surveying a region as vast and remote as the Arctic with vessels or aircraft is a formidable logistical challenge, limiting the frequency and spatial coverage of field surveys. An alternative methodology involving very high‐resolution (VHR) satellite imagery has proven to be a useful tool to detect walruses, but the feasibility of accurately counting individuals has not been addressed. Here, we compare walrus counts obtained from a VHR WorldView‐3 satellite image, with a simultaneous ground count obtained using a remotely piloted aircraft system (RPAS). We estimated the accuracy of the walrus counts depending on (1) the spatial resolution of the VHR satellite imagery, providing the same WorldView‐3 image to assessors at three different spatial resolutions (i.e., 50, 30 and 15 cm per pixel) and (2) the level of expertise of the assessors (experts vs. a mixed level of experience – representative of citizen scientists). This latter aspect of the study is important to the efficiency and outcomes of the global assessment programme because there are citizen science campaigns inviting the public to count walruses in VHR satellite imagery. There were 73 walruses in our RPAS ‘control’ image. Our results show that walruses were under‐counted in VHR satellite imagery at all spatial resolutions and across all levels of assessor expertise. 
Counts from the VHR satellite imagery with 30 cm spatial resolution were the most accurate and least variable across levels of expertise. This was a successful first attempt at validating VHR counts with near‐simultaneous, in situ, data but further assessments are required for walrus aggregations with different densities and configurations, on different substrates.","PeriodicalId":21132,"journal":{"name":"Remote Sensing in Ecology and Conservation","volume":"26 1","pages":""},"PeriodicalIF":5.5,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141085525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}