Geotechnologies have significant potential for application in socio-environmental analysis coupled with disaster risk reduction. Equipment and applications supported by scientific computing are available, promoting advances in the acquisition and processing of remote sensing data. Among these are two types: (i) equipment associated with LiDAR (light detection and ranging) technology and (ii) remotely piloted aircraft systems (RPAS) carrying remote sensing platforms. Recently, a growing number of studies has explored the potential of sensors carried by RPAS for environmental applications, especially those that evaluate the impacts of natural disasters. In this context, the aim of this research is to demonstrate the possibilities of RPAS applications in the collection of data of interest for the management of natural disasters. Also associated with this task is the prospect of implementing some of the scientific computing techniques that such applications require. With these activities, we seek to contribute to the advancement of the use of RPAS in managing and preventing the risk of natural disasters.
{"title":"Application of RPAS to disaster risk reduction in Brazil: application in the analysis of urban floods","authors":"Elaiz Aparecida Mensch Buffon, F. Mendonça","doi":"10.1139/JUVS-2020-0033","DOIUrl":"https://doi.org/10.1139/JUVS-2020-0033","url":null,"abstract":"Geotechnologies have significant potential for application in socio-environmental analysis coupled to disaster risk reduction. Equipment and applications are available that are supported by scientific computing, promoting advances in the acquisition and processing of remote sensing data. Among these are two types: (i) the associated equipment to technology LiDAR (light detection and ranging) and (ii) remotely piloted aircraft systems (RPAS) with platforms of remote sensors. Recently, an growing number of studies has been observed that have the potential for applications in the sensors equipped in RPAS for environmental studies, especially those that evaluate the impacts of natural disasters. In this context, the aim of this research is to demonstrate the possibilities of RPAS applications in the collection of data of interest in the management of natural disasters. Also associated with this task is the prospect of implementing some techniques of scientific computing necessary for the implementation of applications. With these activities, we seek to contribute to the advancement of the employment of RPAS in managing and preventing the risk of natural disasters.","PeriodicalId":45619,"journal":{"name":"Journal of Unmanned Vehicle Systems","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2021-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43219447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
K. Ellis, Iryna Borshchova, S. Jennings, Caidence Paleske
This paper compares two approaches developed by the National Research Council of Canada to conduct “near-miss” intercepts in flight test, and describes a new method for assessing the efficacy of these trajectories. Each approach used a different combination of flight test techniques and displays to provide guidance to the pilots to set up the aircraft on a collision trajectory and to maintain the desired path. Approach 1 provided only visual guidance of the relative azimuth and position of the aircraft, whereas Approach 2 established the conflict point (latitude/longitude) from the desired geometry and provided cross-track error from the desired intercept as well as speed cueing for the arrival time. The performance of the approaches was analyzed by comparing the proportion of time during which the predicted closest approach distance was below a desired threshold value. The analysis showed that Approach 2 resulted in more than double the amount of time spent at or below the desired closest approach distance across all azimuths flown. Moreover, since less time was required to establish the required initial conditions and to stabilize the flight paths, the authors were able to conduct 50% more intercepts.
{"title":"A comparison of two novel approaches for conducting detect and avoid flight test","authors":"K. Ellis, Iryna Borshchova, S. Jennings, Caidence Paleske","doi":"10.1139/juvs-2021-0005","DOIUrl":"https://doi.org/10.1139/juvs-2021-0005","url":null,"abstract":"This paper compares two approaches developed by the National Research Council of Canada to conduct “near-miss” intercepts in flight test, and describes a new method for assessing the efficacy of these trajectories. Each approach used a different combination of flight test techniques and displays to provide guidance to the pilots to set-up the aircraft on a collision trajectory and to maintain the desired path. Approach 1 only provided visual guidance of the relative azimuth and position of the aircraft, whereas Approach 2 established the conflict point (latitude/longitude) from the desired geometry, and provided cross track error from the desired intercept as well as speed cueing for the arrival time. The performance of the approaches was analyzed by comparing the proportion of time where the predicted closest approach distance was below a desired threshold value. The analysis showed that Approach 2 resulted in more than double the amount of time spent at or below desired closest approach distance across all azimuths flown. Moreover, since less time was required to establish the required initial conditions, and to stabilize the flight paths, the authors were able to conduct 50% more intercepts.","PeriodicalId":45619,"journal":{"name":"Journal of Unmanned Vehicle Systems","volume":"1 1","pages":""},"PeriodicalIF":2.3,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42605111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
C. Chalmers, P. Fergus, C. C. Montañez, S. Longmore, S. Wich
Determining animal distribution and density is important in conservation. The process is both time-consuming and labour-intensive. Drones have been used to help mitigate human-intensive tasks by covering large geographical areas over a much shorter timescale. In this paper we investigate this idea further using a proof of concept to detect rhinos and cars from drone footage. The proof of concept utilises off-the-shelf technology and consumer-grade drone hardware. The study demonstrates the feasibility of using machine learning (ML) to automate routine conservation tasks, such as animal detection and tracking. The prototype was developed using a DJI Mavic Pro 2 and tested over a global system for mobile communications (GSM) network. The Faster-RCNN ResNet 101 architecture is used for transfer learning. Inference is performed with a frame sampling technique to address the required trade-off between precision, processing speed, and live video feed synchronisation. Inference models are hosted on a web platform, and video streams from the drone (using OcuSync) are transmitted to a real-time messaging protocol (RTMP) server for subsequent classification. During training, the best model achieved a mean average precision (mAP) of 0.83 at an intersection over union (IoU) threshold of 0.50 and 0.69 at IoU 0.75. On testing the system in Knowsley Safari, our prototype achieved the following: sensitivity (Sen), 0.91 (0.869, 0.94); specificity (Spec), 0.78 (0.74, 0.82); and accuracy (ACC), 0.84 (0.81, 0.87) when detecting rhinos; and Sen, 1.00 (1.00, 1.00); Spec, 1.00 (1.00, 1.00); and ACC, 1.00 (1.00, 1.00) when detecting cars.
{"title":"Video analysis for the detection of animals using convolutional neural networks and consumer-grade drones","authors":"C. Chalmers, P. Fergus, C. C. Montañez, S. Longmore, S. Wich","doi":"10.1139/JUVS-2020-0018","DOIUrl":"https://doi.org/10.1139/JUVS-2020-0018","url":null,"abstract":"Determining animal distribution and density is important in conservation. The process is both time-consuming and labour-intensive. Drones have been used to help mitigate human-intensive tasks by covering large geographical areas over a much shorter timescale. In this paper we investigate this idea further using a proof of concept to detect rhinos and cars from drone footage. The proof of concept utilises off-the-shelf technology and consumer-grade drone hardware. The study demonstrates the feasibility of using machine learning (ML) to automate routine conservation tasks, such as animal detection and tracking. The prototype has been developed using a DJI Mavic Pro 2 and tested over a global system for mobile communications (GSM) network. The Faster-RCNN Resnet 101 architecture is used for transfer learning. Inference is performed with a frame sampling technique to address the required trade-off between precision, processing speed, and live video feed synchronisation. Inference models are hosted on a web platform and video streams from the drone (using OcuSync) are transmitted to a real-time messaging protocol (RTMP) server for subsequent classification. During training, the best model achieves a mean average precision (mAP) of 0.83 intersection over union (@IOU) 0.50 and 0.69 @IOU 0.75, respectively. On testing the system in Knowsley Safari our prototype was able to achieve the following: sensitivity (Sen), 0.91 (0.869, 0.94); specificity (Spec), 0.78 (0.74, 0.82); and an accuracy (ACC), 0.84 (0.81, 0.87) when detecting rhinos, and Sen, 1.00 (1.00, 1.00); Spec, 1.00 (1.00, 1.00); and an ACC, 1.00 (1.00, 1.00) when detecting cars.","PeriodicalId":45619,"journal":{"name":"Journal of Unmanned Vehicle Systems","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2021-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48899545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this study, we aim to develop ways to translate raw drone data directly into actionable insights, enabling management decisions to be made directly from drone data. Drone photogrammetric data and data analytics were used to model stand-level immediate tending need and cost in regeneration forests. Field reference data were used to train and validate a logistic model for the binary classification of immediate tending need and a multiple linear regression model to predict the cost of performing the tending operation. The performance of the models derived from drone data was compared to models utilizing the following alternative data sources: airborne laser scanning (ALS) data, prior information from forest management plans (Prior), and the combinations drone + Prior and ALS + Prior. The use of drone data and prior information outperformed the remaining alternatives in the classification of tending needs, whereas drone data alone resulted in the most accurate cost models. Our results are encouraging for further use of drones in the operational management of regeneration forests and show that drone data and data analytics are useful for deriving actionable insights.
{"title":"Drone data for decision making in regeneration forests: from raw data to actionable insights1","authors":"S. Puliti, A. Granhus","doi":"10.1139/juvs-2020-0029","DOIUrl":"https://doi.org/10.1139/juvs-2020-0029","url":null,"abstract":"In this study, we aim at developing ways to directly translate raw drone data into actionable insights, thus enabling us to make management decisions directly from drone data. Drone photogrammetric data and data analytics were used to model stand-level immediate tending need and cost in regeneration forests. Field reference data were used to train and validate a logistic model for the binary classification of immediate tending need and a multiple linear regression model to predict the cost to perform the tending operation. The performance of the models derived from drone data was compared to models utilizing the following alternative data sources: airborne laser scanning data (ALS), prior information from forest management plans (Prior) and the combination of drone +Prior and ALS +Prior. The use of drone data and prior information outperformed the remaining alternatives in terms of classification of tending needs, whereas drone data alone resulted in the most accurate cost models. Our results are encouraging for further use of drones in the operational management of regeneration forests and show that drone data and data analytics are useful for deriving actionable insights.","PeriodicalId":45619,"journal":{"name":"Journal of Unmanned Vehicle Systems","volume":"1 1","pages":""},"PeriodicalIF":2.3,"publicationDate":"2020-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41343331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
F. Allaire, G. Labonté, Vincent Roberge, M. Tarbouchi
Extensive literature reviews on trajectory planning all agree on one point: the lack of a common benchmark against which the various implementations in this field can be compared. This work presents an evaluation function for fixed-wing unmanned aerial vehicle (UAV) trajectories that covers four essential flyability criteria and three optimization criteria applicable to a wide variety of missions, while remaining open to the addition of further criteria. This work also presents a series of 20 scenarios covering a wide variety of possible conditions, allowing the quality of planned fixed-wing UAV trajectories to be characterized. It proposes combining these two elements into a detailed test environment to serve as a benchmark for future work on fixed-wing UAV trajectory planning.
{"title":"Point de référence pour la planification de trajectoires d’UAV à voilure fixe","authors":"F. Allaire, G. Labonté, Vincent Roberge, M. Tarbouchi","doi":"10.1139/juvs-2019-0022","DOIUrl":"https://doi.org/10.1139/juvs-2019-0022","url":null,"abstract":"Les revues étendues de la littérature sur la planification de trajectoires s’entendent toutes sur un point commun : le manque d’un point de référence pour permettre de comparer les différentes mises-en-œuvre dans ce domaine. Ce travail présente une fonction d’évaluation de trajectoires de véhicules aériens sans pilote (UAV) à voilure fixe qui couvrent quatre critères de volabilité essentiels et trois critères d’optimisation qui peuvent être applicables à une grande variété de mission tout en étant adaptables à l’ajout de critères supplémentaires. Ce travail présente aussi une série de 20 scénarios permettant de couvrir une grande variété de conditions possibles pour permettre de caractériser la qualité des trajectoires planifiées pour un UAV à voilure fixe. Ce travail propose de combiner ces deux éléments pour constituer un environnement de test détaillé comme point de référence pour les futurs travaux sur la planification de trajectoires d’UAV à voilure fixe.","PeriodicalId":45619,"journal":{"name":"Journal of Unmanned Vehicle Systems","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2020-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48504910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unmanned aerial vehicles (UAVs) are established, valuable tools for wildlife surveys in marine and terrestrial environments; however, they are seldom utilized in freshwater ecosystems. Baseline data are therefore needed on the use of UAVs in lotic environments that balance flight parameters (e.g., altitude and noise level) with image quality while minimizing disturbance to individuals. Moreover, traditional high-cost UAVs may present challenges to researchers with limited funding who are conducting rapid assessments of species presence. Emerging, affordable UAV systems can provide such preliminary data, albeit with caveats on data reliability. We tested a low-cost UAV system to document freshwater turtle presence, species distribution, and habitat use in a small North Carolina wetland. We observed minimal instances of turtles fleeing basking sites (∼0.7%), as this UAV system was only ∼2.1 dB above ambient noise levels at an altitude of 20 m. Freshwater turtles were found primarily in algal mat basking habitats, with highly variable numbers observed across locations and flights, likely due to image quality reliability and altitude. Our affordable UAV system was successful in providing baseline information on species presence, size distribution, and habitat preference of turtles in freshwater ecosystems.
{"title":"Preliminary data on an affordable UAV system to survey for freshwater turtles: advantages and disadvantages of low-cost drones","authors":"J. Escobar, Mark A. Rollins, S. Unger","doi":"10.1139/juvs-2018-0037","DOIUrl":"https://doi.org/10.1139/juvs-2018-0037","url":null,"abstract":"Unmanned aerial vehicles (UAVs) are established, valuable tools for wildlife surveys in marine and terrestrial environments; however, they are seldom utilized in freshwater ecosystems. Therefore, baseline data on the use of UAVs in lotic environments are needed that balances flight parameters (e.g., altitude and noise level) with image quality, while minimizing disturbance to individuals. Moreover, the traditional high-cost UAVs may present challenges to researchers conducting rapid assessments on species presence with limited funding. However, emerging, affordable UAV systems can provide this preliminary data to researchers, albeit with caveats on reliability of data. We tested a low-cost UAV system to document freshwater turtle presence, species distribution, and habitat use in a small North Carolina wetland. We observed minimal instances of turtles fleeing basking sites (∼0.7%), as this UAV system was only ∼2.1 dB above ambient noise levels at an altitude of 20 m. Freshwater turtles were found primarily in algal mat basking habitats with highly variable numbers observed across locations and flights, likely due to image quality reliability and altitude. Our affordable UAV system was successful in providing baseline information on species presence, size distribution, and habitat preference of turtles in freshwater ecosystems.","PeriodicalId":45619,"journal":{"name":"Journal of Unmanned Vehicle Systems","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2020-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1139/juvs-2018-0037","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49106437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. V. D. Sluijs, Glen MacKay, L. Andrew, Naomi Smethurst, T. D. Andrews
Indigenous peoples of Canada’s North have long made use of boreal forest products, with wooden drift fences built to direct caribou movement towards kill sites as unique examples. Caribou fences are of archaeological and ecological significance, yet sparsely distributed and increasingly at risk from wildfire. Costly remote field logistics require efficient prior verification of fence locations and rapid on-site documentation of structure and landscape context. Unmanned aerial vehicle (UAV) and very high-resolution (VHR) satellite imagery were used for detailed site recording and detection of coarse woody debris (CWD) objects under challenging Subarctic alpine woodland conditions. UAVs enabled the discovery of previously unknown wooden structures and revealed extensive use of CWD (n = 1745, total length = 2682 m, total volume = 16.7 m³). The methodology detected CWD objects much smaller than previously reported in the remote sensing literature (mean 1.5 m long, 0.09 m wide), substantiating a high spatial resolution requirement for detection. Structurally, the fences were not uniformly left on the landscape. Permafrost patterned ground combined with small CWD contributions at the pixel level complicated identification through VHR data sets. UAV outputs significantly enriched field techniques, supported a deeper understanding of caribou fences as a hunting technology, and will aid ongoing archaeological interpretation and time-series comparisons of change agents.
{"title":"Archaeological documentation of wood caribou fences using unmanned aerial vehicle and very high-resolution satellite imagery in the Mackenzie Mountains, Northwest Territories","authors":"J. V. D. Sluijs, Glen MacKay, L. Andrew, Naomi Smethurst, T. D. Andrews","doi":"10.1139/juvs-2020-0007","DOIUrl":"https://doi.org/10.1139/juvs-2020-0007","url":null,"abstract":"Indigenous peoples of Canada’s North have long made use of boreal forest products, with wooden drift fences to direct caribou movement towards kill sites as unique examples. Caribou fences are of archaeological and ecological significance, yet sparsely distributed and increasingly at risk to wildfire. Costly remote field logistics requires efficient prior fence verification and rapid on-site documentation of structure and landscape context. Unmanned aerial vehicle (UAV) and very high-resolution (VHR) satellite imagery were used for detailed site recording and detection of coarse woody debris (CWD) objects under challenging Subarctic alpine woodlands conditions. UAVs enabled discovery of previously unknown wooden structures and revealed extensive use of CWD (n = 1745, total length = 2682 m, total volume = 16.7 m3). The methodology detected CWD objects much smaller than previously reported in remote sensing literature (mean 1.5 m long, 0.09 m wide), substantiating a high spatial resolution requirement for detection. Structurally, the fences were not uniformly left on the landscape. Permafrost patterned ground combined with small CWD contributions at the pixel level complicated identification through VHR data sets. UAV outputs significantly enriched field techniques and supported a deeper understanding of caribou fences as a hunting technology, and they will aid ongoing archaeological interpretation and time-series comparisons of change agents.","PeriodicalId":45619,"journal":{"name":"Journal of Unmanned Vehicle Systems","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2020-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42348702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tree species identification at the individual tree level is crucial for forest operations and management, yet its automated mapping remains challenging. Emerging technology, such as high-resolution imagery from unmanned aerial vehicles (UAVs) that is now becoming part of every forester’s surveillance kit, can potentially provide a solution to better characterize the tree canopy. To address this need, we have developed an approach based on a deep Convolutional Neural Network (CNN) to classify forest tree species at the individual tree level using high-resolution RGB images acquired from a consumer-grade camera mounted on a UAV platform. This work explores the ability of the Dense Convolutional Network (DenseNet) to classify economically important coniferous tree species common in eastern Canada. The network was trained using multitemporal images captured under varying acquisition parameters to include seasonal, temporal, illumination, and angular variability. Validation of this model using distinct images over a mixed-wood forest in Ontario, Canada, showed over 84% classification accuracy in distinguishing five predominant species of coniferous trees. The model remains highly robust even when using images taken during different seasons and times, and with varying illumination and angles.
{"title":"Individual tree species identification using Dense Convolutional Network (DenseNet) on multitemporal RGB images from UAV","authors":"Sowmya Natesan, C. Armenakis, U. Vepakomma","doi":"10.1139/juvs-2020-0014","DOIUrl":"https://doi.org/10.1139/juvs-2020-0014","url":null,"abstract":"Tree species identification at the individual tree level is crucial for forest operations and management, yet its automated mapping remains challenging. Emerging technology, such as the high-resolution imagery from unmanned aerial vehicles (UAV) that is now becoming part of every forester’s surveillance kit, can potentially provide a solution to better characterize the tree canopy. To address this need, we have developed an approach based on a deep Convolutional Neural Network (CNN) to classify forest tree species at the individual tree-level that uses high-resolution RGB images acquired from a consumer-grade camera mounted on a UAV platform. This work explores the ability of the Dense Convolutional Network (DenseNet) to classify commonly available economic coniferous tree species in eastern Canada. The network was trained using multitemporal images captured under varying acquisition parameters to include seasonal, temporal, illumination, and angular variability. Validation of this model using distinct images over a mixed-wood forest in Ontario, Canada, showed over 84% classification accuracy in distinguishing five predominant species of coniferous trees. The model remains highly robust even when using images taken during different seasons and times, and with varying illumination and angles.","PeriodicalId":45619,"journal":{"name":"Journal of Unmanned Vehicle Systems","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2020-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1139/juvs-2020-0014","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42721934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, the frequency and severity of forest fires have increased, compelling research communities to actively search for early forest fire detection and suppression methods. Remote sensing using computer vision techniques can provide early detection over a large field of view, along with additional information such as the location and severity of the fire. Over the last few years, the feasibility of forest fire detection combining computer vision with aerial platforms such as manned and unmanned aerial vehicles, especially low-cost, small-size unmanned aerial vehicles, has been experimented with and has shown promise by providing detection, geolocation, and fire-characteristic information. This paper adds to the existing research by proposing a novel method of detecting forest fire using fire-specific color features and multi-color space local binary patterns of both flame and smoke signatures with a single artificial neural network. The training and evaluation images in this paper were mostly obtained from aerial platforms under challenging circumstances such as minuscule flame pixels, varying illumination and range, complex backgrounds, occluded flame and smoke regions, and smoke blending into the background. The proposed method achieved F1 scores of 0.84 for flame and 0.90 for smoke while maintaining a processing speed of 19 frames per second. It outperformed support vector machine, random forest, and Bayesian classifiers as well as YOLOv3, and demonstrated the capability of detecting challenging flame and smoke regions across a wide range of sizes, colors, textures, and opacities.
{"title":"Forest fire flame and smoke detection from UAV-captured images using fire-specific color features and multi-color space local binary pattern","authors":"Faruk Hossain, Youmin Zhang, Masuda A. Tonima","doi":"10.1139/juvs-2020-0009","DOIUrl":"https://doi.org/10.1139/juvs-2020-0009","url":null,"abstract":"In recent years, the frequency and severity of forest fire occurrence have increased, compelling the research communities to actively search for early forest fire detection and suppression methods. Remote sensing using computer vision techniques can provide early detection from a large field of view along with providing additional information such as location and severity of the fire. Over the last few years, the feasibility of forest fire detection by combining computer vision and aerial platforms such as manned and unmanned aerial vehicles, especially low cost and small-size unmanned aerial vehicles, have been experimented with and have shown promise by providing detection, geolocation, and fire characteristic information. This paper adds to the existing research by proposing a novel method of detecting forest fire using color and multi-color space local binary pattern of both flame and smoke signatures and a single artificial neural network. The training and evaluation images in this paper have been mostly obtained from aerial platforms with challenging circumstances such as minuscule flame pixels, varying illumination and range, complex backgrounds, occluded flame and smoke regions, and smoke blending into the background. The proposed method has achieved F1 scores of 0.84 for flame and 0.90 for smoke while maintaining a processing speed of 19 frames per second. It has outperformed support vector machine, random forest, Bayesian classifiers and YOLOv3, and has demonstrated the capability of detecting challenging flame and smoke regions of a wide range of sizes, colors, textures, and opacity.","PeriodicalId":45619,"journal":{"name":"Journal of Unmanned Vehicle Systems","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2020-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1139/juvs-2020-0009","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43030875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
P. Highland, J. Williams, M. Yazvec, A. Dideriksen, N. Corcoran, K. Woodruff, C. Thompson, L. Kirby, E. Chun, H. Kousheh, J. Stoltz, T. Schnell
With more unmanned aircraft (UA) becoming airborne each day, an already high manned-aircraft-to-UA exposure rate continues to grow. Pilots and rulemaking authorities realize that UA visibility is a real, but unquantified, threat to operations under the see-and-avoid concept. To quantify the threat, a novel contrast-based UA visibility model is constructed here using collected empirical data as well as previous work on the factors affecting visibility. This work showed that UA visibility of <1300 m makes a midair collision a serious threat if a manned aircraft and a UA are on a collision course while operating under the see-and-avoid concept. Similarly, this work also showed that a midair collision may be unavoidable when UA visibility is <400 m. Validating pilot and rulemaking authority concerns, this work demonstrated that UA visibility distances of <1300 and <400 m occur often in the real world. Finally, the model produced UA visibility lookup tables that may prove useful to rulemaking authorities such as the U.S. Federal Aviation Administration and International Civil Aviation Organization for future work on the proof of equivalency of detect-and-avoid operations. Until then, flying at slower airspeeds in the vicinity of UA may improve pilots' safety margins.
{"title":"Modelling of unmanned aircraft visibility for see-and-avoid operations","authors":"P. Highland, J. Williams, M. Yazvec, A. Dideriksen, N. Corcoran, K. Woodruff, C. Thompson, L. Kirby, E. Chun, H. Kousheh, J. Stoltz, T. Schnell","doi":"10.1139/juvs-2020-0011","DOIUrl":"https://doi.org/10.1139/juvs-2020-0011","url":null,"abstract":"With more unmanned aircraft (UA) becoming airborne each day, an already high manned aircraft to UA exposure rate continues to grow. Pilots and rulemaking authorities realize that UA visibility is a real, but unquantified, threat to operations under the see-and-avoid concept. To finally quantify the threat, a novel contrast-based UA visibility model is constructed here using collected empirical data as well as previous work on the factors affecting visibility. This work showed that UA visibility <1300 m makes a midair collision a serious threat if a manned aircraft and a UA are on a collision course while operating under the see-and-avoid concept. Similarly, this work also showed that a midair collision may be unavoidable when UA visibility is <400 m. Validating pilot and rulemaking authority concerns, this work demonstrated that UA visibility distances <1300 and <400 m occur often in the real world. Finally, the model produced UA visibility lookup tables that may prove useful to rulemaking authorities such as the U.S. Federal Aviation Administration and International Civil Aviation Organization for future work in the proof of equivalency of detect and avoid operations. Until then, pilots flying at slower airspeeds in the vicinity of UA may improve safety margins.","PeriodicalId":45619,"journal":{"name":"Journal of Unmanned Vehicle Systems","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2020-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42754597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}