Using computer vision to classify, locate and segment fire behavior in UAS-captured images

IF 5.7 Q1 ENVIRONMENTAL SCIENCES Science of Remote Sensing Pub Date: 2024-09-28 DOI: 10.1016/j.srs.2024.100167
Brett L. Lawrence, Emerson de Lemmus
{"title":"利用计算机视觉对无人机系统捕捉到的图像中的火灾行为进行分类、定位和分割","authors":"Brett L. Lawrence ,&nbsp;Emerson de Lemmus","doi":"10.1016/j.srs.2024.100167","DOIUrl":null,"url":null,"abstract":"<div><div>The widely adaptable capabilities of artificial intelligence, in particular deep learning and computer vision have led to significant research output regarding flame and smoke detection. The composition of flame and smoke, also described as fire behavior, can be considerably different depending on factors like weather, fuels, and the specific landscape fire is being observed on. The ability to detect definable classes of fire behavior using computer vision has not been explored and could be helpful given it often dictates how firefighters respond to fire situations. To test whether types of fire behavior could be reliably classified, we collected and labeled a unique unmanned aerial system (UAS) image dataset of fire behavior classifications to be trained and validated using You Only Look Once (YOLO) detection models. Our 960 labeled images were sourced from over 21 h of UAS video collected during prescribed fire operations covering a large region of Texas and Louisiana, United States. National Wildfire Coordinating Group (NWCG) fire behavior observations and descriptions served as a reference for determining fire behavior classes during labeling. YOLOv8 models were trained on NWCG Rank 1–3 fire behavior descriptions in grassland, shrubland, forested, and combined fire regimes within our study area. Models were first trained and validated on classifying isolated image objects of fire behavior, and then separately trained to locate and segment fire behavior classifications in UAS images. Models trained to classify isolated image objects of fire behavior consistently performed at a mAP of 0.808 or higher, with combined fire regimes producing the best results (mAP = 0.897). Most segmentation models performed relatively poorly, except for the forest regime model at a box (locate) and mask (segment) mAP of 0.59 and 0.611, respectively. Our results indicate that classifying fire behavior with computer vision is possible in different fire regimes and fuel models, whereas locating and segmenting fire behavior types around background information is relatively difficult. However, it may be a manageable task with enough data, and when models are developed for a specific fire regime. With an increasing number of destructive wildfires and new challenges confronting fire managers, identifying how new technologies can quickly assess wildfire situations can assist wildfire responder awareness. Our conclusion is that levels of abstraction deeper than just detection of smoke or flame are possible using computer vision and could make even more detailed aerial fire monitoring possible using a UAS.</div></div>","PeriodicalId":101147,"journal":{"name":"Science of Remote Sensing","volume":"10 ","pages":"Article 100167"},"PeriodicalIF":5.7000,"publicationDate":"2024-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Using computer vision to classify, locate and segment fire behavior in UAS-captured images\",\"authors\":\"Brett L. 
Lawrence ,&nbsp;Emerson de Lemmus\",\"doi\":\"10.1016/j.srs.2024.100167\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The widely adaptable capabilities of artificial intelligence, in particular deep learning and computer vision have led to significant research output regarding flame and smoke detection. The composition of flame and smoke, also described as fire behavior, can be considerably different depending on factors like weather, fuels, and the specific landscape fire is being observed on. The ability to detect definable classes of fire behavior using computer vision has not been explored and could be helpful given it often dictates how firefighters respond to fire situations. To test whether types of fire behavior could be reliably classified, we collected and labeled a unique unmanned aerial system (UAS) image dataset of fire behavior classifications to be trained and validated using You Only Look Once (YOLO) detection models. Our 960 labeled images were sourced from over 21 h of UAS video collected during prescribed fire operations covering a large region of Texas and Louisiana, United States. National Wildfire Coordinating Group (NWCG) fire behavior observations and descriptions served as a reference for determining fire behavior classes during labeling. YOLOv8 models were trained on NWCG Rank 1–3 fire behavior descriptions in grassland, shrubland, forested, and combined fire regimes within our study area. Models were first trained and validated on classifying isolated image objects of fire behavior, and then separately trained to locate and segment fire behavior classifications in UAS images. Models trained to classify isolated image objects of fire behavior consistently performed at a mAP of 0.808 or higher, with combined fire regimes producing the best results (mAP = 0.897). Most segmentation models performed relatively poorly, except for the forest regime model at a box (locate) and mask (segment) mAP of 0.59 and 0.611, respectively. Our results indicate that classifying fire behavior with computer vision is possible in different fire regimes and fuel models, whereas locating and segmenting fire behavior types around background information is relatively difficult. However, it may be a manageable task with enough data, and when models are developed for a specific fire regime. With an increasing number of destructive wildfires and new challenges confronting fire managers, identifying how new technologies can quickly assess wildfire situations can assist wildfire responder awareness. 
Our conclusion is that levels of abstraction deeper than just detection of smoke or flame are possible using computer vision and could make even more detailed aerial fire monitoring possible using a UAS.</div></div>\",\"PeriodicalId\":101147,\"journal\":{\"name\":\"Science of Remote Sensing\",\"volume\":\"10 \",\"pages\":\"Article 100167\"},\"PeriodicalIF\":5.7000,\"publicationDate\":\"2024-09-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Science of Remote Sensing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2666017224000518\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENVIRONMENTAL SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Science of Remote Sensing","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666017224000518","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENVIRONMENTAL SCIENCES","Score":null,"Total":0}
Citations: 0

Abstract

The widely adaptable capabilities of artificial intelligence, in particular deep learning and computer vision, have led to significant research output on flame and smoke detection. The composition of flame and smoke, also described as fire behavior, can differ considerably depending on factors such as weather, fuels, and the specific landscape on which the fire is observed. The ability to detect definable classes of fire behavior using computer vision has not been explored and could be helpful, given that fire behavior often dictates how firefighters respond to fire situations. To test whether types of fire behavior could be reliably classified, we collected and labeled a unique unmanned aerial system (UAS) image dataset of fire behavior classifications, to be trained and validated using You Only Look Once (YOLO) detection models. Our 960 labeled images were sourced from over 21 h of UAS video collected during prescribed fire operations covering a large region of Texas and Louisiana, United States. National Wildfire Coordinating Group (NWCG) fire behavior observations and descriptions served as a reference for determining fire behavior classes during labeling. YOLOv8 models were trained on NWCG Rank 1–3 fire behavior descriptions in grassland, shrubland, forested, and combined fire regimes within our study area. Models were first trained and validated on classifying isolated image objects of fire behavior, and then separately trained to locate and segment fire behavior classifications in UAS images. Models trained to classify isolated image objects of fire behavior consistently performed at a mAP of 0.808 or higher, with combined fire regimes producing the best results (mAP = 0.897). Most segmentation models performed relatively poorly, except for the forest regime model, which reached a box (locate) mAP of 0.59 and a mask (segment) mAP of 0.611. Our results indicate that classifying fire behavior with computer vision is possible across different fire regimes and fuel models, whereas locating and segmenting fire behavior types against background information is relatively difficult. However, it may become a manageable task with enough data, and when models are developed for a specific fire regime. With an increasing number of destructive wildfires and new challenges confronting fire managers, identifying how new technologies can quickly assess wildfire situations can improve wildfire responder awareness. Our conclusion is that levels of abstraction deeper than detection of smoke or flame alone are possible using computer vision, and could make even more detailed aerial fire monitoring possible using a UAS.
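The abstract describes a two-stage YOLOv8 workflow: classify isolated image objects of fire behavior first, then locate and segment those classes in full UAS frames. A minimal sketch of that workflow follows, written against the open-source Ultralytics Python API; the dataset paths, folder layout, and hyperparameters (fire_behavior_cls, fire_behavior_seg.yaml, epochs, image sizes) are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch, assuming the Ultralytics YOLOv8 package (pip install ultralytics).
# Paths and hyperparameters below are hypothetical placeholders.
from ultralytics import YOLO

# Stage 1: classification of isolated image objects (NWCG Rank 1-3 classes).
# Assumes an ImageNet-style layout: fire_behavior_cls/{train,val}/<class>/*.jpg
cls_model = YOLO("yolov8n-cls.pt")                 # pretrained classification weights
cls_model.train(data="fire_behavior_cls", epochs=100, imgsz=224)
cls_metrics = cls_model.val()
print(f"top-1 accuracy: {cls_metrics.top1:.3f}")

# Stage 2: locating (boxes) and segmenting (masks) fire behavior in full UAS
# frames, driven by a YOLO-format data.yaml listing image paths and class names.
seg_model = YOLO("yolov8n-seg.pt")                 # pretrained segmentation weights
seg_model.train(data="fire_behavior_seg.yaml", epochs=100, imgsz=640)
seg_metrics = seg_model.val()
print(f"box mAP50:  {seg_metrics.box.map50:.3f}")  # 'locate' performance
print(f"mask mAP50: {seg_metrics.seg.map50:.3f}")  # 'segment' performance
```

Keeping the two stages separate mirrors the paper's finding: classifying isolated objects was the consistently easier task, while the box-and-mask stage is the one most likely to need regime-specific training data.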