Using computer vision to detect and segment fire behavior classifications in UAS-captured images

This is a Preprint and has not been peer reviewed. This is version 1 of this Preprint.

Authors

Brett Lawrence, Emerson de Lemmus

Abstract

The widely adaptable capabilities of artificial intelligence, in particular deep learning and computer vision, have led to significant research output regarding fire and smoke detection. Previous studies often focus on themes like early fire detection, increased operational awareness, and post-fire assessment. To further test the capabilities of deep learning detection in these scenarios, we collected and labeled a unique aerial image dataset to determine whether specific types of fire behavior could be reliably detected in prescribed fire settings. Our 960 labeled images were sourced from over 20.97 hours of UAS video collected during prescribed fire operations covering a large region of Texas and Louisiana, U.S. National Wildfire Coordinating Group (NWCG) fire behavior observations and descriptions served as a reference for determining fire behavior classes during labeling. YOLOv8 models were trained on NWCG Rank 1-3 fire behavior descriptions in grassland, shrubland, forested, and combined fire regimes within our study area. Models were first trained and validated on isolated image objects of fire behavior, and then on segmenting fire behavior in their original parent images. Models trained on isolated image objects of fire behavior consistently performed at a mAP of 0.808 or higher, with combined fire regimes producing the best results (mAP = 0.897). Most segmentation models performed relatively poorly, except for the forest regime model, which achieved box and mask mAP of 0.59 and 0.611, respectively. Our results indicate that classifying fire behavior with computer vision is possible in most fire regimes and fuel models, whereas segmenting fire behavior from background information is relatively difficult. However, it may be a manageable task with enough data and when models are developed for a specific fire regime.
With an increasing number of destructive wildfires and new challenges confronting fire managers, identifying how new technologies can quickly assess wildfire situations can improve wildfire responder awareness. Our conclusion is that levels of abstraction deeper than mere detection of smoke or fire are possible using computer vision, and could make even more detailed fire monitoring possible.
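The mAP scores reported above rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes. As an illustration only (the box format and values below are assumptions, not taken from the study), a minimal sketch of the IoU computation that detection evaluators such as YOLOv8's build on:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) corner tuples."""
    # Corners of the intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: two 2x2 boxes overlapping in a 1x1 region -> IoU = 1 / 7
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))
```

A prediction is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5); mAP then averages precision over recall levels, classes, and (for metrics like mAP50-95) a range of IoU thresholds.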

DOI

https://doi.org/10.31223/X5710X

Subjects

Artificial Intelligence and Robotics, Forest Management, Natural Resources and Conservation

Keywords

YOLO, computer vision, fire behavior, fire detection, UAS

Dates

Published: 2024-02-21 18:51

Last Updated: 2024-02-21 23:51

License

CC-By Attribution-NonCommercial-NoDerivatives 4.0 International

Additional Metadata

Conflict of interest statement:
None

Data Availability:
The labeled image dataset of combined fire regimes is available at: https://app.roboflow.com/raven-environmental-services/fire-behavior/9