This is a Preprint and has not been peer reviewed. The published version of this Preprint is available: https://doi.org/10.1016/j.envsoft.2025.106421. This is version 1 of this Preprint.

Can large language models effectively reason about adverse weather conditions?
Abstract
This paper seeks to answer the question: can Large Language Models (LLMs) effectively reason about adverse weather conditions? To address this question, we applied multiple LLMs to US National Weather Service (NWS) flood report data spanning June 2005 to September 2024. Bidirectional and Auto-Regressive Transformer (BART), Bidirectional Encoder Representations from Transformers (BERT), Large Language Model Meta AI (LLaMA-2), LLaMA-3, and LLaMA-3.1 were employed to categorize the data according to predefined labels. The methodology was applied to Charleston County, South Carolina, USA. Extreme events were unevenly distributed across the training period, with the “Cyclonic” category exhibiting far fewer instances than the “Flood” and “Thunderstorm” categories. Analysis suggests that LLaMA-3 reached peak performance at 60% of the dataset size, whereas the other LLMs peaked at approximately 80–100% of the dataset size. This study offers insight into the application of LLMs to reasoning about adverse weather conditions.
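The classification task described in the abstract, assigning one of a set of predefined event labels to NWS report text, can be illustrated with a minimal keyword-matching baseline. This is a hypothetical stand-in, not the paper's method: the study fine-tunes transformer LLMs (BART, BERT, LLaMA variants) for this step, and the keyword lists below are assumptions chosen only to mirror the abstract's "Cyclonic", "Thunderstorm", and "Flood" categories.

```python
# Hypothetical keyword baseline for labeling NWS report text.
# The paper uses fine-tuned LLM classifiers; this sketch only
# illustrates the input/output shape of the classification task.

# Assumed keyword lists, ordered so that rarer, more specific
# categories (e.g. "Cyclonic") are checked before broader ones.
KEYWORDS = {
    "Cyclonic": ["hurricane", "tropical storm", "cyclone"],
    "Thunderstorm": ["thunderstorm", "lightning", "hail"],
    "Flood": ["flood", "flash flood", "storm surge"],
}

def classify_report(text: str) -> str:
    """Return the first predefined label whose keywords appear in the text."""
    lowered = text.lower()
    for label, words in KEYWORDS.items():
        if any(word in lowered for word in words):
            return label
    return "Other"

report = "Flash flooding closed several roads in downtown Charleston."
print(classify_report(report))  # -> Flood
```

Because dictionaries preserve insertion order in Python 3.7+, placing "Cyclonic" first means a report mentioning both a hurricane and flooding is assigned the cyclonic label, one simple way to handle the class imbalance the abstract notes for that category.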
DOI
https://doi.org/10.31223/X5X44P
Subjects
Engineering
Keywords
Large Language Model, Text classification, LLaMA, BART, BERT, Adverse weather conditions
Dates
Published: 2025-04-24 21:26
Last Updated: 2025-04-24 21:26
License
CC BY Attribution 4.0 International
Additional Metadata
Conflict of interest statement:
The contact author has declared that none of the authors has any competing interests.
Data Availability:
https://github.com/Clemson-Hydroinformatics-Lab/HydroLLMs