Enhancing Hydrological Modeling with Transformers: A Case Study for 24-Hour Streamflow Prediction

This is a Preprint and has not been peer reviewed. The published version of this Preprint is available: https://doi.org/10.2166/wst.2024.110. This is version 1 of this Preprint.





Bekir Zahit Demiray, Muhammed Sit, Omer Mermer, Ibrahim Demir


In this paper, we address the critical task of 24-hour streamflow forecasting using advanced deep learning models, with a primary focus on the Transformer architecture, which has seen limited application in this specific task. We compare the performance of five models: Persistence, LSTM, Seq2Seq, GRU, and Transformer, across four distinct regions. The evaluation is based on three performance metrics: Nash-Sutcliffe Efficiency (NSE), Pearson's r, and Normalized Root Mean Square Error (NRMSE). Additionally, we investigate the impact of two data extension methods, zero-padding and persistence, on the models' predictive capabilities. Our findings highlight the Transformer's superiority in capturing complex temporal dependencies and patterns in the streamflow data, outperforming all other models in both accuracy and reliability. These insights emphasize the value of leveraging advanced deep learning techniques, such as the Transformer, in hydrological modeling and streamflow forecasting for effective water resource management and flood prediction.
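The three evaluation metrics named in the abstract can be sketched in NumPy as below. This is an illustrative sketch, not the authors' code; in particular, NRMSE admits several normalization conventions, and normalizing by the observed range is only one common choice assumed here.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the model
    is no better than predicting the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pearson_r(obs, sim):
    """Pearson correlation coefficient between observed and simulated flows."""
    return np.corrcoef(np.asarray(obs, float), np.asarray(sim, float))[0, 1]

def nrmse(obs, sim):
    """RMSE normalized by the observed range (one common convention;
    normalization by the mean or standard deviation is also used)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    return rmse / (obs.max() - obs.min())
```

Higher NSE and Pearson's r indicate better agreement with observations, while lower NRMSE indicates smaller normalized error.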




Environmental Engineering, Hydraulic Engineering


Rainfall-runoff modeling, deep learning, flood forecasting, Transformers, streamflow forecasting


Published: 2023-09-12 20:05


CC BY Attribution 4.0 International