This is a Preprint and has not been peer reviewed. The published version of this Preprint is available: https://doi.org/10.1029/2022WR033918. This is version 3 of this Preprint.
Abstract
Building accurate rainfall-runoff models is an integral part of hydrological science and practice. The variety of modeling goals and applications has led to a large suite of evaluation metrics for these models. Yet hydrologists still put considerable trust in visual judgment, although it is unclear whether such judgment agrees or disagrees with existing quantitative metrics. In this study, we tasked 622 experts with comparing and judging more than 14,000 pairs of hydrographs from 13 different models. Our results show that expert opinion broadly agrees with quantitative metrics and yields a clear preference for a machine learning model over traditional hydrological models. The expert opinions are, however, subject to significant inconsistency. Nevertheless, where experts agree, we can predict their opinion purely from quantitative metrics, which indicates that the metrics sufficiently encode human preferences in a small set of numbers. While there remains room for improvement of quantitative metrics, we suggest that the hydrologic community should reinforce its benchmarking efforts and put more trust in these metrics.
DOI
https://doi.org/10.31223/X52938
Subjects
Earth Sciences, Hydrology, Physical Sciences and Mathematics, Water Resource Management
Keywords
hydrology, metrics, visual inspection, Rainfall-Runoff, expert judgment, machine learning
Dates
Published: 2022-10-19 03:38
Last Updated: 2023-02-07 07:38