EvalHyd v0.1.2: a polyglot tool for the evaluation of deterministic and probabilistic streamflow predictions
T. Hallouin, F. Bourgin, C. Perrin, Maria-Helena Ramos, V. Andréassian
Geoscientific Model Development, 10 June 2024. DOI: 10.5194/gmd-17-4561-2024 (https://doi.org/10.5194/gmd-17-4561-2024)
Abstract
Abstract. The evaluation of streamflow predictions forms an essential part of most hydrological modelling studies published in the literature. The evaluation process typically involves the computation of evaluation metrics, but it can also include the preliminary processing of the predictions as well as the subsequent processing of the computed metrics. For published hydrological studies to be reproducible, these steps need to be carefully documented by the authors. A single tool performing all of these tasks would simplify both the documentation work for authors and the reproduction of results for readers. However, such a tool needs to be polyglot (i.e. usable in a variety of programming languages) and openly accessible so that it can be used by everyone in the hydrological community. To this end, we developed a new tool named evalhyd that offers metrics and functionalities for the evaluation of deterministic and probabilistic streamflow predictions. It is open source, and it can be used in Python, in R, in C++, or as a command line tool. This article describes the tool and illustrates its functionalities using Global Flood Awareness System (GloFAS) reforecasts over France as an example data set.
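To make the polyglot claim concrete, the sketch below illustrates what an evaluation might look like from Python. It is a minimal illustration, not an excerpt from the paper: it assumes the Python bindings are installed (e.g. as the `evalhyd-python` package), that deterministic and probabilistic evaluations are exposed through the `evald` and `evalp` entry points described in the article, and that keyword names such as `q_obs`, `q_prd`, `metrics`, `q_thr`, and `events`, as well as the array layouts, match the documented interface.

```python
# A minimal sketch of calling evalhyd from Python. Assumptions (to be
# checked against the evalhyd documentation): the package is installed
# (e.g. `pip install evalhyd-python`), deterministic evaluation goes
# through `evald` and probabilistic evaluation through `evalp`, and the
# arrays follow a (sites/series, [lead times, members,] time) layout.
import numpy as np
import evalhyd

# observed streamflow: 1 site x 5 time steps
obs = np.array([[4.7, 4.3, 5.5, 2.7, 4.1]])

# deterministic predictions: 1 series x 5 time steps
prd = np.array([[5.3, 4.2, 5.7, 2.3, 3.1]])

# deterministic evaluation, e.g. Nash-Sutcliffe efficiency;
# evald is assumed to return one array per requested metric
nse, = evalhyd.evald(q_obs=obs, q_prd=prd, metrics=["NSE"])
print("NSE:", nse.squeeze())

# ensemble predictions: 1 site x 1 lead time x 3 members x 5 time steps
ens = np.array([[[[5.3, 4.2, 5.7, 2.3, 3.1],
                  [4.3, 4.2, 4.7, 4.3, 3.3],
                  [5.3, 5.2, 5.7, 2.3, 3.9]]]])

# probabilistic evaluation, e.g. Brier score for exceedance ("high")
# events above a streamflow threshold of 4.0 (assumed keywords)
bs, = evalhyd.evalp(q_obs=obs, q_prd=ens, metrics=["BS"],
                    q_thr=np.array([[4.0]]), events="high")
print("BS:", bs.squeeze())
```

According to the abstract, the same evaluation can equivalently be scripted in R, in C++, or run from the command line tool; only the calling syntax changes, not the metric definitions.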
Journal description:
Geoscientific Model Development (GMD) is an international scientific journal dedicated to the publication and public discussion of the description, development, and evaluation of numerical models of the Earth system and its components. The following manuscript types can be considered for peer-reviewed publication:
* geoscientific model descriptions, from statistical models to box models to GCMs;
* development and technical papers, describing developments such as new parameterizations or technical aspects of running models such as the reproducibility of results;
* new methods for assessment of models, including work on developing new metrics for assessing model performance and novel ways of comparing model results with observational data;
* papers describing new standard experiments for assessing model performance or novel ways of comparing model results with observational data;
* model experiment descriptions, including experimental details and project protocols;
* full evaluations of previously published models.