Statistical evaluation of different surface precipitation-type algorithms and its implications for NWP prediction and operational decision making
Heather Dawn Reeves, Daniel D. Tripp, Michael E. Baldwin, Andrew A. Rosenow
Weather and Forecasting, published 2023-09-29
DOI: https://doi.org/10.1175/waf-d-23-0081.1
Citations: 0
Abstract
Several new precipitation-type algorithms have been developed to improve NWP predictions of surface precipitation type during winter storms. In this study, we evaluate whether it is possible to objectively declare one algorithm superior to another by comparing three precipitation-type algorithms validated with different techniques. The apparent skill of the algorithms depends on the choice of performance metric: an algorithm can score well on some metrics and poorly on others. It is also possible for an algorithm to have high skill at diagnosing some precipitation types and poor skill with others. Algorithm skill is also highly dependent on the choice of verification data and methodology. Simply by changing which data are treated as "truth," we were able to substantially change the apparent skill of every algorithm evaluated herein. These findings suggest an objective declaration of algorithm "goodness" is not possible; moreover, they indicate that an unambiguous declaration of superiority is difficult, if not impossible. A contributing factor to algorithm performance is uncertainty in the microphysical processes that lead to phase changes of falling hydrometeors, which are treated differently by each algorithm, resulting in different biases in near-0°C environments. These biases are evident even when the algorithms are applied to ensemble forecasts. Hence, a multi-algorithm approach is advocated to account for this source of uncertainty. Though the apparent performance of this approach still depends on the choice of performance metric and precipitation type, a case-study analysis shows it has the potential to provide better decision support than the single-algorithm approach.
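To make the metric-sensitivity point concrete, the minimal Python sketch below (not taken from the paper; the class labels, synthetic forecasts, and metric choices are assumptions for illustration only) computes a multicategory Heidke skill score and a per-type probability of detection for two toy "algorithms," showing how the apparent ranking can change depending on which score is examined.

    # Hypothetical sketch: how different verification metrics can favor
    # different precipitation-type algorithms. All data here are fabricated
    # toy values, not the paper's verification set.
    import numpy as np

    TYPES = ["rain", "snow", "freezing_rain", "ice_pellets"]

    def contingency_table(obs, fcst, n_classes):
        """n_classes x n_classes table: rows = observed class, cols = forecast class."""
        table = np.zeros((n_classes, n_classes), dtype=int)
        for o, f in zip(obs, fcst):
            table[o, f] += 1
        return table

    def heidke_skill_score(table):
        """Standard multicategory Heidke skill score from a contingency table."""
        n = table.sum()
        correct = np.trace(table) / n
        expected = np.sum(table.sum(axis=0) * table.sum(axis=1)) / n**2
        return (correct - expected) / (1.0 - expected)

    def pod_per_class(table):
        """Probability of detection (hits / observed occurrences) for each class."""
        hits = np.diag(table)
        obs_totals = table.sum(axis=1)
        return np.where(obs_totals > 0, hits / obs_totals, np.nan)

    # Synthetic observations and two synthetic "algorithms": one tends to be
    # more accurate overall, the other rarely misses the rare freezing-rain class.
    rng = np.random.default_rng(0)
    obs = rng.choice(4, size=500, p=[0.5, 0.35, 0.1, 0.05])
    alg_a = np.where(rng.random(500) < 0.8, obs, rng.choice(4, size=500))
    alg_b = np.where(obs == 2, obs,
                     np.where(rng.random(500) < 0.7, obs, rng.choice(4, size=500)))

    for name, fcst in [("A", alg_a), ("B", alg_b)]:
        table = contingency_table(obs, fcst, len(TYPES))
        print(name, "HSS =", round(heidke_skill_score(table), 3),
              "POD(freezing_rain) =", round(pod_per_class(table)[2], 3))

Run on these toy data, an overall score such as the Heidke skill score and a category-specific score such as POD for freezing rain need not agree on which algorithm is "better," which is the kind of metric dependence the abstract describes.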
About the journal:
Weather and Forecasting (WAF) (ISSN: 0882-8156; eISSN: 1520-0434) publishes research that is relevant to operational forecasting. This includes papers on significant weather events, forecasting techniques, forecast verification, model parameterizations, data assimilation, model ensembles, statistical postprocessing techniques, the transfer of research results to the forecasting community, and the societal use and value of forecasts. The scope of WAF includes research relevant to forecast lead times ranging from short-term “nowcasts” through seasonal time scales out to approximately two years.