Exploring NWS Forecasters’ Assessment of AI Guidance Trustworthiness
Mariana G. Cains, Christopher D. Wirz, Julie L. Demuth, Ann Bostrom, David John Gagne, Amy McGovern, R. Sobash, Deianna Madlambayan
Weather and Forecasting, published 2024-07-09. DOI: 10.1175/waf-d-23-0180.1
Abstract
As artificial intelligence (AI) methods are increasingly used to develop new guidance intended for operational use by forecasters, it is critical to evaluate whether forecasters deem the guidance trustworthy. Past trust-related AI research suggests that certain attributes (e.g., understanding how the AI was trained, interactivity, performance) contribute to users perceiving the AI as trustworthy. However, little research has examined the role of these and other attributes for weather forecasters. In this study, we conducted 16 online interviews with National Weather Service (NWS) forecasters to examine (a) how they make guidance use decisions, and (b) how the AI model technique used, training, input variables, performance, and developers, as well as interacting with the model output, influenced their assessments of the trustworthiness of new guidance. The interviews pertained to either a random forest model predicting the probability of severe hail or a 2D convolutional neural network model predicting the probability of storm mode. Taken as a whole, our findings illustrate how forecasters’ assessment of AI guidance trustworthiness is a process that occurs over time rather than automatically or at first introduction. We recommend developers center end users when creating new AI guidance tools, making end users integral to their thinking and efforts. This approach is essential for the development of useful and used tools. The details of these findings can help AI developers understand how forecasters perceive AI guidance and inform AI development and refinement efforts.
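For readers unfamiliar with the kind of guidance discussed, the sketch below shows, in broad strokes, how a random forest can turn storm-environment predictors into a probability of severe hail. It is a minimal illustration using synthetic data and hypothetical feature names (cape_jkg, updraft_helicity, etc.), not the authors' operational model, predictors, or training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(seed=0)

# Hypothetical storm-environment predictors (names are illustrative only,
# not the variables used in the paper's model).
feature_names = ["cape_jkg", "storm_relative_helicity", "lapse_rate_700_500",
                 "updraft_helicity", "freezing_level_m"]
n_storms = 500
X = rng.normal(size=(n_storms, len(feature_names)))

# Synthetic labels: 1 = severe hail observed, 0 = not observed.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n_storms) > 0.8).astype(int)

# Random forest trained to produce probabilistic severe-hail guidance.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# Probability of severe hail for new cases: the fraction of trees voting "severe".
p_hail = model.predict_proba(X[:5])[:, 1]
for prob, label in zip(p_hail, y[:5]):
    print(f"P(severe hail) = {prob:.2f}  (synthetic truth: {label})")
```

In an operational setting, the predictors and verification labels would come from numerical weather prediction output and storm reports; here they are random placeholders used only to make the example runnable.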