Harishankar Vasudevanallur Subramanian, Casey Canfield, Daniel B. Shank, Matthew Kinnison
{"title":"Combining uncertainty information with AI recommendations supports calibration with domain knowledge","authors":"Harishankar Vasudevanallur Subramanian, Casey Canfield, Daniel B. Shank, Matthew Kinnison","doi":"10.1080/13669877.2023.2259406","DOIUrl":null,"url":null,"abstract":"AbstractThe use of Artificial Intelligence (AI) decision support is increasing in high-stakes contexts, such as healthcare, defense, and finance. Uncertainty information may help users better leverage AI predictions, especially when combined with their domain knowledge. We conducted a human-subject experiment with an online sample to examine the effects of presenting uncertainty information with AI recommendations. The experimental stimuli and task, which included identifying plant and animal images, are from an existing image recognition deep learning model, a popular approach to AI. The uncertainty information was predicted probabilities for whether each label was the true label. This information was presented numerically and visually. In the study, we tested the effect of AI recommendations in a within-subject comparison and uncertainty information in a between-subject comparison. The results suggest that AI recommendations increased both participants’ accuracy and confidence. Further, providing uncertainty information significantly increased accuracy but not confidence, suggesting that it may be effective for reducing overconfidence. In this task, participants tended to have higher domain knowledge for animals than plants based on a self-reported measure of domain knowledge. Participants with more domain knowledge were appropriately less confident when uncertainty information was provided. This suggests that people use AI and uncertainty information differently, such as an expert versus second opinion, depending on their level of domain knowledge. These results suggest that if presented appropriately, uncertainty information can potentially decrease overconfidence that is induced by using AI recommendations.Keywords: Overconfidenceartificial intelligenceuncertaintyhuman-AI teamsrisk communication AcknowledgmentsWe thank Cihan Dagli, Krista Lentine, Mark Schnitzler, and Henry Randall for their insights on the design of AI decision support systems.Disclosure statementThe authors report that there are no competing interests to declare.Additional informationFundingThis work was supported by a National Science Foundation Award #2026324.","PeriodicalId":16975,"journal":{"name":"Journal of Risk Research","volume":"123 1","pages":"0"},"PeriodicalIF":2.4000,"publicationDate":"2023-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Risk Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/13669877.2023.2259406","RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"SOCIAL SCIENCES, INTERDISCIPLINARY","Score":null,"Total":0}
Citations: 0
Abstract
The use of Artificial Intelligence (AI) decision support is increasing in high-stakes contexts, such as healthcare, defense, and finance. Uncertainty information may help users better leverage AI predictions, especially when combined with their domain knowledge. We conducted a human-subjects experiment with an online sample to examine the effects of presenting uncertainty information alongside AI recommendations. The experimental stimuli and task, which included identifying plant and animal images, were drawn from an existing image-recognition deep learning model, a popular approach to AI. The uncertainty information consisted of predicted probabilities that each label was the true label, presented both numerically and visually. In the study, we tested the effect of AI recommendations in a within-subject comparison and of uncertainty information in a between-subject comparison. The results suggest that AI recommendations increased both participants' accuracy and confidence. Further, providing uncertainty information significantly increased accuracy but not confidence, suggesting that it may be effective for reducing overconfidence. In this task, participants tended to have higher domain knowledge for animals than for plants, based on a self-reported measure of domain knowledge. Participants with more domain knowledge were appropriately less confident when uncertainty information was provided. This suggests that people use AI and uncertainty information differently, such as treating the AI as an expert versus a second opinion, depending on their level of domain knowledge. These results suggest that, if presented appropriately, uncertainty information can decrease the overconfidence induced by using AI recommendations.

Keywords: overconfidence; artificial intelligence; uncertainty; human-AI teams; risk communication

Acknowledgments
We thank Cihan Dagli, Krista Lentine, Mark Schnitzler, and Henry Randall for their insights on the design of AI decision support systems.

Disclosure statement
The authors report that there are no competing interests to declare.

Funding
This work was supported by National Science Foundation Award #2026324.
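The abstract describes the uncertainty information as predicted probabilities that each candidate label is the true label, shown numerically and visually. As a rough illustration only (this is not the authors' actual system; the labels and model scores below are invented), the following minimal Python sketch shows how an image classifier's raw scores are typically converted into such label probabilities via a softmax, yielding the kind of numeric uncertainty display a participant might see alongside the AI's top recommendation:

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert raw classifier scores (logits) into probabilities that sum to 1."""
    shifted = logits - logits.max()  # subtract max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

# Hypothetical raw scores from an image-recognition model for one animal photo.
labels = ["monarch butterfly", "viceroy butterfly", "painted lady", "moth"]
logits = np.array([3.1, 2.4, 0.9, -0.5])

probs = softmax(logits)

# Display each label with its predicted probability, highest first.
# The top entry corresponds to the AI's recommendation; the full list
# is the numeric uncertainty information described in the abstract.
for label, p in sorted(zip(labels, probs), key=lambda pair: -pair[1]):
    print(f"{label:20s} {p:6.1%}")
```

In a display like this, a participant with relevant domain knowledge can weigh a confident prediction (a sharply peaked distribution) differently from an uncertain one (probabilities spread across several labels).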
About the journal:
The Journal of Risk Research is an international journal that publishes peer-reviewed theoretical and empirical research articles in the risk field, spanning the social, physical, and health sciences and engineering, as well as articles on decision making, regulation, and policy issues in all disciplines. Articles are published in English. The main aims of the Journal of Risk Research are to stimulate intellectual debate, to promote better risk management practices, and to contribute to the development of risk management methodologies. The Journal of Risk Research is the official journal of the Society for Risk Analysis Europe and the Society for Risk Analysis Japan.