Combining uncertainty information with AI recommendations supports calibration with domain knowledge

IF 2.4 | JCR Q1 (Social Sciences, Interdisciplinary) | CAS Region 4 (Management) | Journal of Risk Research | Pub Date: 2023-10-13 | DOI: 10.1080/13669877.2023.2259406
Harishankar Vasudevanallur Subramanian, Casey Canfield, Daniel B. Shank, Matthew Kinnison
Citations: 0

Abstract

The use of Artificial Intelligence (AI) decision support is increasing in high-stakes contexts, such as healthcare, defense, and finance. Uncertainty information may help users better leverage AI predictions, especially when combined with their domain knowledge. We conducted a human-subject experiment with an online sample to examine the effects of presenting uncertainty information alongside AI recommendations. The experimental stimuli and task, which included identifying plant and animal images, were drawn from an existing image-recognition deep learning model, a popular approach to AI. The uncertainty information consisted of predicted probabilities for whether each label was the true label, presented both numerically and visually. In the study, we tested the effect of AI recommendations in a within-subject comparison and of uncertainty information in a between-subject comparison. The results suggest that AI recommendations increased both participants' accuracy and confidence. Further, providing uncertainty information significantly increased accuracy but not confidence, suggesting that it may be effective for reducing overconfidence. In this task, participants tended to have higher domain knowledge for animals than plants, based on a self-reported measure of domain knowledge. Participants with more domain knowledge were appropriately less confident when uncertainty information was provided. This suggests that people use AI and uncertainty information differently, such as an expert versus a second opinion, depending on their level of domain knowledge. These results suggest that, if presented appropriately, uncertainty information can potentially decrease the overconfidence induced by using AI recommendations.

Keywords: overconfidence; artificial intelligence; uncertainty; human-AI teams; risk communication

Acknowledgments: We thank Cihan Dagli, Krista Lentine, Mark Schnitzler, and Henry Randall for their insights on the design of AI decision support systems.

Disclosure statement: The authors report that there are no competing interests to declare.

Funding: This work was supported by National Science Foundation Award #2026324.
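The abstract describes uncertainty information as per-label predicted probabilities shown "numerically and visually." The paper does not specify an implementation, but the idea can be sketched as follows: a classifier's raw scores are converted to probabilities with a softmax, and the top-ranked labels are displayed with both a percentage and a simple bar. All labels, scores, and function names below are hypothetical illustrations, not the authors' actual stimuli.

```python
import math

def softmax(logits):
    """Convert raw model scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def present_uncertainty(labels, logits, top_k=3, width=20):
    """Render the top-k predicted probabilities numerically and visually,
    mirroring the combined numeric/visual presentation the study describes."""
    probs = softmax(logits)
    ranked = sorted(zip(labels, probs), key=lambda lp: lp[1], reverse=True)
    lines = []
    for label, p in ranked[:top_k]:
        bar = "#" * round(p * width)  # crude text "bar chart"
        lines.append(f"{label:<12} {p:6.1%} {bar}")
    return "\n".join(lines)

# Hypothetical logits for an animal-image classifier
labels = ["red fox", "coyote", "gray wolf", "dog"]
logits = [4.2, 2.9, 1.1, 0.3]
print(present_uncertainty(labels, logits))
```

A display like this lets a user with domain knowledge (e.g., someone who can distinguish a fox from a coyote) weigh the model's second-ranked option, which is the calibration behavior the study observed.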
Source journal: Journal of Risk Research (Social Sciences, Interdisciplinary)
CiteScore: 12.20
Self-citation rate: 5.90%
Articles per year: 44
Journal description: The Journal of Risk Research is an international journal that publishes peer-reviewed theoretical and empirical research articles within the risk field from the areas of social, physical and health sciences and engineering, as well as articles related to decision making, regulation and policy issues in all disciplines. Articles are published in English. The main aims of the Journal of Risk Research are to stimulate intellectual debate, to promote better risk management practices and to contribute to the development of risk management methodologies. The Journal of Risk Research is the official journal of the Society for Risk Analysis Europe and the Society for Risk Analysis Japan.