To engage or not to engage with AI for critical judgments: How professionals deal with opacity when using AI for medical diagnosis

Sarah Lebovitz, Hila Lifshitz-Assaf, N. Levina
DOI: 10.1287/orsc.2021.1549
Published: 2021-10-05 (Journal Article)
Citations: 46

Abstract

Artificial intelligence (AI) technologies promise to transform how professionals conduct knowledge work by augmenting their capabilities for making professional judgments. We know little, however, about how human-AI augmentation takes place in practice. Yet, gaining this understanding is particularly important when professionals use AI tools to form judgments on critical decisions. We conducted an in-depth field study in a major U.S. hospital where AI tools were used in three departments by diagnostic radiologists making breast cancer, lung cancer, and bone age determinations. The study illustrates the hindering effects of opacity that professionals experienced when using AI tools and explores how these professionals grappled with it in practice. In all three departments, this opacity resulted in professionals experiencing increased uncertainty because AI tool results often diverged from their initial judgment without providing underlying reasoning. Only in one department (of the three) did professionals consistently incorporate AI results into their final judgments, achieving what we call engaged augmentation. These professionals invested in AI interrogation practices—practices enacted by human experts to relate their own knowledge claims to AI knowledge claims. Professionals in the other two departments did not enact such practices and did not incorporate AI inputs into their final decisions, which we call unengaged “augmentation.” Our study unpacks the challenges involved in augmenting professional judgment with powerful, yet opaque, technologies and contributes to literature on AI adoption in knowledge work.