Title: Physician Adoption of AI Assistant
Authors: Ting Hou, Meng Li, Y. Tan, Huazhong Zhao
Journal: Manufacturing & Service Operations Management
Publication date: 2024-07-17
DOI: 10.1287/msom.2023.0093 (https://doi.org/10.1287/msom.2023.0093)
Abstract
Problem definition: Artificial intelligence (AI) assistants—software agents that perform tasks or services for individuals—are among the most promising AI applications. However, little is known about the adoption of AI assistants by service providers (here, physicians) in a real-world healthcare setting. In this paper, we investigate the impact of AI smartness (i.e., whether the AI assistant is powered by machine learning intelligence) and of AI transparency (i.e., whether physicians are informed about the AI assistant).

Methodology/results: We collaborate with a leading healthcare platform to run a field experiment comparing physicians' adoption behavior—adoption rate and adoption timing—of smart and automated AI assistants under transparent and non-transparent conditions. We find that smartness increases the adoption rate and shortens the adoption timing, whereas transparency only shortens the adoption timing. Moreover, the impact of AI transparency on the adoption rate is contingent on the smartness of the AI assistant: transparency increases the adoption rate only when the AI assistant is not equipped with smart algorithms, and fails to do so when the AI assistant is smart.

Managerial implications: Our study can guide platforms in designing their AI strategies. Platforms should improve the smartness of their AI assistants; if such an improvement is too costly, they should make the AI assistant transparent, especially when it is not smart.

Funding: This research was supported by a Behavioral Research Assistance Grant from the C. T. Bauer College of Business, University of Houston. H. Zhao acknowledges support from the Hong Kong General Research Fund [Grant 9043593]. Y. (R.) Tan acknowledges generous support from CEIBS Research [Grant AG24QCS].

Supplemental material: The online appendix is available at https://doi.org/10.1287/msom.2023.0093.