Examining the impact of personalization and carefulness in AI-generated health advice: Trust, adoption, and insights in online healthcare consultations experiments

IF 10.1 · CAS Tier 1 (Sociology) · Q1 SOCIAL ISSUES · Technology in Society · Pub Date: 2024-10-12 · DOI: 10.1016/j.techsoc.2024.102726
Hongyi Qin, Yifan Zhu, Yan Jiang, Siqi Luo, Cui Huang
Citations: 0

Abstract

Artificial intelligence (AI) technologies, exemplified by health chatbots, are transforming the healthcare industry. Their widespread application has the potential to enhance decision-making efficiency, improve the quality of healthcare services, and reduce medical costs. While there is ongoing discussion about the opportunities and challenges brought by AI, more needs to be known about the public's attitude towards its use in the healthcare domain. Understanding public attitudes can help policymakers better grasp their needs and involve them in making decisions that benefit both technological development and social welfare. Therefore, this study presents evidence from two between-subjects experiments. This study aims to compare the public's adoption and trust levels in health advice provided by human vs. AI doctors and explore the potential effects of personalization and carefulness on the public's attitudes. Experimental designs adopt a trust-centered, cognitively and emotionally balanced perspective to study the public's intention to adopt AI. In Experiment 1, the experimental conditions involve the types of decision-makers providing online consultation advice, either AI or human doctors. In Experiment 2, the experimental conditions involve varying levels of perceived personalization and carefulness (high vs. low). A total of 734 participants took part in the study. They were randomly assigned to one of the intervention conditions and responded to manipulation checks after reading the materials. Using a seven-point Likert-type scale, participants rated their cognitive and emotional trust levels and intention to adopt the advice. Partial Least Squares Structural Equation Modeling (PLS-SEM) is conducted to estimate the proposed theoretical perspective. 
Qualitative interviews on both real-world and AI-generated treatment recommendations further enriched the understanding of public perceptions. The results show that AI-generated advice is generally slightly less trusted and adopted by the public. However, a noticeable inclination towards AI-generated advice emerges when AI demonstrates proficiency in understanding individuals' health conditions and providing empathetic consultations. Further analyses confirm the mediating influence of emotional trust between cognitive trust and adoption intention. These findings provide deeper insights into the process of adoption and trust formation. Moreover, they offer guidance to digital healthcare providers, empowering them with the knowledge to co-design AI implementation strategies that cater to the public's expectations.
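The reported mediation (emotional trust mediating between cognitive trust and adoption intention) can be illustrated with a simple regression-based mediation sketch. This is a simplification of the paper's PLS-SEM, not a reproduction of it, and all data, variable names, and effect sizes below are hypothetical, simulated for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 7-point Likert-style responses for n = 734 participants
# (the study's actual data are not public; effect sizes are invented).
n = 734
cognitive_trust = rng.integers(1, 8, n).astype(float)
emotional_trust = np.clip(0.6 * cognitive_trust + rng.normal(0, 1, n), 1, 7)
adoption = np.clip(0.3 * cognitive_trust + 0.5 * emotional_trust
                   + rng.normal(0, 1, n), 1, 7)

def ols_coefs(y, X):
    """OLS coefficients for y ~ X with an intercept prepended."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta  # [intercept, slope_1, slope_2, ...]

# Path a: cognitive trust -> emotional trust
a = ols_coefs(emotional_trust, cognitive_trust[:, None])[1]
# Path b: emotional trust -> adoption, controlling for cognitive trust
b = ols_coefs(adoption, np.column_stack([emotional_trust,
                                         cognitive_trust]))[1]
indirect = a * b  # indirect (mediated) effect

# Percentile bootstrap confidence interval for the indirect effect
boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)
    a_s = ols_coefs(emotional_trust[i], cognitive_trust[i][:, None])[1]
    b_s = ols_coefs(adoption[i], np.column_stack([emotional_trust[i],
                                                  cognitive_trust[i]]))[1]
    boot.append(a_s * b_s)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A bootstrap interval excluding zero is the usual evidence for mediation in this regression framing; PLS-SEM estimates the same paths within a latent-variable measurement model rather than on single observed scores.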
Source journal: Technology in Society
CiteScore: 17.90
Self-citation rate: 14.10%
Articles published: 316
Review time: 60 days
Journal introduction: Technology in Society is a global journal dedicated to fostering discourse at the crossroads of technological change and the social, economic, business, and philosophical transformation of our world. The journal aims to provide scholarly contributions that empower decision-makers to thoughtfully and intentionally navigate the decisions shaping this dynamic landscape. A common thread across these fields is the role of technology in society, influencing economic, political, and cultural dynamics. Scholarly work in Technology in Society delves into the social forces shaping technological decisions and the societal choices regarding technology use. This encompasses scholarly and theoretical approaches (history and philosophy of science and technology, technology forecasting, economic growth, and policy, ethics), applied approaches (business innovation, technology management, legal and engineering), and developmental perspectives (technology transfer, technology assessment, and economic development). Detailed information about the journal's aims and scope on specific topics can be found in Technology in Society Briefings, accessible via our Special Issues and Article Collections.
Latest articles in this journal:
- Modeling ICT adoption and electricity consumption in emerging digital economies: Insights from the West African Region
- Artificial Intelligence: Intensifying or mitigating unemployment?
- Technology shock of ChatGPT, social attention and firm value: Evidence from China
- Exploring determinants influencing artificial intelligence adoption, reference to diffusion of innovation theory
- Advanced cryopreservation as an emergent and convergent technological platform