Hongyi Qin, Yifan Zhu, Yan Jiang, Siqi Luo, Cui Huang
Journal: Technology in Society, Volume 79, Article 102726
DOI: 10.1016/j.techsoc.2024.102726
Publication date: 2024-10-12 (Journal Article)
Impact factor: 10.1; JCR: Q1 (Social Issues); CAS Region 1 (Sociology)
URL: https://www.sciencedirect.com/science/article/pii/S0160791X24002744
Examining the impact of personalization and carefulness in AI-generated health advice: Trust, adoption, and insights in online healthcare consultations experiments
Artificial intelligence (AI) technologies, exemplified by health chatbots, are transforming the healthcare industry. Their widespread application has the potential to enhance decision-making efficiency, improve the quality of healthcare services, and reduce medical costs. While the opportunities and challenges brought by AI are under ongoing discussion, more needs to be learned about the public's attitude towards its use in the healthcare domain. Understanding public attitudes can help policymakers better grasp the public's needs and involve them in making decisions that benefit both technological development and social welfare. This study therefore presents evidence from two between-subjects experiments, aiming to compare the public's adoption of and trust in health advice provided by human vs. AI doctors, and to explore the potential effects of personalization and carefulness on the public's attitudes. The experimental designs adopt a trust-centered, cognitively and emotionally balanced perspective on the public's intention to adopt AI. In Experiment 1, the experimental conditions vary the type of decision-maker providing online consultation advice: AI or human doctors. In Experiment 2, they vary the perceived levels of personalization and carefulness (high vs. low). A total of 734 participants took part in the study; each was randomly assigned to one of the intervention conditions and responded to manipulation checks after reading the materials. Using a seven-point Likert-type scale, participants rated their cognitive and emotional trust levels and their intention to adopt the advice. Partial Least Squares Structural Equation Modeling (PLS-SEM) was conducted to estimate the proposed theoretical model.
Qualitative interviews on both real-world and AI-generated treatment recommendations further enriched the understanding of public perceptions. The results show that AI-generated advice is generally slightly less trusted and adopted by the public. However, a noticeable inclination towards AI-generated advice emerges when the AI demonstrates proficiency in understanding individuals' health conditions and provides empathetic consultations. Further analyses confirm the mediating role of emotional trust between cognitive trust and adoption intention. These findings provide deeper insights into the process of adoption and trust formation. Moreover, they offer guidance to digital healthcare providers, empowering them to co-design AI implementation strategies that cater to the public's expectations.
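The mediation structure reported above (cognitive trust → emotional trust → adoption intention) can be illustrated with a minimal ordinary-least-squares sketch on entirely synthetic data. The paper itself uses PLS-SEM on real participant responses; every coefficient and variable below is invented for illustration only, and the sketch simply demonstrates the standard decomposition of a total effect into direct and indirect (mediated) parts.

```python
import random

# Illustrative sketch only: the study uses PLS-SEM; here a plain OLS
# mediation decomposition on *synthetic* Likert-like data shows the logic
# of "emotional trust mediates cognitive trust -> adoption intention".
random.seed(42)
n = 734  # matches the study's sample size; the data below does NOT

# Hypothetical data-generating process (all coefficients invented):
cog = [random.gauss(4.5, 1.2) for _ in range(n)]        # cognitive trust
emo = [0.6 * x + random.gauss(1.5, 0.8) for x in cog]   # emotional trust
adopt = [0.5 * m + 0.2 * x + random.gauss(1.0, 0.7)     # adoption intention
         for m, x in zip(emo, cog)]

def fit_ols(y, predictors):
    """Least-squares fit via normal equations; returns [intercept, b1, ...]."""
    m, cols = len(y), [[1.0] * len(y)] + predictors
    k = len(cols)
    A = [[sum(cols[i][t] * cols[j][t] for t in range(m)) for j in range(k)]
         for i in range(k)]
    rhs = [sum(cols[i][t] * y[t] for t in range(m)) for i in range(k)]
    for c in range(k):  # Gaussian elimination with partial pivoting
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p], rhs[c], rhs[p] = A[p], A[c], rhs[p], rhs[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            A[r] = [A[r][j] - f * A[c][j] for j in range(k)]
            rhs[r] -= f * rhs[c]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (rhs[r] - sum(A[r][j] * beta[j]
                                for j in range(r + 1, k))) / A[r][r]
    return beta

a = fit_ols(emo, [cog])[1]                 # path a: cognitive -> emotional trust
b_path = fit_ols(adopt, [cog, emo])[2]     # path b: emotional trust -> adoption
c_direct = fit_ols(adopt, [cog, emo])[1]   # direct effect c'
c_total = fit_ols(adopt, [cog])[1]         # total effect c

# For OLS with the same sample, total = direct + indirect exactly: c = c' + a*b
print(f"a={a:.3f}  b={b_path:.3f}  indirect={a * b_path:.3f}  "
      f"direct={c_direct:.3f}  total={c_total:.3f}")
```

A nonzero indirect effect `a * b` alongside a reduced direct effect `c'` is the pattern the paper's mediation finding describes; in practice the significance of the indirect path would be assessed with bootstrapping or, as in the paper, within the PLS-SEM framework.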
Journal overview:
Technology in Society is a global journal dedicated to fostering discourse at the crossroads of technological change and the social, economic, business, and philosophical transformation of our world. The journal aims to provide scholarly contributions that empower decision-makers to thoughtfully and intentionally navigate the decisions shaping this dynamic landscape. A common thread across these fields is the role of technology in society, influencing economic, political, and cultural dynamics. Scholarly work in Technology in Society delves into the social forces shaping technological decisions and the societal choices regarding technology use. This encompasses scholarly and theoretical approaches (history and philosophy of science and technology, technology forecasting, economic growth, and policy, ethics), applied approaches (business innovation, technology management, legal and engineering), and developmental perspectives (technology transfer, technology assessment, and economic development). Detailed information about the journal's aims and scope on specific topics can be found in Technology in Society Briefings, accessible via our Special Issues and Article Collections.