Mike Seymour, Lingyao (Ivy) Yuan, Kai Riemer, Alan R. Dennis
Journal: Information Systems Research (INFORMS)
DOI: 10.1287/isre.2022.0203
Published: 2024-09-02 (Journal Article)
Impact factor: 5.0; JCR Q1, Information Science & Library Science
Citations: 0
Abstract
Less Artificial, More Intelligent: Understanding Affinity, Trustworthiness, and Preference for Digital Humans
Practice- and policy-oriented abstract: Companies are increasingly deploying highly realistic digital human agents (DHAs) controlled by advanced AI for online customer service, a task typically handled by chatbots. We conducted four experiments to assess users’ perceptions of DHAs (trustworthiness, affinity, and willingness to work with them) and users’ behaviors while using DHAs, combining quantitative surveys, qualitative interviews, direct observations, and neurophysiological measurements. Our studies involved four DHAs: two commercial products (found to be immature) and two future-focused ones (where participants believed that human-controlled DHAs were AI-controlled). In the first study, comparing perceptions of a DHA, a chatbot, and a human agent based on written descriptions revealed few differences between the DHA and the chatbot. The second study, involving actual use of a commercial DHA, showed that participants found it uncanny, robotic, or difficult to converse with. The third and fourth studies used a “Wizard of Oz” design, in which participants believed a human-controlled DHA was AI-driven. Results showed a preference for human agents via video conferencing, but no significant differences between DHAs and human agents when visual fidelity was controlled. Despite their communication issues, current DHAs trigger more affinity than chatbots. When DHAs match human communication abilities, they are perceived similarly to human agents for simple tasks. This research also suggests DHAs may alleviate algorithm aversion.
Journal description:
ISR (Information Systems Research) is a journal of INFORMS, the Institute for Operations Research and the Management Sciences. It is a leading international journal of theory, research, and intellectual development, focused on information systems in organizations, institutions, the economy, and society.