
Latest publications in Computers in Human Behavior: Artificial Humans

Be careful what you explain: Benefits and costs of explainable AI in a simulated medical task
Pub Date : 2023-08-01 DOI: 10.1016/j.chbah.2023.100021
Tobias Rieger , Dietrich Manzey , Benigna Meussling , Linda Onnasch , Eileen Roesler

We investigated the impact of explainability instructions with respect to system limitations on trust behavior and trust attitude when using an artificial intelligence (AI) support agent to perform a simulated medical task. In an online experiment (N = 128), participants performed a visual estimation task in a simulated medical setting (i.e., estimate the percentage of bacteria in a visual stimulus). All participants were supported by an AI that gave perfect recommendations for all but one color of bacteria (i.e., error-prone color with 50% reliability). We manipulated between-subjects whether participants knew about the error-prone color (XAI condition) or not (nonXAI condition). The analyses revealed that participants showed higher trust behavior (i.e., lower deviation from the AI recommendation) for the non-error-prone trials in the XAI condition. Moreover, participants showed lower trust behavior for the error-prone color in the XAI condition than in the nonXAI condition. However, this behavioral adaptation only applied to the subset of error-prone trials in which the AI gave correct recommendations, and not to the actual erroneous trials. Thus, designing explainable AI systems can also come with inadequate behavioral adaptations, as explainability was associated with benefits (i.e., more adequate behavior in non-error-prone trials), but also costs (stronger changes to the AI recommendations in correct error-prone trials).
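The reliability manipulation at the heart of this design is easy to make concrete. Below is a minimal, hypothetical sketch (not the authors' materials) of an AI aid that is perfectly accurate for every bacteria color except one error-prone color, where it is correct only 50% of the time, with trust behavior operationalized, as in the abstract, as deviation from the recommendation. The color names, the ±30-point error magnitude, and the function names are illustrative assumptions.

```python
import random

def ai_recommendation(true_pct, color, error_prone_color="red",
                      reliability=0.5, rng=None):
    """Return the AI's estimated bacteria percentage for one trial.

    The aid is perfectly accurate for every color except the error-prone
    one, where it is correct only with probability `reliability` (the 50%
    figure from the abstract). The +/-30-point error size is an
    illustrative assumption, not taken from the paper.
    """
    rng = rng or random.Random()
    if color != error_prone_color or rng.random() < reliability:
        return true_pct  # perfect recommendation
    # Erroneous trial: recommendation is off by 30 points, clipped to 0-100.
    return max(0, min(100, true_pct + rng.choice([-30, 30])))

def trust_behavior(participant_estimate, ai_estimate):
    """Lower absolute deviation from the AI recommendation = higher trust behavior."""
    return abs(participant_estimate - ai_estimate)

# Non-error-prone color: the recommendation always matches the ground truth.
assert ai_recommendation(40, "green") == 40
```

Under this framing, the XAI manipulation amounts to whether participants are told, before the task, which color plays the `error_prone_color` role.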

Citations: 0
Choosing between human and algorithmic advisors: The role of responsibility sharing
Pub Date : 2023-08-01 DOI: 10.1016/j.chbah.2023.100009
Lior Gazit , Ofer Arazy , Uri Hertz

Algorithms are increasingly employed to provide highly accurate advice and recommendations across domains, yet in many cases people tend to prefer human advisors. Studies to date have focused mainly on the advisor’s perceived competence and the outcome of the advice as determinants of advice takers’ willingness to accept advice from human and algorithmic advisors and to arbitrate between them. Here we examine the role of another factor that is not directly related to the outcome: the advice taker’s ability to psychologically offload responsibility for the decision’s potential consequences. Building on studies showing differences in responsibility attribution between human and algorithmic advisors, we hypothesize that, controlling for the effects of the advisor’s competence, the advisor's perceived responsibility is an important factor affecting advice takers’ choice between human and algorithmic advisors. In an experiment in two domains, Medical and Financial (N = 806), participants were asked to rate advisors’ perceived responsibility and choose between a human and algorithmic advisor. Our results show that human advisors were perceived as more responsible than algorithmic advisors and most importantly, that the perception of the advisor’s responsibility affected the preference for a human advisor over an algorithmic counterpart. Furthermore, we found that an experimental manipulation that impeded advice takers’ ability to offload responsibility affected the extent to which human, but not algorithmic, advisors were perceived as responsible. Together, our findings highlight the role of responsibility sharing in influencing algorithm aversion.

Citations: 0
Obedience to robot. Humanoid robot as an experimenter in Milgram paradigm
Pub Date : 2023-08-01 DOI: 10.1016/j.chbah.2023.100010
Tomasz Grzyb, Konrad Maj, Dariusz Dolinski

Humans will increasingly be influenced by social robots. It still seems unclear whether we will accept them as authorities and whether we will give in to them without reflection, as in the case of human authorities in the classic Stanley Milgram experiments (1963, 1965, and 1974). The demonstration by Stanley Milgram of the prevailing tendency in people to display extreme obedience to authority figures was one of the most important discoveries in the field of social psychology. The authors of this article used a modified Milgram research paradigm (the obedience lite procedure) to compare obedience to a person giving instructions to electrocute someone sitting in an adjacent room with obedience to a robot issuing similar instructions. Twenty individuals were tested in both cases. As it turned out, the level of obedience was very high in both situations, and the nature of the authority figure issuing instructions (a professor vs. a robot) did not have an impact on the reactions of the subjects.

Citations: 0
Reading between the lines: Automatic inference of self-assessed personality traits from dyadic social chats
Pub Date : 2023-08-01 DOI: 10.1016/j.chbah.2023.100026
Abeer Buker , Alessandro Vinciarelli

Interaction through text-based platforms (e.g., WhatsApp) is a common everyday activity, typically referred to as “chatting”. However, the computing community has paid relatively little attention to the automatic analysis of social and psychological phenomena taking place during chats. This article proposes experiments aimed at the automatic inference of self-assessed personality traits from data collected during online dyadic chats. The proposed approach is multimodal and takes into account the two main components of chat-based interactions, namely what people type (the text) and how they type it (the keystroke dynamics). To the best of our knowledge, this is one of the very first works to include keystroke dynamics in an approach for the inference of personality traits. The experiments involved 60 people, and the results suggest that it is possible to recognize whether someone is below the median or not along the Big-Five traits. Such a result suggests that personality leaves traces in both what people type and how they type it, the two types of information the approach takes into account.
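The “how people type” modality can be made concrete with a few standard keystroke-dynamics features: key hold times, flight times between consecutive keys, and typing rate. The sketch below is purely illustrative; the event format, feature names, and numbers are assumptions, not the paper's actual feature set.

```python
from statistics import mean, stdev

def keystroke_features(events):
    """Compute simple keystroke-dynamics features.

    `events` is a list of (key, press_time, release_time) tuples with
    timestamps in seconds. Hold time = release - press of one key;
    flight time = press of the next key - release of the previous one.
    """
    holds = [release - press for _, press, release in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return {
        "mean_hold": mean(holds),
        "sd_hold": stdev(holds) if len(holds) > 1 else 0.0,
        "mean_flight": mean(flights) if flights else 0.0,
        # Keys typed per second, over the whole typing episode.
        "chars_per_sec": len(events) / (events[-1][2] - events[0][1]),
    }

# Toy example: three keystrokes with made-up timestamps.
events = [("h", 0.00, 0.08), ("i", 0.25, 0.31), ("!", 0.60, 0.66)]
feats = keystroke_features(events)
```

Feature vectors of this kind, concatenated with text features, could then feed any standard classifier for the below/above-median prediction the abstract describes.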

Citations: 0
ChatGPT in education: Methods, potentials, and limitations
Pub Date : 2023-08-01 DOI: 10.1016/j.chbah.2023.100022
Bahar Memarian, Tenzin Doleck

ChatGPT has been under the scrutiny of public opinion including in education. Yet, less work has been done to analyze studies conducted on ChatGPT in educational contexts. This review paper examines where ChatGPT is employed in educational literature and areas of potential, challenges, and future work. A total of 63 publications were included in this review using the general framework of open and axial coding. We coded and summarized the methods, and reported potentials, limitations, and future work of each study. Thematic analysis of reviewed studies revealed that most extant studies in the education literature explore ChatGPT through a commentary and non-empirical lens. The potentials of ChatGPT include but are not limited to the development of personalized and complex learning, specific teaching and learning activities, assessments, asynchronous communication, feedback, accuracy in research, personas, and task delegation and cognitive offload. Several areas of challenge that ChatGPT is or will be facing in education are also shared. Examples include but are not limited to plagiarism deception, misuse or lack of learning, accountability, and privacy. There are both concerns and optimism about the use of ChatGPT in education, yet the most pressing need is to ensure student learning and academic integrity are not sacrificed. Our review provides a summary of studies conducted on ChatGPT in education literature. We further provide a comprehensive and unique discussion on future considerations for ChatGPT in education.

Citations: 1
Exploring the superiority of human expertise over algorithmic expertise in the cognitive and metacognitive processes of decision-making among decision-makers.
Pub Date : 2023-08-01 DOI: 10.1016/j.chbah.2023.100023
Nicolas Spatola

Investigating the role of human vs algorithmic expertise on decision-making processes is crucial, especially in the public sector where it can impact millions of people. To better comprehend the underlying cognitive and metacognitive processes, we conducted an experiment to manipulate the influence of human and algorithmic agents on decision-makers' confidence levels. We also studied the resulting impact on decision outcomes and metacognitive awareness. By exploring a theoretical model of serial and interaction effects, we were able to manipulate the complexity and uncertainty of initial data and analyze the role of confidence in decision-making facing human or algorithmic expertise. Results showed that individuals tend to be more confident in their decision-making and less likely to revise their decisions when presented with consistent information. External expertise, whether from an expert or algorithmic analysis, can significantly impact decision outcomes, depending on whether it confirms or contradicts the initial decision. Also, human expertise proved to have a higher impact on decision outcomes than algorithmic expertise, which may demonstrate confirmation bias and other social processes that we further discuss. In conclusion, the study highlights the importance of adopting a holistic perspective in complex decision-making situations. Decision-makers must recognize their biases and the influence of external factors on their confidence and accountability.

Citations: 0
Conversational agents for Children's mental health and mental disorders: A scoping review
Pub Date : 2023-08-01 DOI: 10.1016/j.chbah.2023.100028
Rachael Martin, Sally Richmond
Citations: 0
“To comply or to react, that is the question:” the roles of humanness versus eeriness of AI-powered virtual influencers, loneliness, and threats to human identities in AI-driven digital transformation
Pub Date : 2023-08-01 DOI: 10.1016/j.chbah.2023.100011
S. Venus Jin

AI-powered virtual influencers play a variety of roles in emerging media environments. To test the diffusion of AI-powered virtual influencers among social media users and to examine antecedents, mediators, and moderators relevant to compliance with and reactance to virtual influencers, data were collected using two cross-sectional surveys (∑ N = 1623). Drawing on the Diffusion of Innovations theory, survey data from Study 1 (N1 = 987) provide preliminary descriptive statistics about US social media users' levels of awareness of, knowledge of, exposure to, and engagement with virtual influencers. Drawing from the theoretical frameworks of the Uncanny Valley Hypothesis and the CASA (Computers Are Social Actors) paradigm, Study 2 examines social media users' compliance with versus reactance to AI-powered virtual influencers. Survey data from Study 2 (N2 = 636) provide inferential statistics supporting the moderated serial mediation model that proposes (1) empathy and engagement with AI-powered virtual influencers mediate the effects of perceived humanness versus eeriness of virtual influencers on social media users' behavioral intention to purchase the products recommended by the virtual influencers (serial and total mediation effects) and (2) loneliness moderates the effects of humanness versus eeriness on empathy. Drawing from the theory of Psychological Reactance, Study 2 further reports the moderation effect of social media users' trait reactance and perceived threats to one's own human identity on the relationship between perceived eeriness and compliance with versus situational reactance to virtual influencers. Theoretical contributions to CASA research and the Uncanny Valley literature as well as managerial implications for AI-driven digital transformation in media industries and virtual influencer marketing are discussed.
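The mediation analyses in Study 2 rest on estimating an indirect effect as a product of path coefficients and testing it with a bootstrap confidence interval. As an illustrative sketch only, the snippet below implements a single-mediator version (humanness → empathy → purchase intention) with a percentile bootstrap on synthetic data; the paper's actual model is a moderated serial mediation, and the variable names, coefficients, and data here are all hypothetical.

```python
import random
from statistics import mean

def _center(v):
    m = mean(v)
    return [a - m for a in v]

def slope(x, y):
    """OLS slope of y on x (the a-path: mediator on predictor)."""
    xc, yc = _center(x), _center(y)
    return sum(a * b for a, b in zip(xc, yc)) / sum(a * a for a in xc)

def partial_slope(x, m, y):
    """Partial OLS slope of y on m controlling for x (the b-path),
    via the 2-predictor normal equations on centered data."""
    xc, mc, yc = _center(x), _center(m), _center(y)
    sxx = sum(a * a for a in xc)
    smm = sum(a * a for a in mc)
    sxm = sum(a * b for a, b in zip(xc, mc))
    sxy = sum(a * b for a, b in zip(xc, yc))
    smy = sum(a * b for a, b in zip(mc, yc))
    return (sxx * smy - sxm * sxy) / (sxx * smm - sxm * sxm)

def indirect_effect(x, m, y):
    """Indirect effect = a * b (product of the two path coefficients)."""
    return slope(x, m) * partial_slope(x, m, y)

def bootstrap_ci(x, m, y, n_boot=1000, seed=1):
    """95% percentile-bootstrap CI for the indirect effect."""
    rng = random.Random(seed)
    n = len(x)
    effs = []
    for _ in range(n_boot):
        ids = [rng.randrange(n) for _ in range(n)]
        effs.append(indirect_effect([x[i] for i in ids],
                                    [m[i] for i in ids],
                                    [y[i] for i in ids]))
    effs.sort()
    return effs[int(0.025 * n_boot)], effs[int(0.975 * n_boot)]

# Synthetic data with a true indirect effect of 0.6 * 0.5 = 0.3.
rng = random.Random(7)
humanness = [rng.gauss(0, 1) for _ in range(400)]
empathy = [0.6 * h + rng.gauss(0, 0.2) for h in humanness]
intention = [0.5 * e + rng.gauss(0, 0.2) for e in empathy]
est = indirect_effect(humanness, empathy, intention)
lo, hi = bootstrap_ci(humanness, empathy, intention)
```

A CI that excludes zero (here, `lo > 0`) is the usual bootstrap evidence for mediation; serial and moderated variants extend the same product-of-paths logic to longer chains and to interaction terms.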

引用次数: 0
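The moderated serial mediation model described in this abstract (humanness → empathy → engagement → purchase intention) is typically estimated with a chain of regressions, where the serial indirect effect is the product of the path coefficients a1·d21·b2. The sketch below simulates standardized data with hypothetical path values and recovers that product; all variable names and coefficients are illustrative, not the paper's data or its exact estimation procedure (which uses a moderated model).

```python
import random

def ols(X_rows, y):
    """Ordinary least squares via normal equations (Gaussian elimination).
    X_rows: predictor rows without intercept; returns [intercept, b1, b2, ...]."""
    A = [[1.0] + list(r) for r in X_rows]
    k = len(A[0])
    # Normal equations: (A'A) beta = A'y
    AtA = [[sum(A[i][p] * A[i][q] for i in range(len(A))) for q in range(k)]
           for p in range(k)]
    Aty = [sum(A[i][p] * y[i] for i in range(len(A))) for p in range(k)]
    # Forward elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(AtA[r][col]))
        AtA[col], AtA[piv] = AtA[piv], AtA[col]
        Aty[col], Aty[piv] = Aty[piv], Aty[col]
        for r in range(col + 1, k):
            f = AtA[r][col] / AtA[col][col]
            for c in range(col, k):
                AtA[r][c] -= f * AtA[col][c]
            Aty[r] -= f * Aty[col]
    # Back substitution
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (Aty[r] - sum(AtA[r][c] * beta[c]
                                for c in range(r + 1, k))) / AtA[r][r]
    return beta

random.seed(1)
n = 2000
# Hypothetical standardized variables mirroring the abstract's chain:
# X = perceived humanness (vs. eeriness), M1 = empathy, M2 = engagement,
# Y = purchase intention. True paths: a1 = 0.5, d21 = 0.4, b2 = 0.6.
X  = [random.gauss(0, 1) for _ in range(n)]
M1 = [0.5 * x + random.gauss(0, 1) for x in X]
M2 = [0.2 * x + 0.4 * m1 + random.gauss(0, 1) for x, m1 in zip(X, M1)]
Y  = [0.1 * x + 0.3 * m1 + 0.6 * m2 + random.gauss(0, 1)
      for x, m1, m2 in zip(X, M1, M2)]

a1  = ols([[x] for x in X], M1)[1]        # X -> M1
d21 = ols(list(zip(X, M1)), M2)[2]        # M1 -> M2, controlling for X
b2  = ols(list(zip(X, M1, M2)), Y)[3]     # M2 -> Y, controlling for X, M1
serial_indirect = a1 * d21 * b2           # should approach 0.5 * 0.4 * 0.6 = 0.12
print(round(serial_indirect, 3))
```

In practice the significance of such an indirect effect is assessed with a bootstrap confidence interval rather than a point estimate alone.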
Are social robots the solution for shortages in rehabilitation care? Assessing the acceptance of nurses and patients of a social robot
Pub Date : 2023-08-01 DOI: 10.1016/j.chbah.2023.100017
Marian Z.M. Hurmuz , Stephanie M. Jansen-Kosterink , Ina Flierman , Susanna del Signore , Gianluca Zia , Stefania del Signore , Behrouz Fard

Social robots are upcoming innovations in the healthcare sector. Currently, such robots are mostly used to entertain and accompany people, or to facilitate telepresence. Social robots have the potential to perform more value-added tasks within healthcare. The aim of our paper was therefore to study the acceptance of a social robot in a rehabilitation centre. This paper reports on three studies conducted with the Pepper robot. We first conducted an acceptance study in which patients (N = 6) and nurses (N = 10) performed different tasks with the robot and rated their acceptance of the robot at different time points. These participants were also interviewed afterwards to gather more qualitative data. The second study was a flash mob study in which patients (N = 23) could interact with the robot via a chatbot and complete a questionnaire. Afterwards, 15 patients completed a short evaluation questionnaire about the ease of use of the robot, their intention to use it, and possible new functionalities for a social robot. Finally, a Social Return on Investment analysis was conducted to assess the added value of the Pepper robot. Considering the findings from these three studies, we conclude that the use of the Pepper robot for health-related tasks in the context of a rehabilitation centre is not yet feasible, as major steps are needed before the Pepper robot can take over these health-related tasks.

Citations: 0
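The Social Return on Investment (SROI) analysis mentioned in this abstract reduces, at its core, to a ratio of discounted social value to the resources invested. A minimal sketch with purely illustrative figures (the abstract does not report the study's actual inputs, discount rate, or time horizon):

```python
def sroi_ratio(annual_benefits, costs, discount_rate):
    """Net-present-value SROI: discounted social value per unit invested.
    annual_benefits: monetized benefits per year; costs: total investment."""
    pv = sum(b / (1 + discount_rate) ** t
             for t, b in enumerate(annual_benefits, start=1))
    return pv / costs

# Hypothetical example: suppose the robot frees nursing time worth EUR 6,000
# per year over 3 years, against EUR 20,000 in purchase and upkeep costs,
# discounted at 3.5%.
ratio = sroi_ratio([6000, 6000, 6000], costs=20000, discount_rate=0.035)
print(round(ratio, 2))  # -> 0.84
```

A ratio below 1 would mean the monetized social value does not cover the investment, which, under these invented numbers, would be in line with the authors' conclusion that deployment is not yet feasible.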
Optimizing human-AI collaboration: Effects of motivation and accuracy information in AI-supported decision-making
Pub Date : 2023-08-01 DOI: 10.1016/j.chbah.2023.100015
Simon Eisbach , Markus Langer , Guido Hertel

Artificial intelligence (AI) systems increasingly support human decision-making in fields like medicine, management, and finance. However, such human-AI (HAI) collaboration is often less effective than AI systems alone. Moreover, efforts to make AI recommendations more transparent have failed to improve the decision quality of HAI collaborations. Based on dual-process theories of cognition, we hypothesized that suboptimal HAI collaboration is partly due to heuristic information processing by humans, creating a trust imbalance towards the AI system. In an online experiment with 337 participants, we investigated motivation and accuracy information as potential factors to induce more deliberate elaboration of AI recommendations, and thus improve HAI collaboration. Participants worked on a simulated personnel selection task and received recommendations from a simulated AI system. Participants' motivation was varied through gamification, and accuracy information through additional information from the AI system. Results indicate that both motivation and accuracy information positively influenced HAI performance, but in different ways. While high motivation primarily increased the use of high-quality recommendations, accuracy information improved the use of both low- and high-quality recommendations. However, combining high motivation and accuracy information did not yield additional improvement in HAI performance.

Citations: 0
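In judge-advisor studies like this one, "use" of a recommendation is commonly quantified as Weight of Advice (WOA): the fraction of the distance from one's initial estimate to the advice that the final estimate covers. The exact measure used in the paper may differ; this is a generic sketch of the standard metric:

```python
def weight_of_advice(initial, advice, final):
    """Weight of Advice: 0 = advice ignored, 1 = advice fully adopted.
    Standard judge-advisor measure; undefined when advice equals the
    initial estimate."""
    if advice == initial:
        raise ValueError("advice equals initial estimate; WOA undefined")
    return (final - initial) / (advice - initial)

# A rater first estimates 40%, the AI recommends 70%, the rater settles on 61%:
print(weight_of_advice(40, 70, 61))  # -> 0.7
```

Averaging WOA separately over high- and low-quality recommendations is one way to operationalize the differential "use" effects the abstract reports.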
Journal: Computers in Human Behavior: Artificial Humans