Perceptions of artificial intelligence system's aptitude to judge morality and competence amidst the rise of Chatbots.

Cognitive Research: Principles and Implications · Impact Factor 3.4 · CAS Tier 2 (Psychology) · JCR Q1 (Psychology, Experimental) · Pub Date: 2024-07-18 · DOI: 10.1186/s41235-024-00573-7
Manuel Oliveira, Justus Brands, Judith Mashudi, Baptist Liefooghe, Ruud Hortensius
{"title":"Perceptions of artificial intelligence system's aptitude to judge morality and competence amidst the rise of Chatbots.","authors":"Manuel Oliveira, Justus Brands, Judith Mashudi, Baptist Liefooghe, Ruud Hortensius","doi":"10.1186/s41235-024-00573-7","DOIUrl":null,"url":null,"abstract":"<p><p>This paper examines how humans judge the capabilities of artificial intelligence (AI) to evaluate human attributes, specifically focusing on two key dimensions of human social evaluation: morality and competence. Furthermore, it investigates the impact of exposure to advanced Large Language Models on these perceptions. In three studies (combined N = 200), we tested the hypothesis that people will find it less plausible that AI is capable of judging the morality conveyed by a behavior compared to judging its competence. Participants estimated the plausibility of AI origin for a set of written impressions of positive and negative behaviors related to morality and competence. Studies 1 and 3 supported our hypothesis that people would be more inclined to attribute AI origin to competence-related impressions compared to morality-related ones. In Study 2, we found this effect only for impressions of positive behaviors. Additional exploratory analyses clarified that the differentiation between the AI origin of competence and morality judgments persisted throughout the first half year after the public launch of popular AI chatbot (i.e., ChatGPT) and could not be explained by participants' general attitudes toward AI, or the actual source of the impressions (i.e., AI or human). These findings suggest an enduring belief that AI is less adept at assessing the morality compared to the competence of human behavior, even as AI capabilities continued to advance.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"9 1","pages":"47"},"PeriodicalIF":3.4000,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11255178/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Research-Principles and Implications","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1186/s41235-024-00573-7","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
引用次数: 0

Abstract

This paper examines how humans judge the capabilities of artificial intelligence (AI) to evaluate human attributes, specifically focusing on two key dimensions of human social evaluation: morality and competence. Furthermore, it investigates the impact of exposure to advanced Large Language Models on these perceptions. In three studies (combined N = 200), we tested the hypothesis that people find it less plausible that AI is capable of judging the morality conveyed by a behavior than of judging its competence. Participants estimated the plausibility of AI origin for a set of written impressions of positive and negative behaviors related to morality and competence. Studies 1 and 3 supported our hypothesis that people would be more inclined to attribute AI origin to competence-related impressions than to morality-related ones. In Study 2, we found this effect only for impressions of positive behaviors. Additional exploratory analyses clarified that the differentiation between the AI origin of competence and morality judgments persisted throughout the first half year after the public launch of a popular AI chatbot (i.e., ChatGPT) and could not be explained by participants' general attitudes toward AI or by the actual source of the impressions (i.e., AI or human). These findings suggest an enduring belief that AI is less adept at assessing the morality than the competence of human behavior, even as AI capabilities continue to advance.

Source journal: Cognitive Research: Principles and Implications · CiteScore: 6.80 · Self-citation rate: 7.30% · Articles per year: 96 · Review time: 25 weeks