People judge others more harshly after talking to bots.

PNAS Nexus · IF 2.2 · Q2 (Multidisciplinary Sciences) · Pub Date: 2024-09-19 · eCollection Date: 2024-09-01 · DOI: 10.1093/pnasnexus/pgae397
Kian Siong Tey, Asaf Mazar, Geoff Tomaino, Angela L Duckworth, Lyle H Ungar
{"title":"People judge others more harshly after talking to bots.","authors":"Kian Siong Tey, Asaf Mazar, Geoff Tomaino, Angela L Duckworth, Lyle H Ungar","doi":"10.1093/pnasnexus/pgae397","DOIUrl":null,"url":null,"abstract":"<p><p>People now commonly interact with Artificial Intelligence (AI) agents. How do these interactions shape how humans perceive each other? In two preregistered studies (total <i>N</i> = 1,261), we show that people evaluate other humans more harshly after interacting with an AI (compared with an unrelated purported human). In Study 1, participants who worked on a creative task with AIs (versus purported humans) subsequently rated another purported human's work more negatively. Study 2 replicated this effect and demonstrated that the results hold even when participants believed their evaluation would not be shared with the purported human. Exploratory analyses of participants' conversations show that prior to their human evaluations they were more demanding, more instrumental and displayed less positive affect towards AIs (versus purported humans). These findings point to a potentially worrisome side effect of the exponential rise in human-AI interactions.</p>","PeriodicalId":74468,"journal":{"name":"PNAS nexus","volume":null,"pages":null},"PeriodicalIF":2.2000,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11421659/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"PNAS nexus","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/pnasnexus/pgae397","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/9/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Citations: 0

Abstract

People now commonly interact with Artificial Intelligence (AI) agents. How do these interactions shape how humans perceive each other? In two preregistered studies (total N = 1,261), we show that people evaluate other humans more harshly after interacting with an AI (compared with an unrelated purported human). In Study 1, participants who worked on a creative task with AIs (versus purported humans) subsequently rated another purported human's work more negatively. Study 2 replicated this effect and demonstrated that the results hold even when participants believed their evaluation would not be shared with the purported human. Exploratory analyses of participants' conversations show that prior to their human evaluations they were more demanding, more instrumental and displayed less positive affect towards AIs (versus purported humans). These findings point to a potentially worrisome side effect of the exponential rise in human-AI interactions.

Source journal: PNAS Nexus
CiteScore: 1.80
Self-citation rate: 0.00%
Latest articles in this journal:
Pollen foraging mediates exposure to dichotomous stressor syndromes in honey bees.
Affective polarization is uniformly distributed across American States.
Attraction to politically extreme users on social media.
Critical thinking and misinformation vulnerability: experimental evidence from Colombia.
Descriptive norms can "backfire" in hyper-polarized contexts.