To err is human: Bias salience can help overcome resistance to medical AI

Computers in Human Behavior · IF 9.0 · JCR Q1 (Psychology, Experimental) · CAS Zone 1 (Psychology) · Pub Date: 2024-08-14 · DOI: 10.1016/j.chb.2024.108402
Citations: 0

Abstract


Prior research has shown that many individuals exhibit an aversion to algorithms and are resistant to the use of artificial intelligence (AI) in healthcare. In the present research, we show that an intervention that increases the salience of bias in decision making—either in general or specifically with respect to gender or age—makes individuals relatively more receptive to medical AI. This increased receptiveness to AI occurs because bias is perceived to be a fundamentally human shortcoming. As such, when the prospect of bias is made salient, perceptions of AI integrity—defined as the perceived fairness and trustworthiness of an AI agent relative to a human counterpart—are enhanced.

Journal metrics: CiteScore 19.10 · Self-citation rate 4.00% · Annual articles 381 · Review time 40 days
About the journal: Computers in Human Behavior is a scholarly journal that explores the psychological aspects of computer use. It publishes original theoretical work, research reports, literature reviews, and software and book reviews. The journal examines both the use of computers in psychology, psychiatry, and related fields, and the psychological impact of computer use on individuals, groups, and society. Articles cover topics such as professional practice, training, research, human development, learning, cognition, personality, and social interaction. Its focus is on human interaction with computers, treating the computer as a medium through which human behaviors are shaped and expressed. Professionals interested in the psychological aspects of computer use will find the journal valuable, even with limited knowledge of computers.