To err is human: Bias salience can help overcome resistance to medical AI

Mathew S. Isaac, Rebecca Jen-Hui Wang, Lucy E. Napper, Jessecae K. Marsh

DOI: 10.1016/j.chb.2024.108402
Journal: Computers in Human Behavior, Volume 161, Article 108402 (Q1, Psychology, Experimental)
Publication date: 2024-08-14 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S074756322400270X
Citations: 0
Abstract
Prior research has shown that many individuals exhibit an aversion to algorithms and are resistant to the use of artificial intelligence (AI) in healthcare. In the present research, we show that an intervention that increases the salience of bias in decision making—either in general or specifically with respect to gender or age—makes individuals relatively more receptive to medical AI. This increased receptiveness to AI occurs because bias is perceived to be a fundamentally human shortcoming. As such, when the prospect of bias is made salient, perceptions of AI integrity—defined as the perceived fairness and trustworthiness of an AI agent relative to a human counterpart—are enhanced.
About the journal
Computers in Human Behavior is a scholarly journal that explores the psychological aspects of computer use. It publishes original theoretical works, research reports, literature reviews, and software and book reviews. The journal examines both the use of computers in psychology, psychiatry, and related fields, and the psychological impact of computer use on individuals, groups, and society. Articles address topics such as professional practice, training, research, human development, learning, cognition, personality, and social interaction. Its focus is on human interaction with computers, treating the computer as a medium through which human behaviors are shaped and expressed. Professionals interested in the psychological aspects of computer use will find the journal valuable, even if they have limited knowledge of computers.