Confirmation bias in AI-assisted decision-making: AI triage recommendations congruent with expert judgments increase psychologist trust and recommendation acceptance
{"title":"Confirmation bias in AI-assisted decision-making: AI triage recommendations congruent with expert judgments increase psychologist trust and recommendation acceptance","authors":"Anna Bashkirova, Dario Krpan","doi":"10.1016/j.chbah.2024.100066","DOIUrl":null,"url":null,"abstract":"<div><p>The surging global demand for mental healthcare (MH) services has amplified the interest in utilizing AI-assisted technologies in critical MH components, including assessment and triage. However, while reducing practitioner burden through decision support is a priority in MH-AI integration, the impact of AI systems on practitioner decisions remains under-researched. This study is the first to investigate the interplay between practitioner judgments and AI recommendations in MH diagnostic decision-making. Using a between-subjects vignette design, the study deployed a mock AI system to provide information about patient triage and assessments to a sample of MH professionals and psychology students with a strong understanding of assessments and triage procedures. Findings showed that participants were more inclined to trust and accept AI recommendations when they aligned with their initial diagnoses and professional intuition. Moreover, those claiming higher expertise demonstrated increased skepticism when AI's suggestions deviated from their professional judgment. The study underscores that MH practitioners neither show unwavering trust in, nor complete adherence to AI, but rather exhibit confirmation bias, predominantly favoring suggestions mirroring their pre-existing beliefs. These insights suggest that while practitioners can potentially correct faulty AI recommendations, the utility of implementing debiased AI to counteract practitioner biases warrants additional investigation.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100066"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000264/pdfft?md5=bafd7fff021f0f74d35c3c67faab7274&pid=1-s2.0-S2949882124000264-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882124000264","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
The surging global demand for mental healthcare (MH) services has amplified interest in using AI-assisted technologies for critical MH components, including assessment and triage. However, while reducing practitioner burden through decision support is a priority in MH-AI integration, the impact of AI systems on practitioner decisions remains under-researched. This study is the first to investigate the interplay between practitioner judgments and AI recommendations in MH diagnostic decision-making. Using a between-subjects vignette design, the study deployed a mock AI system that provided patient triage and assessment information to a sample of MH professionals and psychology students with a strong understanding of assessment and triage procedures. Findings showed that participants were more inclined to trust and accept AI recommendations when the recommendations aligned with their initial diagnoses and professional intuition. Moreover, participants claiming higher expertise demonstrated increased skepticism when the AI's suggestions deviated from their professional judgment. The study underscores that MH practitioners show neither unwavering trust in AI nor complete adherence to it; rather, they exhibit confirmation bias, predominantly favoring suggestions that mirror their pre-existing beliefs. These insights suggest that while practitioners can potentially correct faulty AI recommendations, the utility of implementing debiased AI to counteract practitioner biases warrants additional investigation.