Background
Clinical decision-making is shaped by healthcare provider-related factors such as experience, qualification, and cognitive skills. AI-based Clinical Decision Support Systems (CDSS) promise to enhance diagnostic accuracy but may also introduce risks, particularly through automation bias. How strongly correct and incorrect AI recommendations influence diagnostic decisions, relative to these provider-related factors, remains poorly understood.
Methods
A simulated diagnostic intervention study was conducted with 223 physicians and nurses, who generated 1,338 decisions while assessing wound maceration from images combined with AI recommendations. Participants first completed a baseline assessment of diagnostic performance without AI support, followed by a second phase that included AI recommendations (correct or incorrect, generated by a convolutional neural network, CNN). Diagnostic decisions were analysed with a generalised linear mixed model (GLMM) to examine how the correctness of the AI recommendation and healthcare provider-related factors (baseline diagnostic performance, qualification, experience, trust in AI, gender, profession, age, healthcare sector) influenced decision accuracy.
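The abstract does not spell out the exact model specification; the following is a minimal sketch of a random-intercept logistic GLMM consistent with the design described above. All symbols (the indices i and j, the coefficient names, the covariate vector) are chosen here for illustration only.

\[
\operatorname{logit}\bigl(\Pr(Y_{ij}=1)\bigr) \;=\; \beta_0 + \beta_1\,\mathrm{AIcorrect}_{ij} + \boldsymbol{\beta}_2^{\top}\mathbf{x}_i + u_i,
\qquad u_i \sim \mathcal{N}(0,\sigma_u^2),
\]

where \(Y_{ij}\) indicates whether decision \(j\) of participant \(i\) was correct, \(\mathrm{AIcorrect}_{ij}\) codes the correctness of the AI recommendation shown for that decision, \(\mathbf{x}_i\) collects the provider-related covariates, and the random intercept \(u_i\) accounts for repeated decisions per participant. Under a specification of this kind, the reported odds ratios correspond to \(\exp(\beta)\) for the respective coefficients.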
Results
AI recommendations had a strong and bidirectional influence on diagnostic accuracy. Participants were ten times more likely to make correct decisions when receiving a correct AI recommendation (OR = 10.0, p < 0.001), whereas their odds of a correct decision decreased reciprocally when the AI recommendation was incorrect. Among provider-related factors, high baseline diagnostic performance (OR = 2.44, p = 0.019), pertinent formal qualifications (OR = 1.40, p = 0.049), longer work experience (OR = 1.89, p = 0.018), and female gender (OR = 1.55, p = 0.008) were associated with higher diagnostic accuracy. Trust in AI, age, profession, and healthcare sector showed no significant effects in the multivariate model. The overall effect of introducing AI was equivocal compared with baseline; however, the effect differed markedly depending on whether the recommendation was correct or incorrect.
Conclusions
AI recommendations can exert a stronger influence on diagnostic decisions than healthcare provider-related factors. While AI support improved accuracy when correct, it reduced accuracy when incorrect, indicating overreliance on the system and posing a substantial safety risk. These findings highlight the dual nature of AI in clinical decision support and underscore the need for consistently high-quality systems in clinical practice. Equally important, clinicians must receive training and support to critically assess AI recommendations when making clinical decisions.