Obstructive sleep apnea (OSA) is stereotypically a condition of the middle-aged, obese, snoring man, a depiction that obscures the true diversity of the affected population. The growing use of artificial intelligence (AI) text-to-image generators for medical applications risks further reinforcing this bias through prejudiced visual depictions of disease. We analyzed 1,000 images generated by ChatGPT-4o using the prompt "Person with obstructive sleep apnea" to evaluate the demographics represented in its portrayals of OSA. ChatGPT-4o consistently portrayed individuals as middle-aged (98.3%), male (99.8%), White (94.7%), and obese/overweight (97.2%), a representation that differs markedly from real-world prevalence along every axis. These findings suggest that AI-generated imagery may draw on and reinforce a narrow and outdated perception of OSA, potentially contributing to diagnostic bias and health disparities. As AI becomes increasingly integrated into clinical practice and educational tools, ensuring accurate and inclusive representations will be essential to advancing equity in sleep medicine.
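As a rough illustration of the kind of pipeline the abstract describes, the sketch below generates images from the same prompt through the OpenAI Images API and then tallies demographic percentages from a rater-annotation file. This is a minimal sketch, not the study's actual tooling: the model name ("gpt-image-1"), the annotations.csv layout and column names, and the output filenames are assumptions, and the study itself used the ChatGPT-4o interface with its own annotation procedure.

```python
# Minimal sketch of an image-generation and demographic-tallying pipeline.
# Assumptions (not from the study): the "gpt-image-1" API model as a stand-in
# for the ChatGPT-4o interface, a hypothetical annotations.csv produced by
# human raters, and the file naming scheme below.
import base64
import csv
from collections import Counter

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment
PROMPT = "Person with obstructive sleep apnea"
N_IMAGES = 1000

# 1) Generate images one at a time and save them for manual annotation.
for i in range(N_IMAGES):
    result = client.images.generate(model="gpt-image-1", prompt=PROMPT, n=1)
    with open(f"osa_{i:04d}.png", "wb") as f:
        f.write(base64.b64decode(result.data[0].b64_json))

# 2) After raters annotate each image (hypothetical annotations.csv with
#    columns: image, age_group, sex, race, body_habitus), tally percentages
#    along each demographic axis.
with open("annotations.csv", newline="") as f:
    rows = list(csv.DictReader(f))

for axis in ("age_group", "sex", "race", "body_habitus"):
    counts = Counter(row[axis] for row in rows)
    total = sum(counts.values())
    summary = ", ".join(
        f"{label}: {100 * n / total:.1f}%" for label, n in counts.most_common()
    )
    print(f"{axis}: {summary}")
```

Run against an annotation file in that assumed layout, the final loop would print one line per axis (e.g., sex, race), reproducing summary percentages of the form reported in the abstract.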
