Beyond the stereotypes: Artificial Intelligence image generation and diversity in anesthesiology.

Frontiers in Artificial Intelligence (IF 3.0, Q2 in Computer Science, Artificial Intelligence) · Pub Date: 2024-10-09 · eCollection Date: 2024-01-01 · DOI: 10.3389/frai.2024.1462819
Mia Gisselbaek, Laurens Minsart, Ekin Köselerli, Mélanie Suppan, Basak Ceyda Meco, Laurence Seidel, Adelin Albert, Odmara L Barreto Chang, Sarah Saxena, Joana Berger-Estilita
{"title":"Beyond the stereotypes: Artificial Intelligence image generation and diversity in anesthesiology.","authors":"Mia Gisselbaek, Laurens Minsart, Ekin Köselerli, Mélanie Suppan, Basak Ceyda Meco, Laurence Seidel, Adelin Albert, Odmara L Barreto Chang, Sarah Saxena, Joana Berger-Estilita","doi":"10.3389/frai.2024.1462819","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction: </strong>Artificial Intelligence (AI) is increasingly being integrated into anesthesiology to enhance patient safety, improve efficiency, and streamline various aspects of practice.</p><p><strong>Objective: </strong>This study aims to evaluate whether AI-generated images accurately depict the demographic racial and ethnic diversity observed in the Anesthesia workforce and to identify inherent social biases in these images.</p><p><strong>Methods: </strong>This cross-sectional analysis was conducted from January to February 2024. Demographic data were collected from the American Society of Anesthesiologists (ASA) and the European Society of Anesthesiology and Intensive Care (ESAIC). Two AI text-to-image models, ChatGPT DALL-E 2 and Midjourney, generated images of anesthesiologists across various subspecialties. Three independent reviewers assessed and categorized each image based on sex, race/ethnicity, age, and emotional traits.</p><p><strong>Results: </strong>A total of 1,200 images were analyzed. We found significant discrepancies between AI-generated images and actual demographic data. The models predominantly portrayed anesthesiologists as White, with ChatGPT DALL-E2 at 64.2% and Midjourney at 83.0%. Moreover, male gender was highly associated with White ethnicity by ChatGPT DALL-E2 (79.1%) and with non-White ethnicity by Midjourney (87%). Age distribution also varied significantly, with younger anesthesiologists underrepresented. The analysis also revealed predominant traits such as \"masculine, \"\"attractive, \"and \"trustworthy\" across various subspecialties.</p><p><strong>Conclusion: </strong>AI models exhibited notable biases in gender, race/ethnicity, and age representation, failing to reflect the actual diversity within the anesthesiologist workforce. These biases highlight the need for more diverse training datasets and strategies to mitigate bias in AI-generated images to ensure accurate and inclusive representations in the medical field.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1462819"},"PeriodicalIF":3.0000,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11497631/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/frai.2024.1462819","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Introduction: Artificial Intelligence (AI) is increasingly being integrated into anesthesiology to enhance patient safety, improve efficiency, and streamline various aspects of practice.

Objective: This study aims to evaluate whether AI-generated images accurately depict the demographic, racial, and ethnic diversity observed in the anesthesia workforce and to identify inherent social biases in these images.

Methods: This cross-sectional analysis was conducted from January to February 2024. Demographic data were collected from the American Society of Anesthesiologists (ASA) and the European Society of Anesthesiology and Intensive Care (ESAIC). Two AI text-to-image models, ChatGPT DALL-E 2 and Midjourney, generated images of anesthesiologists across various subspecialties. Three independent reviewers assessed and categorized each image based on sex, race/ethnicity, age, and emotional traits.
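
The abstract does not state whether DALL-E 2 was driven through the ChatGPT interface or programmatically, nor does it give the exact prompts; purely as an illustration of the image-generation step, a minimal sketch using the OpenAI Images API is shown below (the API call is real, but the subspecialty list and prompt wording are assumptions, not taken from the study).

```python
# Illustrative sketch only: batch-generating anesthesiology subspecialty portraits
# with the OpenAI Images API. The subspecialty list and prompt wording are assumed,
# not taken from the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

subspecialties = ["pediatric", "cardiac", "obstetric", "regional"]  # hypothetical list

for subspecialty in subspecialties:
    prompt = f"A portrait of a {subspecialty} anesthesiologist at work"  # assumed wording
    response = client.images.generate(
        model="dall-e-2",   # DALL-E 2, one of the two models used in the study
        prompt=prompt,
        n=4,                # several images per prompt for later demographic coding
        size="1024x1024",
    )
    for i, image in enumerate(response.data):
        print(subspecialty, i, image.url)  # URLs would be downloaded and archived for review
```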

Results: A total of 1,200 images were analyzed. We found significant discrepancies between AI-generated images and actual demographic data. The models predominantly portrayed anesthesiologists as White, with ChatGPT DALL-E 2 at 64.2% and Midjourney at 83.0%. Moreover, male gender was highly associated with White ethnicity by ChatGPT DALL-E 2 (79.1%) and with non-White ethnicity by Midjourney (87%). Age distribution also varied significantly, with younger anesthesiologists underrepresented. The analysis also revealed predominant traits such as "masculine," "attractive," and "trustworthy" across various subspecialties.
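
The abstract does not name the statistical test behind the reported discrepancies; as a hedged illustration of how observed image counts could be compared against workforce proportions, the sketch below runs a chi-square goodness-of-fit test on made-up numbers (both the counts and the workforce proportions are placeholders, not the study's data).

```python
# Illustrative sketch only: chi-square goodness-of-fit test comparing the race/ethnicity
# distribution of AI-generated images against workforce proportions. All numbers are
# placeholders, not figures from the study.
from scipy.stats import chisquare

observed = [498, 102]            # hypothetical White vs. non-White image counts for one model
workforce_props = [0.55, 0.45]   # placeholder workforce proportions (e.g., from ASA/ESAIC data)
expected = [p * sum(observed) for p in workforce_props]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.1f}, p = {p_value:.3g}")  # a small p-value indicates deviation from workforce data
```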

Conclusion: AI models exhibited notable biases in gender, race/ethnicity, and age representation, failing to reflect the actual diversity within the anesthesiologist workforce. These biases highlight the need for more diverse training datasets and strategies to mitigate bias in AI-generated images to ensure accurate and inclusive representations in the medical field.
