Ethical Considerations in Human-Centered AI: Advancing Oncology Chatbots Through Large Language Models.

James C L Chow, Kay Li
{"title":"Ethical Considerations in Human-Centered AI: Advancing Oncology Chatbots Through Large Language Models.","authors":"James C L Chow, Kay Li","doi":"10.2196/64406","DOIUrl":null,"url":null,"abstract":"<p><p>The integration of chatbots in oncology underscores the pressing need for human-centered artificial intelligence (AI) that addresses patient and family concerns with empathy and precision. Human-centered AI emphasizes ethical principles, empathy, and user-centric approaches, ensuring technology aligns with human values and needs. This review critically examines the ethical implications of using large language models (LLMs) like GPT-3 and GPT-4 (OpenAI) in oncology chatbots. It examines how these models replicate human-like language patterns, impacting the design of ethical AI systems. The paper identifies key strategies for ethically developing oncology chatbots, focusing on potential biases arising from extensive datasets and neural networks. Specific datasets, such as those sourced from predominantly Western medical literature and patient interactions, may introduce biases by overrepresenting certain demographic groups. Moreover, the training methodologies of LLMs, including fine-tuning processes, can exacerbate these biases, leading to outputs that may disproportionately favor affluent or Western populations while neglecting marginalized communities. By providing examples of biased outputs in oncology chatbots, the review highlights the ethical challenges LLMs present and the need for mitigation strategies. 
The study emphasizes integrating human-centric values into AI to mitigate these biases, ultimately advocating for the development of oncology chatbots that are aligned with ethical principles and capable of serving diverse patient populations equitably.</p>","PeriodicalId":73552,"journal":{"name":"JMIR bioinformatics and biotechnology","volume":" ","pages":"e64406"},"PeriodicalIF":0.0000,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11579624/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"JMIR bioinformatics and biotechnology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2196/64406","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The integration of chatbots in oncology underscores the pressing need for human-centered artificial intelligence (AI) that addresses patient and family concerns with empathy and precision. Human-centered AI emphasizes ethical principles, empathy, and user-centric approaches, ensuring technology aligns with human values and needs. This review critically examines the ethical implications of using large language models (LLMs) like GPT-3 and GPT-4 (OpenAI) in oncology chatbots. It explores how these models replicate human-like language patterns, impacting the design of ethical AI systems. The paper identifies key strategies for ethically developing oncology chatbots, focusing on potential biases arising from extensive datasets and neural networks. Specific datasets, such as those sourced from predominantly Western medical literature and patient interactions, may introduce biases by overrepresenting certain demographic groups. Moreover, the training methodologies of LLMs, including fine-tuning processes, can exacerbate these biases, leading to outputs that may disproportionately favor affluent or Western populations while neglecting marginalized communities. By providing examples of biased outputs in oncology chatbots, the review highlights the ethical challenges LLMs present and the need for mitigation strategies. The study emphasizes integrating human-centric values into AI to mitigate these biases, ultimately advocating for the development of oncology chatbots that are aligned with ethical principles and capable of serving diverse patient populations equitably.
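The dataset-bias concern raised above can be made concrete with a minimal sketch: a hypothetical representation audit that counts how often documents mentioning different demographic groups appear in a training corpus. The group keywords, the `representation_counts` helper, and the sample corpus below are illustrative assumptions for this sketch, not anything described in the paper itself.

```python
from collections import Counter

# Hypothetical keyword lists for a representation audit; a real audit would
# use validated demographic annotations, not simple keyword matching.
GROUP_KEYWORDS = {
    "western": ["american", "european"],
    "non_western": ["african", "asian", "latin american"],
}

def representation_counts(corpus):
    """Count how many documents mention each demographic group at least once."""
    counts = Counter()
    for doc in corpus:
        text = doc.lower()
        for group, keywords in GROUP_KEYWORDS.items():
            if any(kw in text for kw in keywords):
                counts[group] += 1
    return counts

# Illustrative toy corpus, skewed toward Western sources.
corpus = [
    "A cohort of American patients with breast cancer ...",
    "European trial data on chemotherapy outcomes ...",
    "Survey of African oncology clinics ...",
]
print(representation_counts(corpus))  # Counter({'western': 2, 'non_western': 1})
```

A skewed count like this would flag the corpus for rebalancing or reweighting before fine-tuning, one simple way to operationalize the mitigation strategies the review calls for.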
