Large language models in cancer: potentials, risks, and safeguards.

BJR artificial intelligence | Pub Date: 2024-12-20 | eCollection Date: 2025-01-01 | DOI: 10.1093/bjrai/ubae019 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11703354/pdf/
Md Muntasir Zitu, Tuan Dung Le, Thanh Duong, Shohreh Haddadan, Melany Garcia, Rossybelle Amorrortu, Yayi Zhao, Dana E Rollison, Thanh Thieu
{"title":"Large language models in cancer: potentials, risks, and safeguards.","authors":"Md Muntasir Zitu, Tuan Dung Le, Thanh Duong, Shohreh Haddadan, Melany Garcia, Rossybelle Amorrortu, Yayi Zhao, Dana E Rollison, Thanh Thieu","doi":"10.1093/bjrai/ubae019","DOIUrl":null,"url":null,"abstract":"<p><p>This review examines the use of large language models (LLMs) in cancer, analysing articles sourced from PubMed, Embase, and Ovid Medline, published between 2017 and 2024. Our search strategy included terms related to LLMs, cancer research, risks, safeguards, and ethical issues, focusing on studies that utilized text-based data. 59 articles were included in the review, categorized into 3 segments: quantitative studies on LLMs, chatbot-focused studies, and qualitative discussions on LLMs on cancer. Quantitative studies highlight LLMs' advanced capabilities in natural language processing (NLP), while chatbot-focused articles demonstrate their potential in clinical support and data management. Qualitative research underscores the broader implications of LLMs, including the risks and ethical considerations. Our findings suggest that LLMs, notably ChatGPT, have potential in data analysis, patient interaction, and personalized treatment in cancer care. However, the review identifies critical risks, including data biases and ethical challenges. We emphasize the need for regulatory oversight, targeted model development, and continuous evaluation. In conclusion, integrating LLMs in cancer research offers promising prospects but necessitates a balanced approach focusing on accuracy, ethical integrity, and data privacy. This review underscores the need for further study, encouraging responsible exploration and application of artificial intelligence in oncology.</p>","PeriodicalId":517427,"journal":{"name":"BJR artificial intelligence","volume":"2 1","pages":"ubae019"},"PeriodicalIF":0.0000,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11703354/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"BJR artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/bjrai/ubae019","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This review examines the use of large language models (LLMs) in cancer, analysing articles sourced from PubMed, Embase, and Ovid Medline and published between 2017 and 2024. Our search strategy included terms related to LLMs, cancer research, risks, safeguards, and ethical issues, focusing on studies that utilized text-based data. Fifty-nine articles were included in the review and categorized into three groups: quantitative studies of LLMs, chatbot-focused studies, and qualitative discussions of LLMs in cancer. Quantitative studies highlight LLMs' advanced natural language processing (NLP) capabilities, while chatbot-focused articles demonstrate their potential in clinical support and data management. Qualitative research underscores the broader implications of LLMs, including risks and ethical considerations. Our findings suggest that LLMs, notably ChatGPT, have potential in data analysis, patient interaction, and personalized treatment in cancer care. However, the review identifies critical risks, including data biases and ethical challenges. We emphasize the need for regulatory oversight, targeted model development, and continuous evaluation. In conclusion, integrating LLMs into cancer research offers promising prospects but necessitates a balanced approach focused on accuracy, ethical integrity, and data privacy. This review underscores the need for further study and encourages responsible exploration and application of artificial intelligence in oncology.
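The kind of search strategy described above can be illustrated programmatically for the PubMed portion. The snippet below is a minimal sketch using Biopython's Entrez wrapper around the NCBI E-utilities; the Boolean terms, date filter, and contact email are illustrative assumptions, not the actual query used in the review (which also covered Embase and Ovid Medline through their own platforms).

```python
# Minimal sketch of a PubMed Boolean search combining LLM-related and
# cancer-related terms with a 2017-2024 publication window.
# The query string below is a hypothetical example, not the review's protocol.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI requires a contact email

query = (
    '("large language model*" OR "ChatGPT" OR "GPT-4") '
    'AND (cancer OR oncology OR neoplasm*) '
    'AND ("2017/01/01"[Date - Publication] : "2024/12/31"[Date - Publication])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()

print(f"Matching PubMed records: {record['Count']}")
print(record["IdList"][:10])  # first 10 PMIDs for screening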
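```

Equivalent searches of Embase and Ovid Medline would be run through those platforms' own query interfaces, since they are not served by the NCBI E-utilities.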
