Clinical large language models with misplaced focus

Impact factor 18.8 · CAS Region 1 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence) · Nature Machine Intelligence · Publication date: 2024-11-18 · DOI: 10.1038/s42256-024-00929-0
Zining Luo, Haowei Ma, Zhiwu Li, Yuquan Chen, Yixin Sun, Aimin Hu, Jiang Yu, Yang Qiao, Junxian Gu, Hongying Li, Xuxi Peng, Dunrui Wang, Ying Liu, Zhenglong Liu, Jiebin Xie, Zhen Jiang, Gang Tian

Abstract


On 12 September 2024, OpenAI released two new large language models (LLMs) — o1-preview and o1-mini — marking an important shift in the competitive landscape of commercial LLMs, particularly concerning their reasoning capabilities. Since the introduction of GPT-3.5, OpenAI has launched 31 LLMs in two years. Researchers are rapidly applying these evolving commercial models in clinical medicine, achieving results that sometimes exceed human performance in specific tasks. Although such success is encouraging, the development of the models used for these tasks may not align with the characteristics and needs of clinical practice.

LLMs can be categorized as either open-source or closed-source. Open-source models, such as Meta’s Llama, allow developers to access source code, training data and documentation freely. By contrast, closed-source models are accessed only through official channels or application programming interfaces (APIs). Initially, open-source models dominated the LLM landscape, until the release of OpenAI’s GPT-3 in 2020 (ref. 1), which attracted considerable commercial interest and shifted focus towards closed-source approaches (ref. 2).
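The access distinction above can be sketched in code. This is a minimal illustration, not part of the article: the request shape follows OpenAI's public Chat Completions API, and the model name and clinical question are placeholder examples.

```python
import json

def build_chat_request(model: str, question: str) -> dict:
    """Assemble the JSON body a closed-source model expects over its API."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }

# Closed-source access: the only path to the model is an HTTP request to the
# vendor's endpoint (e.g. https://api.openai.com/v1/chat/completions); the
# weights, training data and internals remain hidden.
body = build_chat_request("o1-mini", "List red-flag symptoms of sepsis.")
payload = json.dumps(body)

# Open-source access, by contrast, means downloading and running the weights
# locally, e.g. (not executed here):
#   from transformers import pipeline
#   generator = pipeline("text-generation", model="meta-llama/Llama-3.1-8B")
```

For clinical deployments this difference matters: API-only access routes patient data through a third party, whereas a locally run open-source model can stay inside the hospital network.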
