Development and bilingual evaluation of Japanese medical large language model within reasonably low computational resources

Issey Sukeda
{"title":"在合理的低计算资源条件下开发日本医学大语言模型并进行双语评估","authors":"Issey Sukeda","doi":"arxiv-2409.11783","DOIUrl":null,"url":null,"abstract":"The recent success of large language models (LLMs) and the scaling law has\nled to a widespread adoption of larger models. Particularly in the healthcare\nindustry, there is an increasing demand for locally operated LLMs due to\nsecurity concerns. However, the majority of high quality open-source LLMs have\na size of 70B parameters, imposing significant financial burdens on users for\nGPU preparation and operation. To overcome these issues, we present a medical\nadaptation based on the recent 7B models, which enables the operation in low\ncomputational resources. We compare the performance on medical\nquestion-answering benchmarks in two languages (Japanese and English),\ndemonstrating that its scores reach parity with or surpass those of currently\nexisting medical LLMs that are ten times larger. We find that fine-tuning an\nEnglish-centric base model on Japanese medical dataset improves the score in\nboth language, supporting the effect of cross-lingual knowledge transfer. We\nhope that this study will alleviate financial challenges, serving as a stepping\nstone for clinical institutions to practically utilize LLMs locally. Our\nevaluation code is available at\nhttps://huggingface.co/stardust-coder/jmedllm-7b-v1.","PeriodicalId":501030,"journal":{"name":"arXiv - CS - Computation and Language","volume":"11 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Development and bilingual evaluation of Japanese medical large language model within reasonably low computational resources\",\"authors\":\"Issey Sukeda\",\"doi\":\"arxiv-2409.11783\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The recent success of large language models (LLMs) and the scaling law has\\nled to a widespread adoption of larger models. Particularly in the healthcare\\nindustry, there is an increasing demand for locally operated LLMs due to\\nsecurity concerns. However, the majority of high quality open-source LLMs have\\na size of 70B parameters, imposing significant financial burdens on users for\\nGPU preparation and operation. To overcome these issues, we present a medical\\nadaptation based on the recent 7B models, which enables the operation in low\\ncomputational resources. We compare the performance on medical\\nquestion-answering benchmarks in two languages (Japanese and English),\\ndemonstrating that its scores reach parity with or surpass those of currently\\nexisting medical LLMs that are ten times larger. We find that fine-tuning an\\nEnglish-centric base model on Japanese medical dataset improves the score in\\nboth language, supporting the effect of cross-lingual knowledge transfer. We\\nhope that this study will alleviate financial challenges, serving as a stepping\\nstone for clinical institutions to practically utilize LLMs locally. 
Our\\nevaluation code is available at\\nhttps://huggingface.co/stardust-coder/jmedllm-7b-v1.\",\"PeriodicalId\":501030,\"journal\":{\"name\":\"arXiv - CS - Computation and Language\",\"volume\":\"11 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computation and Language\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11783\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computation and Language","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11783","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The recent success of large language models (LLMs) and the scaling law has led to widespread adoption of larger models. Particularly in the healthcare industry, there is an increasing demand for locally operated LLMs due to security concerns. However, the majority of high-quality open-source LLMs have a size of 70B parameters, imposing significant financial burdens on users for GPU preparation and operation. To overcome these issues, we present a medical adaptation based on recent 7B models, which enables operation with low computational resources. We compare performance on medical question-answering benchmarks in two languages (Japanese and English), demonstrating that its scores reach parity with or surpass those of currently existing medical LLMs that are ten times larger. We find that fine-tuning an English-centric base model on a Japanese medical dataset improves the score in both languages, supporting the effect of cross-lingual knowledge transfer. We hope that this study will alleviate financial challenges, serving as a stepping stone for clinical institutions to practically utilize LLMs locally. Our evaluation code is available at https://huggingface.co/stardust-coder/jmedllm-7b-v1.
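The checkpoint named in the URL above can be loaded with the Hugging Face transformers library. The following is a minimal sketch, not the authors' released evaluation code, of querying the model on a multiple-choice medical question; the prompt template, placeholder question text, data type, and generation settings are illustrative assumptions, while the model ID comes from the URL in the abstract.

```python
# Minimal inference sketch (assumptions: prompt format and generation settings are illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stardust-coder/jmedllm-7b-v1"  # model ID taken from the URL in the abstract

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # a 7B model in fp16 fits on a single consumer/workstation GPU
    device_map="auto",
)

# Hypothetical multiple-choice item in the style of a medical QA benchmark (placeholders only).
prompt = (
    "Answer the following multiple-choice medical question with a single letter (A-D).\n"
    "Question: <question text>\n"
    "A. <option 1>\nB. <option 2>\nC. <option 3>\nD. <option 4>\n"
    "Answer:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=8, do_sample=False)

# Print only the newly generated tokens (the model's answer letter).
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

In a benchmark setting one would typically loop this over every question in the Japanese and English test sets and score the extracted answer letter against the gold label; the released repository should be consulted for the exact prompts and scoring used in the paper.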