Assessment of Fine-Tuned Large Language Models for Real-World Chemistry and Material Science Applications

IF 7.6 | CAS Region 1 (Chemistry) | Q1 CHEMISTRY, MULTIDISCIPLINARY | Chemical Science | Pub Date: 2024-11-22 | DOI: 10.1039/d4sc04401k
Joren Van Herck, María Victoria Gil, Kevin Maik Jablonka, Alex Abrudan, Andy Sode Anker, Mehrdad Asgari, Ben J Blaiszik, Antonio Buffo, Leander Choudhury, Clemence Corminboeuf, Hilal Daglar, Amir Mohammad Elahi, Ian T. Foster, Susana García, Matthew Garvin, Guillaume Godin, Lydia L. Good, Jianan Gu, Noémie Xiao Hu, Xin Jin, Tanja Junkers, Seda Keskin, Tuomas Knowles, Ruben Laplaza, Michele Lessona, Sauradeep Majumdar, Hossein Mashhadimoslem, Ruaraidh D McIntosh, Seyed Mohamad Moosavi, Beatriz Mouriño, Francesca Nerli, Cova Pevida, Neda Poudineh, Mahyar Rajabi Kochi, Kadi-Liis Saar, Fahimeh Hooriabad Saboor, Morteza Sagharichiha, K. J. Schmidt, Jiale Shi, Elena Simone, Dennis Svatunek, Marco Taddei, Igor V. Tetko, Domonkos Tolnai, Sahar Vahdatifar, Jonathan K. Whitmer, Florian Wieland, Regine Willumeit-Römer, Andreas Züttel, Berend Smit
{"title":"评估用于真实世界化学和材料科学应用的微调大语言模型","authors":"Joren Van Herck, María Victoria Gil, Kevin Maik Jablonka, Alex Abrudan, Andy Sode Anker, Mehrdad Asgari, Ben J Blaiszik, Antonio Buffo, Leander Choudhury, Clemence Corminboeuf, Hilal Daglar, Amir Mohammad Elahi, Ian T. Foster, Susana García, Matthew Garvin, Guillaume Godin, Lydia L. Good, Jianan Gu, Noémie Xiao Hu, Xin Jin, Tanja Junkers, Seda Keskin, Tuomas Knowles, Ruben Laplaza, Michele Lessona, Sauradeep Majumdar, Hossein Mashhadimoslem, Ruaraidh D McIntosh, Seyed Mohamad Moosavi, Beatriz Mouriño, Francesca Nerli, Cova Pevida, Neda Poudineh, Mahyar Rajabi Kochi, Kadi-Liis Saar, Fahimeh Hooriabad Saboor, Morteza Sagharichiha, K. J. Schmidt, Jiale Shi, Elena Simone, Dennis Svatunek, Marco Taddei, Igor V. Tetko, Domonkos Tolnai, Sahar Vahdatifar, Jonathan K. Whitmer, Florian Wieland, Regine Willumeit-Römer, Andreas Züttel, Berend Smit","doi":"10.1039/d4sc04401k","DOIUrl":null,"url":null,"abstract":"The current generation of large language models (LLMs) have limited chemical knowledge. Recently, it has been shown that these LLMs can learn and predict chemical properties through fine-tuning. Using natural language to train machine learning models opens doors to a wider chemical audience, as field-specific featurization techniques can be omitted. In this work, we explore the potential and limitations of this approach. We studied the performance of fine-tuning three open-source LLMs (GPT-J-6B, Llama-3.1-8B, and Mistral-7B) for a range of different chemical questions. We benchmark their performances against ``traditional\" machine learning models and find that, in most cases, the fine-tuning approach is superior for a simple classification problem. Depending on the size of the dataset and the type of questions, we also successfully address more sophisticated problems. The most important conclusions of this work are that, for all datasets considered, their conversion into an LLM fine-tuning training set is straightforward and that fine-tuning with even relatively small datasets leads to predictive models. These results suggest that the systematic use of LLMs to guide experiments and simulations will be a powerful technique in any research study, significantly reducing unnecessary experiments or computations.","PeriodicalId":9909,"journal":{"name":"Chemical Science","volume":"4 1","pages":""},"PeriodicalIF":7.6000,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Assessment of Fine-Tuned Large Language Models for Real-World Chemistry and Material Science Applications\",\"authors\":\"Joren Van Herck, María Victoria Gil, Kevin Maik Jablonka, Alex Abrudan, Andy Sode Anker, Mehrdad Asgari, Ben J Blaiszik, Antonio Buffo, Leander Choudhury, Clemence Corminboeuf, Hilal Daglar, Amir Mohammad Elahi, Ian T. Foster, Susana García, Matthew Garvin, Guillaume Godin, Lydia L. Good, Jianan Gu, Noémie Xiao Hu, Xin Jin, Tanja Junkers, Seda Keskin, Tuomas Knowles, Ruben Laplaza, Michele Lessona, Sauradeep Majumdar, Hossein Mashhadimoslem, Ruaraidh D McIntosh, Seyed Mohamad Moosavi, Beatriz Mouriño, Francesca Nerli, Cova Pevida, Neda Poudineh, Mahyar Rajabi Kochi, Kadi-Liis Saar, Fahimeh Hooriabad Saboor, Morteza Sagharichiha, K. J. Schmidt, Jiale Shi, Elena Simone, Dennis Svatunek, Marco Taddei, Igor V. Tetko, Domonkos Tolnai, Sahar Vahdatifar, Jonathan K. 
Whitmer, Florian Wieland, Regine Willumeit-Römer, Andreas Züttel, Berend Smit\",\"doi\":\"10.1039/d4sc04401k\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The current generation of large language models (LLMs) have limited chemical knowledge. Recently, it has been shown that these LLMs can learn and predict chemical properties through fine-tuning. Using natural language to train machine learning models opens doors to a wider chemical audience, as field-specific featurization techniques can be omitted. In this work, we explore the potential and limitations of this approach. We studied the performance of fine-tuning three open-source LLMs (GPT-J-6B, Llama-3.1-8B, and Mistral-7B) for a range of different chemical questions. We benchmark their performances against ``traditional\\\" machine learning models and find that, in most cases, the fine-tuning approach is superior for a simple classification problem. Depending on the size of the dataset and the type of questions, we also successfully address more sophisticated problems. The most important conclusions of this work are that, for all datasets considered, their conversion into an LLM fine-tuning training set is straightforward and that fine-tuning with even relatively small datasets leads to predictive models. These results suggest that the systematic use of LLMs to guide experiments and simulations will be a powerful technique in any research study, significantly reducing unnecessary experiments or computations.\",\"PeriodicalId\":9909,\"journal\":{\"name\":\"Chemical Science\",\"volume\":\"4 1\",\"pages\":\"\"},\"PeriodicalIF\":7.6000,\"publicationDate\":\"2024-11-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Chemical Science\",\"FirstCategoryId\":\"92\",\"ListUrlMain\":\"https://doi.org/10.1039/d4sc04401k\",\"RegionNum\":1,\"RegionCategory\":\"化学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"CHEMISTRY, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Chemical Science","FirstCategoryId":"92","ListUrlMain":"https://doi.org/10.1039/d4sc04401k","RegionNum":1,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"CHEMISTRY, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

The current generation of large language models (LLMs) has limited chemical knowledge. Recently, it has been shown that these LLMs can learn and predict chemical properties through fine-tuning. Using natural language to train machine learning models opens the door to a wider chemical audience, as field-specific featurization techniques can be omitted. In this work, we explore the potential and limitations of this approach. We studied the performance of fine-tuning three open-source LLMs (GPT-J-6B, Llama-3.1-8B, and Mistral-7B) on a range of different chemical questions. We benchmark their performance against "traditional" machine learning models and find that, in most cases, the fine-tuning approach is superior for simple classification problems. Depending on the size of the dataset and the type of question, we also successfully address more sophisticated problems. The most important conclusions of this work are that, for all datasets considered, conversion into an LLM fine-tuning training set is straightforward, and that fine-tuning with even relatively small datasets yields predictive models. These results suggest that the systematic use of LLMs to guide experiments and simulations will be a powerful technique in any research study, significantly reducing unnecessary experiments and computations.
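The claim that converting a dataset into a fine-tuning training set is straightforward can be made concrete with a short sketch. The following Python snippet shows one plausible way to turn a tabular chemistry dataset into prompt-completion pairs in JSONL form; the column names ("smiles", "is_soluble"), the question template, and the output file name are illustrative assumptions, not the paper's exact format.

```python
import json

import pandas as pd

def to_finetune_records(df: pd.DataFrame) -> list[dict]:
    """Turn each row of a tabular dataset into a natural-language Q&A pair."""
    records = []
    for _, row in df.iterrows():
        records.append({
            "prompt": f"Is the molecule with SMILES {row['smiles']} water soluble?",
            "completion": "yes" if row["is_soluble"] else "no",
        })
    return records

if __name__ == "__main__":
    # Toy two-row dataset standing in for a real property database.
    df = pd.DataFrame({
        "smiles": ["CCO", "c1ccccc1"],  # ethanol, benzene
        "is_soluble": [True, False],
    })
    # One JSON object per line (JSONL), a common fine-tuning input format.
    with open("train.jsonl", "w") as f:
        for rec in to_finetune_records(df):
            f.write(json.dumps(rec) + "\n")
```

Note that no molecular featurization occurs here: the raw SMILES string is embedded directly in a natural-language question, which is precisely what makes the approach accessible to a wider chemical audience.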
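A fine-tuning run on such a file could then be set up roughly as follows, here as a minimal sketch using the Hugging Face transformers, datasets, and peft libraries with LoRA adapters on Mistral-7B, one of the three models studied. The paper does not specify this training setup; the hyperparameters, output directory, and the reuse of "train.jsonl" from the previous sketch are all assumptions.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Mistral has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Attach low-rank adapters so only a small fraction of weights is trained.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

def tokenize(example):
    # Concatenate question and answer into one causal-LM training string.
    return tokenizer(example["prompt"] + " " + example["completion"])

train_data = load_dataset("json", data_files="train.jsonl")["train"].map(tokenize)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-chem", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=train_data,
    # mlm=False gives standard next-token (causal) language-modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```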
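For contrast, the "traditional" machine learning route that the abstract benchmarks against typically begins with an explicit, field-specific featurization step. The sketch below uses one common choice, Morgan fingerprints computed with RDKit feeding a scikit-learn random forest; the featurizer, model, and toy labels are illustrative assumptions, not necessarily what the paper used.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles: str) -> np.ndarray:
    """Encode a molecule as a 2048-bit Morgan fingerprint (radius 2)."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    return np.array(fp)

# Toy data: SMILES strings with made-up binary solubility labels.
smiles = ["CCO", "c1ccccc1", "CC(=O)O", "c1ccc2ccccc2c1"]
labels = [1, 0, 1, 0]

X = np.stack([featurize(s) for s in smiles])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(featurize("CCCO").reshape(1, -1)))  # 1-propanol
```

The fingerprinting step is exactly the domain-specific featurization that the fine-tuning approach lets a practitioner skip.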
Source journal: Chemical Science (CHEMISTRY, MULTIDISCIPLINARY)
CiteScore: 14.40
Self-citation rate: 4.80%
Articles published: 1352
Review time: 2.1 months
Journal description: Chemical Science is a journal that encompasses various disciplines within the chemical sciences. Its scope includes publishing ground-breaking research with significant implications for its respective field, as well as appealing to a wider audience in related areas. To be considered for publication, articles must showcase innovative and original advances in their field of study and be presented in a manner that is understandable to scientists from diverse backgrounds. However, the journal generally does not publish highly specialized research.
Latest articles in this journal:
Adding multiple electrons to helicenes: how they respond?
Dynamic selection in metallo-organic cube CdⅡ8L4 conformations induced by perfluorooctanoate encapsulation
Assessment of Fine-Tuned Large Language Models for Real-World Chemistry and Material Science Applications
Ag(I) emitters with ultrafast spin-flip dynamics for high-efficiency electroluminescence
Cooperative Photoredox and N-Heterocyclic Carbene-Catalyzed Formal C-H Acylation of Cyclopropanes via Deconstruction-Reconstruction Strategy