Exploring the expertise of large language models in materials science and metallurgical engineering†

Digital Discovery, 2025, 2, 500–512 · IF 6.2 · Q1 (Chemistry, Multidisciplinary) · Published: 2025-01-20 · DOI: 10.1039/D4DD00319E
Christophe Bajan and Guillaume Lambard
{"title":"Exploring the expertise of large language models in materials science and metallurgical engineering†","authors":"Christophe Bajan and Guillaume Lambard","doi":"10.1039/D4DD00319E","DOIUrl":null,"url":null,"abstract":"<p >The integration of artificial intelligence into various domains is rapidly increasing, with Large Language Models (LLMs) becoming more prevalent in numerous applications. This work is included in an overall project which aims to train an LLM specifically in the field of materials science. To assess the impact of this specialized training, it is essential to establish the baseline performance of existing LLMs in materials science. In this study, we evaluated 15 different LLMs using the MaScQA question answering (Q&amp;A) benchmark. This benchmark comprises questions from the Graduate Aptitude Test in Engineering (GATE), tailored to test models' capabilities in answering questions related to materials science and metallurgical engineering. Our results indicate that closed-source LLMs, such as Claude-3.5-Sonnet and GPT-4o, perform the best with an overall accuracy of ∼84%, while open-source models, such as Llama3-70b and Phi3-14b, top at ∼56% and ∼43%, respectively. These findings provide a baseline for the raw capabilities of LLMs on Q&amp;A tasks applied to materials science, and emphasise the substantial improvement that could be brought to open-source models <em>via</em> prompt engineering and fine-tuning strategies. We anticipate that this work could push the adoption of LLMs as valuable assistants in materials science, demonstrating their utilities in this specialised domain and related sub-domains.</p>","PeriodicalId":72816,"journal":{"name":"Digital discovery","volume":" 2","pages":" 500-512"},"PeriodicalIF":6.2000,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://pubs.rsc.org/en/content/articlepdf/2025/dd/d4dd00319e?page=search","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Digital discovery","FirstCategoryId":"1085","ListUrlMain":"https://pubs.rsc.org/en/content/articlelanding/2025/dd/d4dd00319e","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"CHEMISTRY, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

The integration of artificial intelligence into various domains is rapidly increasing, with Large Language Models (LLMs) becoming more prevalent in numerous applications. This work is part of a broader project that aims to train an LLM specifically for the field of materials science. To assess the impact of this specialized training, it is essential to establish the baseline performance of existing LLMs in materials science. In this study, we evaluated 15 different LLMs using the MaScQA question answering (Q&A) benchmark. This benchmark comprises questions from the Graduate Aptitude Test in Engineering (GATE), tailored to test models' capabilities in answering questions related to materials science and metallurgical engineering. Our results indicate that closed-source LLMs, such as Claude-3.5-Sonnet and GPT-4o, perform best with an overall accuracy of ∼84%, while open-source models, such as Llama3-70b and Phi3-14b, peak at ∼56% and ∼43%, respectively. These findings provide a baseline for the raw capabilities of LLMs on Q&A tasks applied to materials science, and emphasise the substantial improvement that prompt engineering and fine-tuning strategies could bring to open-source models. We anticipate that this work could promote the adoption of LLMs as valuable assistants in materials science, demonstrating their utility in this specialised domain and related sub-domains.
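For readers who want a concrete picture of how a multiple-choice benchmark like MaScQA yields an overall accuracy figure, the sketch below shows a minimal evaluation harness. It is an illustrative assumption only: the `evaluate` function, the question schema, and the stubbed model call are hypothetical and do not reproduce the authors' actual pipeline, which should be consulted in the paper itself.

```python
# Minimal sketch of a MaScQA-style multiple-choice evaluation loop.
# The question schema and the model-call interface are assumptions
# for illustration, not the paper's actual pipeline.
from typing import Callable

def evaluate(model: Callable[[str], str], questions: list[dict]) -> float:
    """Return overall accuracy: the fraction of questions whose
    predicted option letter matches the answer key."""
    correct = 0
    for q in questions:
        # Format the question and its options into a single prompt.
        prompt = q["question"] + "\n" + "\n".join(
            f"({letter}) {text}" for letter, text in q["choices"].items()
        )
        prediction = model(prompt).strip().upper()  # expect "A", "B", ...
        correct += prediction == q["answer"]
    return correct / len(questions)

if __name__ == "__main__":
    # One toy question in the assumed schema (not taken from MaScQA).
    toy = [{
        "question": "Which crystal structure does alpha-iron adopt at room temperature?",
        "choices": {"A": "FCC", "B": "BCC", "C": "HCP", "D": "Diamond cubic"},
        "answer": "B",
    }]
    stub = lambda prompt: "B"  # stand-in for a real LLM API call
    print(f"Overall accuracy: {evaluate(stub, toy):.0%}")  # -> 100%
```

In an actual run, `model` would wrap an API client for each of the 15 LLMs compared, and accuracy would be aggregated per model and per question category rather than over a single toy item.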


Source journal: Digital Discovery
CiteScore: 2.80
Self-citation rate: 0.00%
Latest articles in this journal:
A universal machine learning model for the electronic density of states.
Precision fragment addition: domain-specific DeepFrag2 models for smarter lead optimization.
MC3D: the materials cloud computational database of experimentally known stoichiometric inorganics.
Scientific knowledge graph and ontology generation using open large language models.
A simple compound prioritization method for drug discovery considering multi-target binding.