Towards a benchmark dataset for large language models in the context of process automation

Digital Chemical Engineering · IF 4.1 (Q2, Engineering, Chemical) · Pub Date: 2024-09-16 · DOI: 10.1016/j.dche.2024.100186
Tejennour Tizaoui, Ruomu Tan
Digital Chemical Engineering, Volume 13, Article 100186. Full text: https://www.sciencedirect.com/science/article/pii/S2772508124000486
Citations: 0

Abstract

The field of process automation possesses a substantial corpus of textual documentation that can be leveraged with Large Language Models (LLMs) and Natural Language Understanding (NLU) systems. Recent advances in diverse, openly available LLMs present an opportunity to apply them effectively in this area. However, LLMs are pre-trained on general textual data and lack knowledge of more specialized, niche domains such as process automation. Furthermore, the absence of datasets specifically tailored to process automation makes it difficult to accurately assess the effectiveness of LLMs in this domain. This paper aims to lay the foundation for a multitask benchmark for evaluating and adapting LLMs in process automation. We introduce a novel workflow for semi-automated data generation, specifically tailored to creating extractive Question Answering (QA) datasets. The proposed methodology involves extracting passages from academic papers on process automation, generating corresponding questions, and subsequently annotating and evaluating the dataset. The initial dataset also undergoes data augmentation and is evaluated using semantic-similarity metrics. The study then benchmarks six LLMs on the newly created extractive QA dataset for process automation.
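For a concrete sense of how model outputs on an extractive QA dataset are typically scored against annotated reference answers, the sketch below implements SQuAD-style exact match and token-level F1 in plain Python. This is a common baseline metric for extractive QA benchmarks and an illustration only; the paper itself reports semantic-similarity metrics, and all function names here are hypothetical.

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and English articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the normalized answer strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))

def token_f1(prediction: str, reference: str) -> float:
    """Harmonic mean of token-level precision and recall over the answer span."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    if not pred_tokens or not ref_tokens:
        # Both empty counts as a match; one empty counts as a miss.
        return float(pred_tokens == ref_tokens)
    common = Counter(pred_tokens) & Counter(ref_tokens)  # multiset overlap
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

In practice each model prediction is scored against every annotated gold answer for a question and the maximum is taken, so minor paraphrases of the extracted span are not unduly penalized.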
Source journal: Digital Chemical Engineering
CiteScore: 3.10
Self-citation rate: 0.00%
Publication volume: 0
Latest articles in this journal:
- Online learning supported surrogate-based flowsheet model maintenance
- Simulation of TCM-based ionic liquids density behavior in a mixture with ethanol using machine learning approaches
- DFT in catalysis: Complex equations for practical computing applications in chemistry
- Data driven prediction of hydrochar yields from biomass hydrothermal carbonization using extreme gradient boosting algorithm with principal component analysis
- Early-stage chemical process screening through hybrid modeling: Introduction and case study of a reaction–crystallization process