Unlocking maintenance insights in industrial text through semantic search

Computers in Industry | IF 8.2 | CAS Tier 1 (Computer Science) | JCR Q1, Computer Science, Interdisciplinary Applications | Pub Date: 2024-03-21 | DOI: 10.1016/j.compind.2024.104083
Syed Meesam Raza Naqvi , Mohammad Ghufran , Christophe Varnier , Jean-Marc Nicod , Kamran Javed , Noureddine Zerhouni
Full text: https://www.sciencedirect.com/science/article/pii/S0166361524000113
Citations: 0

Abstract


Maintenance records in Computerized Maintenance Management Systems (CMMS) contain valuable human knowledge about maintenance activities. These records consist primarily of noisy, unstructured text written by maintenance experts. The technical nature of the text, combined with a concise writing style and frequent use of abbreviations, makes it difficult to process with classical Natural Language Processing (NLP) pipelines. Due to these complexities, the text must be normalized before being fed to classical machine learning models. Developing such custom normalization pipelines requires manual labor and domain expertise, and it is a time-consuming process that demands constant updates. As a result, this valuable source of information remains under-utilized for generating insights that could support maintenance decision-making. This study proposes a Technical Language Processing (TLP) pipeline for semantic search in industrial text using BERT (Bidirectional Encoder Representations from Transformers), a transformer-based Large Language Model (LLM). The proposed pipeline can automatically process complex unstructured industrial text and does not require custom preprocessing. To adapt the BERT model to the target domain, three unsupervised domain fine-tuning techniques are compared to identify the best strategy for leveraging the tacit knowledge available in industrial text. The proposed approach is validated on two sets of industrial maintenance records, from the mining and aviation domains. Semantic search results are analyzed from both quantitative and qualitative perspectives. The analysis shows that TSDAE, a state-of-the-art unsupervised domain fine-tuning technique, can efficiently identify intricate patterns in industrial text regardless of the associated complexities. The BERT model fine-tuned with TSDAE on industrial text achieved a precision of 0.94 and 0.97 on mining-excavator and aviation maintenance records, respectively.
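At query time, a semantic-search pipeline of this kind reduces to ranking record embeddings by cosine similarity to the query embedding. The sketch below illustrates only that retrieval step; the hand-made 3-d vectors and the `top_k_matches` helper are illustrative stand-ins (a real pipeline would use ~768-d vectors from the TSDAE fine-tuned BERT model, which is not reproduced here).

```python
# Minimal sketch of the retrieval step in a semantic-search pipeline.
# The vectors here are toy stand-ins for embeddings that, in the paper's
# setting, would come from a BERT model fine-tuned with TSDAE.
import numpy as np

def top_k_matches(query_vec, corpus_vecs, corpus_texts, k=2):
    """Rank maintenance records by cosine similarity to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    scores = c @ q                      # cosine similarity per record
    order = np.argsort(-scores)[:k]     # indices of the k best scores
    return [(corpus_texts[i], float(scores[i])) for i in order]

records = [
    "hyd pump leak - replaced seal",    # noisy CMMS-style entries
    "engine oil press low, chk sensor",
    "bucket teeth worn, weld repair",
]
# Stand-in 3-d embeddings, one row per record.
vecs = np.array([[0.9, 0.1, 0.0],
                 [0.1, 0.9, 0.1],
                 [0.0, 0.2, 0.9]])
query = np.array([0.8, 0.2, 0.1])      # stand-in query embedding

for text, score in top_k_matches(query, vecs, records):
    print(f"{score:.2f}  {text}")
```

Because similarity is computed in embedding space rather than over surface tokens, abbreviated entries such as "hyd pump leak" can still match a query phrased as "hydraulic leak", which is precisely what makes the approach attractive for noisy CMMS text.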

Source journal: Computers in Industry
Category: Engineering Technology – Computer Science: Interdisciplinary Applications
CiteScore: 18.90
Self-citation rate: 8.00%
Articles per year: 152
Review time: 22 days
Aims and scope: The objective of Computers in Industry is to present original, high-quality, application-oriented research papers that:
• Illuminate emerging trends and possibilities in the utilization of Information and Communication Technology in industry;
• Establish connections or integrations across various technology domains within the expansive realm of computer applications for industry;
• Foster connections or integrations across diverse application areas of ICT in industry.
Latest articles in this journal:
Rapid quality control for recycled coarse aggregates (RCA) streams: Multi-sensor integration for advanced contaminant detection
Apple varieties and growth prediction with time series classification based on deep learning to impact the harvesting decisions
Maximum subspace transferability discriminant analysis: A new cross-domain similarity measure for wind-turbine fault transfer diagnosis
Dual channel visible graph convolutional neural network for microleakage monitoring of pipeline weld homalographic cracks
Video-based automatic people counting for public transport: On-bus versus off-bus deployment