Processing of Condition Monitoring Annotations with BERT and Technical Language Substitution: A Case Study

Karl Lowenmark, C. Taal, Joakim Nivre, M. Liwicki, Fredrik Sandin
{"title":"Processing of Condition Monitoring Annotations with BERT and Technical Language Substitution: A Case Study","authors":"Karl Lowenmark, C. Taal, Joakim Nivre, M. Liwicki, Fredrik Sandin","doi":"10.36001/phme.2022.v7i1.3356","DOIUrl":null,"url":null,"abstract":"Annotations in condition monitoring systems contain information regarding asset history and fault characteristics in the form of unstructured text that could, if unlocked, be used for intelligent fault diagnosis. However, processing these annotations with pre-trained natural language models such as BERT is problematic due to out-of-vocabulary (OOV) technical terms, resulting in inaccurate language embeddings. Here we investigate the effect of OOV technical terms on BERT and SentenceBERT embeddings by substituting technical terms with natural language descriptions. The embeddings were computed for each annotation in a pre-processed corpus, with and without substitution. The K-Means clustering score was calculated on sentence embeddings, and a Long Short-Term Memory (LSTM) network was trained on word embeddings with the objective to recreate the output from a keywordbased annotation classifier. The K-Means score for SentenceBERT annotation embeddings improved by 40% at seven clusters by technical language substitution, and the labelling capacity of the BERT-LSTM model was improved from 88.3 to 94.2%. 
These results indicate that the substitution of OOV technical terms can improve the representation accuracy of the embeddings of the pre-trained BERT and SentenceBERT models, and that pre-trained language models can be used to process technical language.","PeriodicalId":422825,"journal":{"name":"PHM Society European Conference","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"PHM Society European Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.36001/phme.2022.v7i1.3356","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Annotations in condition monitoring systems contain information regarding asset history and fault characteristics in the form of unstructured text that could, if unlocked, be used for intelligent fault diagnosis. However, processing these annotations with pre-trained natural language models such as BERT is problematic due to out-of-vocabulary (OOV) technical terms, which result in inaccurate language embeddings. Here we investigate the effect of OOV technical terms on BERT and SentenceBERT embeddings by substituting technical terms with natural language descriptions. The embeddings were computed for each annotation in a pre-processed corpus, with and without substitution. The K-Means clustering score was calculated on sentence embeddings, and a Long Short-Term Memory (LSTM) network was trained on word embeddings with the objective of recreating the output of a keyword-based annotation classifier. Through technical language substitution, the K-Means score for SentenceBERT annotation embeddings improved by 40% at seven clusters, and the labelling capacity of the BERT-LSTM model improved from 88.3% to 94.2%. These results indicate that substituting OOV technical terms can improve the representation accuracy of the embeddings of the pre-trained BERT and SentenceBERT models, and that pre-trained language models can be used to process technical language.
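The core pre-processing step described above — replacing OOV technical terms with natural-language descriptions before computing embeddings — can be sketched as a simple dictionary-based rewrite. This is a minimal illustration only: the substitution table below (bearing-fault abbreviations such as `BPFO` or `DE`) is a hypothetical example, not the authors' actual lexicon, and the paper's full pipeline additionally computes BERT/SentenceBERT embeddings on the substituted text.

```python
import re

# Hypothetical substitution table: OOV condition-monitoring abbreviations
# mapped to natural-language descriptions. Illustrative entries only.
SUBSTITUTIONS = {
    "BPFO": "outer race fault frequency",
    "BPFI": "inner race fault frequency",
    "DE": "drive end",
    "NDE": "non-drive end",
}

def substitute_technical_terms(annotation: str, table: dict = SUBSTITUTIONS) -> str:
    """Replace technical terms with natural-language descriptions so that a
    pre-trained model such as BERT tokenizes the annotation sensibly."""
    # Match whole words only; try longer terms first so e.g. "NDE" is not
    # partially consumed by the shorter "DE".
    terms = sorted(table, key=len, reverse=True)
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, terms)) + r")\b")
    return pattern.sub(lambda m: table[m.group(0)], annotation)

print(substitute_technical_terms("High vibration at DE bearing, BPFO peak visible"))
# -> High vibration at drive end bearing, outer race fault frequency peak visible
```

The substituted annotations would then be fed to the embedding model in place of the raw text, which is the comparison the paper evaluates (with vs. without substitution).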