Pub Date: 2024-05-31 | DOI: 10.1016/j.compind.2024.104110
Brad Hershowitz, Melinda Hodkiewicz, Tyler Bikaun, Michael Stewart, Wei Liu
Large numbers of maintenance Work Request Notification (WRN) records are created by industry as part of standard business workflows. These digital records hold invaluable insights crucial to best practice in asset management. Of particular interest are the cause–effect relations in the long-text WRN field. In this research we develop a two-stage deep learning pipeline to extract cause-and-effect triples and construct a causal graph database. A novel sentence-level noise removal method in the first stage filters out information extraneous to causal semantics. The second stage leverages a joint entity-and-relation extraction model to extract causal relations. To train the noise removal and causality extraction models we produced an annotated dataset of 1027 WRN records. The results for causality extraction as measured by F1-score are 83% and 92% for the identification of Cause and Effect entities respectively, and 78% for a correct causal relation between these entities. The pipeline is applied to a real-world industrial plant dataset of 98,000 WRN records to produce a graph database. This work provides a framework for technical personnel to query the causes of equipment failures, enabling answers to questions such as “what are the most common, costly, and recent causes of failures at my facility?”.
Title: Causal knowledge extraction from long text maintenance documents
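The closing question — “what are the most common causes of failures at my facility?” — amounts to an aggregation over extracted (cause, effect) triples. A minimal Python sketch of that kind of query (the triples and failure names are invented for illustration; the paper stores the triples in a graph database rather than in memory):

```python
from collections import Counter

# Hypothetical (cause, effect) triples as the pipeline might extract
# them from WRN records; contents are illustrative only.
triples = [
    ("seal failure", "oil leak"),
    ("seal failure", "oil leak"),
    ("bearing wear", "vibration"),
    ("corrosion", "pipe rupture"),
    ("seal failure", "pump trip"),
]

def most_common_causes(triples, n=3):
    """Rank causes by how many cause->effect records they appear in."""
    counts = Counter(cause for cause, _ in triples)
    return counts.most_common(n)

print(most_common_causes(triples))  # "seal failure" ranks first with 3
```

A graph database would answer the same question with an aggregation query over Cause nodes instead of an in-memory count.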
Pub Date: 2024-05-31 | DOI: 10.1016/j.compind.2024.104109
Feng Liang, Lun Zhao, Yu Ren, Sen Wang, Sandy To, Zeshan Abbas, Md Shafiqul Islam
Ultrasound welding technology is widely applied in the field of industrial manufacturing. In complex working conditions, various factors such as welding parameters, equipment conditions and operational techniques contribute to the formation of diverse and unpredictable line defects during the welding process. These defects exhibit characteristics such as varied shapes, random positions, and diverse types. Consequently, traditional defect surface detection methods face challenges in achieving efficient and accurate non-destructive testing. To achieve real-time detection of ultrasound welding defects efficiently, we have developed a lightweight network called the Lightweight Attention Detection Network (LAD-Net) based on an attention mechanism. Firstly, this work proposes a Deformable Convolution Feature Extraction Module (DCFE-Module) aimed at addressing the challenge of extracting features from welding defects characterized by variable shapes, random positions, and complex defect types. Additionally, to prevent the loss of critical defect features and enhance the network's capability for feature extraction and integration, this study designs a Lightweight Step Attention Mechanism Module (LSAM-Module) based on the proposed Step Attention Mechanism Convolution (SAM-Conv). Finally, by integrating the Efficient Multi-scale Attention (EMA) module and the Explicit Visual Center (EVC) module into the network, we address the issue of imbalance between global and local information processing, and promote the integration of key defect features. Qualitative and quantitative experimental results conducted on both ultrasound welding defect data and the publicly available NEU-DET dataset demonstrate that the proposed LAD-Net method achieves high performance. On our custom dataset, the F1 score and mAP@0.5 reached 0.954 and 94.2%, respectively. Furthermore, the method exhibits superior detection performance on the public dataset.
Title: LAD-Net: A lightweight welding defect surface non-destructive detection algorithm based on the attention mechanism
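The reported mAP@0.5 metric counts a predicted bounding box as a true positive only when its intersection-over-union (IoU) with a ground-truth box reaches 0.5. A self-contained IoU sketch (the boxes are illustrative, not from the paper's data):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes overlapping by half of each: IoU = 50 / 150 = 1/3,
# so this pair would NOT count as a match at the 0.5 threshold.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```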
Pub Date: 2024-05-28 | DOI: 10.1016/j.compind.2024.104107
H. Groefsema, N.R.T.P. van Beest
Organizations use business process management systems to automate processes that they use to perform tasks or interact with customers. However, several variants of the same business process may exist due to, e.g., mergers, customer-tailored services, diverse market segments, or distinct legislation across borders. As a result, reliable support for process variability has been identified as a necessity. In this article, we introduce the concept of declarative process families to support process variability and present a procedure to formally verify whether a business process model is part of a specified process family. The procedure makes it possible to identify parts of the process that violate the process family. By introducing the concept of process families, we allow organizations to deviate from their prescribed processes using normal process model notation and automatically verify whether such a deviation is allowed. To demonstrate the applicability of the approach, a simple example process is used that describes several variants of a car rental process which is required to adhere to several process families. Moreover, to support the proposed procedure, we present a tool that allows business processes, specified as Petri nets, to be verified against their declarative process families using the NuSMV2 model checker.
Title: Supporting business process variability through declarative process families
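The paper verifies Petri-net processes against declarative families with the NuSMV2 model checker over the full state space. As a much simpler illustration of the underlying idea, a single execution trace can be checked against a Declare-style response constraint (the car-rental event names below are invented):

```python
def satisfies_response(trace, a, b):
    """Declare 'response(a, b)': every occurrence of a is eventually
    followed by an occurrence of b later in the trace."""
    pending = False
    for event in trace:
        if event == a:
            pending = True   # an 'a' is now awaiting a matching 'b'
        elif event == b:
            pending = False  # the pending obligation is discharged
    return not pending

# A variant where the car is returned satisfies the constraint;
# a variant that ends after driving violates it.
print(satisfies_response(["book", "pickup", "drive", "return"], "pickup", "return"))
print(satisfies_response(["book", "pickup", "drive"], "pickup", "return"))
```

Model checking generalizes this from one trace to all reachable behaviors of the process model.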
Pub Date: 2024-05-27 | DOI: 10.1016/j.compind.2024.104106
M. Saqib Nawaz, M. Zohaib Nawaz, Philippe Fournier-Viger, José María Luna
Employee attrition and absenteeism are major problems that affect many industries and organizations, resulting in diminished productivity, elevated costs, and losses. These phenomena can be attributed to multiple factors that are difficult for human resources or management to anticipate. Therefore, this paper proposes a content-based methodology for the analysis and classification of employee attrition and absenteeism that can be used for talent analysis and management, a task that is traditionally carried out ex-post. The developed methodology, called E(3A)CSPM, is based on SPM (sequential pattern mining). In the methodology, four public datasets with diversified employee data are adopted, which are initially transformed into a suitable format. Then, SPM algorithms are applied to the transformed datasets to reveal recurring patterns and rules of features. The discovered patterns and rules offer information not only about the features that play a key role in employee attrition and absenteeism but also about their values. These frequent patterns of features are thereafter used to classify/predict employee attrition and absenteeism. Eight classifiers and multiple evaluation metrics are used in experiments. The performance of E(3A)CSPM is contrasted with state-of-the-art approaches for employee attrition and absenteeism, and the obtained findings reveal that E(3A)CSPM surpasses these approaches.
Title: Analysis and classification of employee attrition and absenteeism in industry: A sequential pattern mining-based methodology
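At the core of any SPM algorithm is the support count: how many sequences in the database contain a given pattern as an in-order (not necessarily contiguous) subsequence. A minimal sketch (the employee event names are invented; E(3A)CSPM itself is not reproduced here):

```python
def is_subsequence(pattern, sequence):
    """True if pattern occurs in order, not necessarily contiguously."""
    it = iter(sequence)
    # `item in it` advances the iterator, enforcing left-to-right order.
    return all(item in it for item in pattern)

def support(pattern, database):
    """Fraction of sequences in the database containing the pattern."""
    return sum(is_subsequence(pattern, s) for s in database) / len(database)

# Illustrative per-employee event histories (feature=value per period).
db = [
    ["overtime=high", "satisfaction=low", "attrition=yes"],
    ["overtime=high", "satisfaction=low", "attrition=yes"],
    ["overtime=low", "satisfaction=high", "attrition=no"],
]
print(support(["overtime=high", "attrition=yes"], db))  # 2 of 3 sequences
```

Patterns whose support exceeds a chosen threshold become the features fed to the downstream classifiers.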
Pub Date: 2024-05-10 | DOI: 10.1016/j.compind.2024.104105
Xingjun Dong, Changsheng Zhang, Junhao Wang, Yao Chen, Dawei Wang
This study presents a framework for the real-time detection of surface cracking in large-sized stamped metal parts. The framework aims to address the challenges of low detection efficiency and high error rates associated with manual cracking detection. Within this framework, a novel network, SNF-YOLOv8, is proposed to efficiently detect cracking while ensuring that the detection speed matches the production speed. The network incorporates a convolutional spatial-to-depth module to enhance the detection of small-sized cracking and mitigate surface interference during inspections. Furthermore, a visual self-attention mechanism is introduced to improve feature extraction. A combination of standard convolutional and depth-wise separable convolutional layers in the neck network enhances speed without compromising accuracy. Experimental validation conducted using a dataset from actual production lines, in collaboration with a multi-national corporation, demonstrates that SNF-YOLOv8 achieves an average precision of 85.2% at a detection speed of 164 frames per second. The framework achieves an accuracy rate of 98.8% in detecting large-sized cracking and 96.4% in detecting small-sized cracking, meeting the requirements for high-precision and real-time detection applications.
Title: Real-time detection of surface cracking defects for large-sized stamped parts
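The abstract does not specify the internals of the convolutional spatial-to-depth module, but the generic space-to-depth rearrangement it presumably builds on trades spatial resolution for channel depth without discarding any pixels, which is why it helps preserve small-defect detail. A NumPy sketch of that rearrangement (shapes are illustrative):

```python
import numpy as np

def space_to_depth(x, block=2):
    """Rearrange an (H, W, C) feature map into (H/block, W/block, C*block^2).

    Every pixel survives; each block*block spatial patch is stacked
    along the channel axis instead of being pooled away.
    """
    h, w, c = x.shape
    x = x.reshape(h // block, block, w // block, block, c)
    x = x.transpose(0, 2, 1, 3, 4)  # group the two block axes together
    return x.reshape(h // block, w // block, c * block * block)

x = np.arange(4 * 4 * 3).reshape(4, 4, 3)
print(space_to_depth(x).shape)  # (2, 2, 12): half the resolution, 4x the channels
```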
Pub Date: 2024-05-08 | DOI: 10.1016/j.compind.2024.104104
Arno Kasper, Martin Land, Will Bertrand, Jacob Wijngaard
To make manufacturing technology productive, manufacturers rely on a production planning and control (PPC) framework that plans ahead and monitors ongoing transformation processes. The design of an appropriate framework has far-reaching implications for the manufacturing organization as a whole. Yet, to date, there has been no unified guidance on key PPC design issues. This is strongly needed, as it has been argued that novel information processing technologies – as part of Industry 4.0 – result in PPC frameworks with decentralized structures. This conflicts with traditional works arguing for hierarchical or centralized structures. Therefore, we review the PPC design literature to create a comprehensive overview and summarize design proposals. Based on our review, we come to the intermediate conclusion that PPC frameworks continue to have a hierarchical structure, although decision-making is shifted more to decentralized levels compared to traditional hierarchies. Our analysis suggests that a decentralization shift has potentially strong and poorly understood implications, both from a decision-making and an organizational perspective.
Title: Designing production planning and control in smart manufacturing
Pub Date: 2024-05-07 | DOI: 10.1016/j.compind.2024.104100
Baekgyu Kwon, Junho Kim, Hyunoh Lee, Hyo-Won Suh, Duhwan Mun
In the manufacturing industry, unstructured documents such as design guidelines, regulatory documents, and failure cases are essential for product development. However, due to the large volume and frequent revisions of these documents, designers often find it difficult to keep up to date with the latest content. This study presents a method for analyzing the characteristics of unstructured design guidelines and automatically constructing a knowledgebase of design requirements from them. A knowledgebase is structured data that a computer can understand, and that can be used to assist designers in the design process. The knowledgebase is constructed using the sections of the document, including design variables and design requirements. The construction process involves pre-processing the documents, extracting information using natural language processing models, and generating a knowledgebase using predefined rules. A requirements knowledgebase was experimentally constructed from a standard document on the general requirements for the design of pressure vessels (American Society of Mechanical Engineers Section VIII Division 1) using the proposed method. In the experiment, the accuracy of information extraction was 86.3%, and the generation process took 3 min and 50 s. Thus, the proposed method eliminates the need for specialized training of deep learning models and can be applied to various design guideline documents with simple modifications to the design vocabulary and rules. The knowledgebase has applications in design validation, and is expected to enhance the efficiency of the product development process and contribute to reducing the overall development timeline.
Title: Construction of design requirements knowledgebase from unstructured design guidelines using natural language processing
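Rule-based generation of requirement records from guideline sentences can be sketched with a single regular expression; the pattern, sentence, and field names below are invented for illustration and are not the paper's actual rules:

```python
import re

# Hypothetical rule capturing "The <variable> shall not exceed <value> <unit>".
RULE = re.compile(
    r"The\s+(?P<variable>.+?)\s+shall not exceed\s+(?P<value>[\d.]+)\s*(?P<unit>\w+)"
)

sentence = "The design pressure shall not exceed 3000 kPa."
m = RULE.search(sentence)
# A structured requirement record a computer can validate designs against.
req = {
    "variable": m.group("variable"),
    "value": float(m.group("value")),
    "unit": m.group("unit"),
}
print(req)
```

A real system would combine many such rules with NLP models for sentences that do not follow a fixed template.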
Pub Date: 2024-05-01 | DOI: 10.1016/j.compind.2024.104099
Zhenya Wang, Qiusheng Luo, Hui Chen, Jingshan Zhao, Ligang Yao, Jun Zhang, Fulei Chu
As a crucial component supporting aero-engine functionality, effective fault diagnosis of bearings is essential to ensure the engine's reliability and sustained airworthiness. However, practical limitations prevail due to the scarcity of aero-engine bearing fault data, hampering the implementation of intelligent diagnosis techniques. This paper presents a specialized method for aero-engine bearing fault diagnosis under conditions of limited sample availability. Initially, the proposed method employs the refined composite multiscale phase entropy (RCMPhE) to extract entropy features capable of characterizing the transient signal dynamics of aero-engine bearings. Based on the signal amplitude information, the composite multiscale decomposition sequence is formulated, followed by the creation of scatter diagrams for each sub-sequence. These diagrams are partitioned into segments, enabling individualized probability distribution computation within each sector, culminating in refined entropy value operations. Thus, the RCMPhE addresses issues prevalent in existing entropy theories such as deviation and instability. Subsequently, the bonobo optimization support vector machine is introduced to establish a mapping correlation between entropy domain features and fault types, enhancing its fault identification capabilities in aero-engine bearings. Experimental validation conducted on drivetrain system bearing data, actual aero-engine bearing data, and actual aerospace bearing data demonstrates remarkable fault diagnosis accuracy rates of 99.83%, 100%, and 100%, respectively, with only 5 training samples per state. Additionally, compared with eight existing fault diagnosis methods, the proposed method improves recognition accuracy by up to 28.97%. This substantiates its effectiveness and potential in addressing small-sample limitations in aero-engine bearing fault diagnosis.
{"title":"A high-accuracy intelligent fault diagnosis method for aero-engine bearings with limited samples","authors":"Zhenya Wang , Qiusheng Luo , Hui Chen , Jingshan Zhao , Ligang Yao , Jun Zhang , Fulei Chu","doi":"10.1016/j.compind.2024.104099","DOIUrl":"https://doi.org/10.1016/j.compind.2024.104099","url":null,"abstract":"<div><p>As a crucial component supporting aero-engine functionality, effective fault diagnosis of bearings is essential to ensure the engine's reliability and sustained airworthiness. However, practical limitations prevail due to the scarcity of aero-engine bearing fault data, hampering the implementation of intelligent diagnosis techniques. This paper presents a specialized method for aero-engine bearing fault diagnosis under conditions of limited sample availability. Initially, the proposed method employs the refined composite multiscale phase entropy (RCMPhE) to extract entropy features capable of characterizing the transient signal dynamics of aero-engine bearings. Based on the signal amplitude information, the composite multiscale decomposition sequence is formulated, followed by the creation of scatter diagrams for each sub-sequence. These diagrams are partitioned into segments, enabling individualized probability distribution computation within each sector, culminating in refined entropy value operations. Thus, the RCMPhE addresses issues prevalent in existing entropy theories such as deviation and instability. Subsequently, the bonobo optimization support vector machine is introduced to establish a mapping correlation between entropy domain features and fault types, enhancing its fault identification capabilities in aero-engine bearings. 
Experimental validation on drivetrain system bearing data, actual aero-engine bearing data, and actual aerospace bearing data demonstrates remarkable fault diagnosis accuracy rates of 99.83 %, 100 %, and 100 %, respectively, with merely 5 training samples per state. Additionally, compared with eight existing fault diagnosis methods, the proposed method improves recognition accuracy by up to 28.97 %. This substantiates its effectiveness and potential in addressing small-sample limitations in aero-engine bearing fault diagnosis.</p></div>","PeriodicalId":55219,"journal":{"name":"Computers in Industry","volume":null,"pages":null},"PeriodicalIF":10.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140815979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
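The multiscale-entropy feature extraction described in the abstract — coarse-grain the signal at multiple scales and offsets, bin the scatter diagram of successive differences into angular sectors, pool the sector histograms across offsets, and take a normalised Shannon entropy — can be sketched as follows. This is a minimal illustration of the idea, not the authors' exact RCMPhE formulation; the sector count, scale range, and synthetic test signals are assumptions made for demonstration.

```python
import numpy as np

def coarse_grain(x, scale, offset=0):
    """Non-overlapping averages of length `scale`, starting at `offset`."""
    n = (len(x) - offset) // scale
    return x[offset:offset + n * scale].reshape(n, scale).mean(axis=1)

def sector_histogram(x, k):
    """Occupancy of k angular sectors of the (dx_i, dx_{i+1}) scatter
    diagram built from successive differences of x."""
    dx = np.diff(x)
    angles = np.arctan2(dx[1:], dx[:-1])
    hist, _ = np.histogram(angles, bins=k, range=(-np.pi, np.pi))
    return hist

def rcmphe(x, max_scale=5, k=8):
    """Refined-composite sketch: at each scale, pool the sector histograms
    over all coarse-graining offsets, then take the normalised Shannon
    entropy of the pooled distribution (one feature per scale)."""
    x = np.asarray(x, dtype=float)
    feats = []
    for s in range(1, max_scale + 1):
        pooled = sum(sector_histogram(coarse_grain(x, s, off), k)
                     for off in range(s)).astype(float)
        p = pooled / pooled.sum()
        p = p[p > 0]
        feats.append(-(p * np.log(p)).sum() / np.log(k))  # value in [0, 1]
    return np.array(feats)

rng = np.random.default_rng(0)
healthy = np.sin(0.1 * np.arange(2000)) + 0.1 * rng.standard_normal(2000)
faulty = healthy + 3.0 * (rng.random(2000) < 0.01)  # sparse fault impulses
features = np.vstack([rcmphe(healthy), rcmphe(faulty)])  # 2 x max_scale
```

In the paper's pipeline, feature vectors like these rows would then be fed to the bonobo-optimized support vector machine for few-shot fault classification.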
Pub Date : 2024-04-30DOI: 10.1016/j.compind.2024.104101
Chang Su , Yong Han , Xin Tang , Qi Jiang , Tao Wang , Qingchen He
The Knowledge-Based Digital Twin System is a digital twin system built on a knowledge graph and aimed at serving complex manufacturing processes. It adopts a knowledge-driven modeling approach to construct a digital twin model of the manufacturing process, enabling precise description, management, prediction, and optimization of the process. At the core of the system lies a comprehensive knowledge graph that encapsulates all pertinent information about the manufacturing process, supporting dynamic modeling and iteration through knowledge matching and inference across the knowledge, geometry, and decision models. This approach not only ensures consistency across models but also addresses the challenge of coupling multi-source heterogeneous information, creating a holistic and precise information model. As the manufacturing process deepens and knowledge accumulates, the model's understanding of the process progressively improves, promoting self-evolution and continuous optimization. The knowledge-decision-geometry model acts as the ontological layer within the digital twin framework, providing a foundational conceptual framework for the digital twin of the manufacturing process. Validated on an aero-engine blade production line in a factory, the results demonstrate that the knowledge model, as the core driver, enables continuous self-updating of the geometric model for an accurate depiction of the entire manufacturing process, while the decision model provides knowledge-based insights for decision-makers. The system not only effectively controls, predicts, and optimizes the manufacturing process but also continually evolves as the process advances. This research offers a new perspective on realizing the digital twin for the manufacturing process, providing solid theoretical support through a knowledge-driven approach.
{"title":"Knowledge-based digital twin system: Using a knowlege-driven approach for manufacturing process modeling","authors":"Chang Su , Yong Han , Xin Tang , Qi Jiang , Tao Wang , Qingchen He","doi":"10.1016/j.compind.2024.104101","DOIUrl":"https://doi.org/10.1016/j.compind.2024.104101","url":null,"abstract":"<div><p>The Knowledge-Based Digital Twin System is a digital twin system developed on the foundation of a knowledge graph, aimed at serving the complex manufacturing process. This system embraces a knowledge-driven modeling approach, aspiring to construct a digital twin model for the manufacturing process, thereby enabling precise description, management, prediction, and optimization of the process. The core of this system lies in the comprehensive knowledge graph that encapsulates all pertinent information about the manufacturing process, facilitating dynamic modeling and iteration through knowledge matching and inference within the knowledge, geometry, and decision model. This approach not only ensures consistency across models but also addresses the challenge of coupling multi-source heterogeneous information, creating a holistic and precise information model. As the manufacturing process deepens and knowledge accumulates, the model's understanding of the process progressively enhances, promoting self-evolution and continuous optimization. The developed knowledge-decision-geometry model acts as the ontological layer within the digital twin framework, laying a foundational conceptual framework for the digital twin of the manufacturing process. Validated on an aero-engine blade production line in a factory, the results demonstrate that the knowledge model, as the core driver, enables continuous self-updating of the geometric model for an accurate depiction of the entire manufacturing process, while the decision model provides deep insights for decision-makers based on knowledge. 
The system not only effectively controls, predicts, and optimizes the manufacturing process but also continually evolves as the process advances. This research offers a new perspective on the realization of the digital twin for the manufacturing process, providing solid theoretical support with a knowledge-driven approach.</p></div>","PeriodicalId":55219,"journal":{"name":"Computers in Industry","volume":null,"pages":null},"PeriodicalIF":10.0,"publicationDate":"2024-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140815980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
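The idea of a knowledge graph acting as the ontological layer that links geometry and decision models can be illustrated with a tiny in-memory triple store. This is a hedged sketch only: the entity names (`blade_line`, `milling_station`, etc.) and relation types are hypothetical examples, not the ontology used in the paper.

```python
from collections import defaultdict

class TripleStore:
    """Minimal in-memory (subject, predicate, object) store standing in
    for the knowledge layer of a knowledge-driven digital twin."""

    def __init__(self):
        self._by_subject = defaultdict(list)

    def add(self, subject, predicate, obj):
        self._by_subject[subject].append((predicate, obj))

    def query(self, subject, predicate=None):
        """Objects related to `subject`, optionally filtered by predicate."""
        return [o for p, o in self._by_subject[subject]
                if predicate is None or p == predicate]

kg = TripleStore()
# Hypothetical fragment of a blade-line twin: knowledge facts tie sensors
# (decision inputs) and CAD assets (geometry model) to one process station.
kg.add("blade_line", "has_station", "milling_station")
kg.add("milling_station", "monitored_by", "spindle_load_sensor")
kg.add("milling_station", "geometry_model", "milling_station.step")
kg.add("spindle_load_sensor", "triggers_decision", "tool_wear_check")

sensors = kg.query("milling_station", "monitored_by")  # ['spindle_load_sensor']
```

A production system would use a real graph database and inference engine, but the pattern is the same: every geometry asset and decision rule is reachable by traversing the knowledge graph, which is what keeps the three models consistent.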
Pub Date : 2024-04-26DOI: 10.1016/j.compind.2024.104097
Mehmet Yavuz Yağci, Muhammed Ali Aydin
Anomaly detection with high accuracy, high recall, and a low error rate is critical for the safe and uninterrupted operation of cyber-physical systems. However, detecting anomalies in the multimodal time series obtained from cyber-physical systems is challenging. Although deep learning methods achieve very good results in anomaly detection, they fail to detect anomalies in accordance with the requirements of cyber-physical systems. Graph-based methods, in turn, lose information when converting time series into graphs: the fixed window size used for the conversion discards spatio-temporal correlations. In this study, we propose an Event Aware Graph Attention Network (EA-GAT), which detects anomalies through event-based analysis of cyber-physical systems. EA-GAT detects and tracks the sensors in a cyber-physical system and the correlations between them, modeling the relationships between components during marked event periods as a graph. Anomalies in the system are then found through the resulting graph models. Experiments show that EA-GAT is more effective than other deep learning methods on the SWaT, WADI, and MSL datasets used in various studies. The event-based dynamic approach is significantly superior to a fixed-size sliding-window technique that uses the same learning structure. In addition, anomaly analysis is used to identify the attack target and the affected components. Furthermore, a sliding subsequence module divides the data into subgroups that are processed in parallel.
{"title":"EA-GAT: Event aware graph attention network on cyber-physical systems","authors":"Mehmet Yavuz Yağci, Muhammed Ali Aydin","doi":"10.1016/j.compind.2024.104097","DOIUrl":"https://doi.org/10.1016/j.compind.2024.104097","url":null,"abstract":"<div><p>Anomaly detection with high accuracy, recall, and low error rate is critical for the safe and uninterrupted operation of cyber-physical systems. However, detecting anomalies in multimodal time series with different modalities obtained from cyber-physical systems is challenging. Although deep learning methods show very good results in anomaly detection, they fail to detect anomalies according to the requirements of cyber-physical systems. In the use of graph-based methods, data loss occurs during the conversion of time series into graphs. The fixed window size used to transform time series into graphs causes a loss of spatio-temporal correlations. In this study, we propose an Event Aware Graph Attention Network (EA-GAT), which can detect anomalies by event-based cyber-physical system analysis. EA-GAT detects and tracks the sensors in cyber-physical systems and the correlations between them. The system analyzes and models the relationship between the components during the marked periods as a graph. Anomalies in the system are found through the created graph models. Experiments show that the EA-GAT technique is more effective than other deep learning methods on SWaT, WADI, MSL datasets used in various studies. The event-based dynamic approach is significantly superior to the fixed-size sliding window technique, which uses the same learning structure. In addition, anomaly analysis is used to identify the attack target and the affected components. 
Furthermore, a sliding subsequence module divides the data into subgroups that are processed in parallel.</p></div>","PeriodicalId":55219,"journal":{"name":"Computers in Industry","volume":null,"pages":null},"PeriodicalIF":10.0,"publicationDate":"2024-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140646957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
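The graph attention mechanism that architectures like EA-GAT build on can be sketched in a few lines of NumPy. This is not the authors' EA-GAT (event detection, windowing, and training are omitted); it is a minimal single-head graph attention layer, where each row of the feature matrix stands for one sensor and the adjacency matrix encodes which sensors attend to which. All shapes and weights here are arbitrary assumptions for illustration.

```python
import numpy as np

def gat_layer(H, A, W, a, slope=0.2):
    """Single-head graph attention layer.
    H: (N, F) node features; A: (N, N) adjacency (nonzero = edge, include
    self-loops); W: (F, Fp) projection; a: (2*Fp,) attention vector."""
    Z = H @ W                                    # (N, Fp) projected features
    Fp = Z.shape[1]
    # attention logits e_ij = LeakyReLU(a^T [z_i || z_j]), via broadcasting
    e = (Z @ a[:Fp])[:, None] + (Z @ a[Fp:])[None, :]
    e = np.where(e > 0, e, slope * e)            # LeakyReLU
    e = np.where(A != 0, e, -1e9)                # mask non-edges before softmax
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)        # softmax over neighbours
    return att @ Z, att                          # aggregated features, weights

rng = np.random.default_rng(1)
n_sensors, n_feat = 4, 3
H = rng.standard_normal((n_sensors, n_feat))     # one row per sensor window
A = np.ones((n_sensors, n_sensors))              # fully connected + self-loops
W = rng.standard_normal((n_feat, n_feat))
a = rng.standard_normal(2 * n_feat)
H_out, att = gat_layer(H, A, W, a)
```

In an anomaly detector, the aggregated features `H_out` would feed a forecasting or reconstruction head, and large deviations between predicted and observed sensor values flag anomalies; the learned attention weights `att` also indicate which sensors drove each alert, supporting the attack-target analysis described above.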