Pub Date: 2024-04-20 | DOI: 10.1016/j.compind.2024.104098
Weisheng Lu, Liupengfei Wu
Protecting intellectual property rights (IPR) in the architecture, engineering, and construction (AEC) industry is a long-standing challenge. In collaborative digital environments, where multiple professionals use digital platforms such as building information modelling to collaborate on a building design, this challenge has intensified. This research harnesses functions of blockchain technology, such as consensus mechanisms, distributed broadcasting ledgers, cryptographic algorithms, and non-fungible tokens, to propose a blockchain-based framework for protecting building design IPR in the AEC industry. Adopting a design science approach, a framework is proposed and then developed into a system that is implemented, illustrated, and evaluated in a case study. The system uses non-fungible tokens to tokenize building design IPR and deploys blockchain's decentralized consensus mechanisms, distributed ledgers, and cryptographic algorithms to safeguard the IPR and its transactions. The prototype system proved feasible, with satisfactory performance in enhancing the efficiency of IPR registration and protection, reducing cost, improving information transparency, reinforcing immutability, and preventing non-valuable registrations. Researchers and practitioners are encouraged to extend the framework to applications such as real-life design IPR protection and design management.
Title: A blockchain-based deployment framework for protecting building design intellectual property rights in collaborative digital environments
Journal: Computers in Industry
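The NFT-style tokenization and hash-chained immutability described in the abstract above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' system: `DesignIPRLedger`, `mint`, and the SHA-256 fingerprinting are hypothetical names standing in for the registration and immutability mechanisms the framework deploys on an actual blockchain.

```python
import hashlib
import json

def fingerprint(design_bytes: bytes) -> str:
    """Content hash that uniquely identifies a building design file."""
    return hashlib.sha256(design_bytes).hexdigest()

class DesignIPRLedger:
    """Append-only ledger: each record chains to the previous record's hash,
    so tampering with any earlier registration breaks verification."""

    def __init__(self):
        self.records = []

    def mint(self, design_bytes: bytes, owner: str) -> dict:
        prev_hash = self.records[-1]["record_hash"] if self.records else "0" * 64
        record = {
            "token_id": len(self.records),            # NFT-style unique id
            "design_hash": fingerprint(design_bytes),
            "owner": owner,
            "prev_hash": prev_hash,
        }
        # Hash the record body itself so any later edit is detectable.
        record["record_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.records.append(record)
        return record

    def verify_chain(self) -> bool:
        for i, rec in enumerate(self.records):
            expected_prev = self.records[i - 1]["record_hash"] if i else "0" * 64
            if rec["prev_hash"] != expected_prev:
                return False
            body = {k: v for k, v in rec.items() if k != "record_hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != rec["record_hash"]:
                return False
        return True
```

A real deployment would replace the local list with distributed ledger storage and consensus; the sketch only shows why a tampered registration is detectable.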
Pub Date: 2024-03-26 | DOI: 10.1016/j.compind.2024.104086
Yaqing Xu, Yassine Qamsane, Saumuy Puchala, Annette Januszczak, Dawn M. Tilbury, Kira Barton
Efficient performance monitoring in production systems holds paramount importance, as it enables organizations to optimize their manufacturing processes, enhance productivity, and maintain a competitive edge in the market. Machine-level and system-level performance monitoring are typically investigated independently, whereas an integrated approach that considers both levels can offer valuable insights and benefits. This paper introduces a data-driven approach for evaluating and improving the performance of production lines by monitoring both individual machines and their interactions as a system. The approach begins with a rigorous methodology for classifying machine states recorded by the Manufacturing Execution System (MES) into finer-grained substates, enabling a comprehensive analysis of machine cycle-time variability. These substates then serve as the foundation for performance monitoring models at both levels: probabilistic automata at the machine level and logistic regression at the system level. The system-level model predicts a Flow metric, enabling the prediction of abnormal behaviors and deviations from production targets. This data-driven approach serves as a foundational ingredient of a system-level digital twin, designed to give production lines insights that enable proactive measures to optimize overall manufacturing efficiency. Through an industrial test case from the automotive industry, the results demonstrate the capability of the approach to monitor performance, capture errors within confidence intervals, and establish predictive cause-and-effect relationships between machines within the production system.
Title: A data-driven approach toward a machine- and system-level performance monitoring digital twin for production lines
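The abstract says the system-level model is a logistic regression that predicts a Flow metric. A minimal sketch of that idea, with synthetic substate features and a hand-rolled gradient-descent trainer; the paper's actual features, targets, and training pipeline are not reproduced here.

```python
import numpy as np

def train_flow_monitor(X, y, lr=0.5, epochs=500):
    """Tiny logistic-regression trainer. X holds per-machine substate
    features (e.g. mean cycle time per substate); y flags cycles where
    the line missed its Flow/production target."""
    X = np.hstack([np.ones((len(X), 1)), X])   # bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted miss probability
        w -= lr * X.T @ (p - y) / len(y)       # gradient of log-loss
    return w

def predict_abnormal(w, X):
    """1 = predicted deviation from the production target."""
    X = np.hstack([np.ones((len(X), 1)), X])
    return (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
```

In practice one would use a library implementation with regularization and probability calibration; the sketch only shows the shape of the model.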
Pub Date: 2024-03-22 | DOI: 10.1016/j.compind.2024.104082
Saika Wong, Chunmo Zheng, Xing Su, Yinqiu Tang
Contract review is an essential step in construction projects to prevent potential losses. However, the current methods for reviewing construction contracts lack effectiveness and reliability, leading to time-consuming and error-prone processes. Although large language models (LLMs) have shown promise in revolutionizing natural language processing (NLP) tasks, they struggle with domain-specific knowledge and addressing specialized issues. This paper presents a novel approach that leverages LLMs with construction contract knowledge to emulate the process of contract review by human experts. Our tuning-free approach incorporates construction contract domain knowledge to enhance language models for identifying construction contract risks. The use of natural language when building the domain knowledge base facilitates practical implementation. We evaluated our method on real construction contracts and achieved solid performance. Additionally, we investigated how LLMs employ logical thinking during the task and provided insights and recommendations for future research.
Title: Construction contract risk identification based on knowledge-augmented language models
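The tuning-free, knowledge-augmented review described above can be illustrated with a toy retrieve-then-prompt step: relevant natural-language knowledge entries are pulled into the prompt alongside the clause under review. The knowledge base contents, the word-overlap ranking, and the prompt wording are all hypothetical; the paper's own retrieval mechanism may differ.

```python
def retrieve_knowledge(clause: str, knowledge_base: list[str], top_k: int = 2) -> list[str]:
    """Rank knowledge entries by word overlap with the clause, a crude
    stand-in for natural-language knowledge retrieval."""
    clause_words = set(clause.lower().split())
    scored = sorted(knowledge_base,
                    key=lambda k: len(clause_words & set(k.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_review_prompt(clause: str, knowledge_base: list[str]) -> str:
    """Assemble a review prompt that injects domain knowledge, so a
    general-purpose LLM can emulate an expert reviewer without fine-tuning."""
    context = "\n".join("- " + k for k in retrieve_knowledge(clause, knowledge_base))
    return (f"You are a construction contract reviewer.\n"
            f"Relevant domain knowledge:\n{context}\n"
            f"Clause: {clause}\n"
            f"Identify any risks in this clause.")
```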
Maintenance records in Computerized Maintenance Management Systems (CMMS) contain valuable human knowledge on maintenance activities. These records primarily consist of noisy, unstructured texts written by maintenance experts. The technical nature of the text, combined with a concise writing style and frequent use of abbreviations, makes it difficult to process through classical Natural Language Processing (NLP) pipelines. Because of these complexities, the text must be normalized before being fed to classical machine learning models. Developing such custom normalization pipelines requires manual labor and domain expertise, and is a time-consuming process that demands constant updates. This leads to under-utilization of a valuable source of information for maintenance decision support. This study proposes a Technical Language Processing (TLP) pipeline for semantic search in industrial text using BERT (Bidirectional Encoder Representations from Transformers), a transformer-based language model. The proposed pipeline can automatically process complex unstructured industrial text and does not require custom preprocessing. To adapt the BERT model to the target domain, three unsupervised domain fine-tuning techniques are compared to identify the best strategy for leveraging the tacit knowledge available in industrial text. The approach is validated on two sets of industrial maintenance records from the mining and aviation domains. Semantic search results are analyzed from quantitative and qualitative perspectives. The analysis shows that TSDAE, a state-of-the-art unsupervised domain fine-tuning technique, can efficiently identify intricate patterns in industrial text regardless of the associated complexities. The BERT model fine-tuned with TSDAE achieved a precision of 0.94 and 0.97 on the mining excavator and aviation maintenance records, respectively.
Title: Unlocking maintenance insights in industrial text through semantic search
Authors: Syed Meesam Raza Naqvi, Mohammad Ghufran, Christophe Varnier, Jean-Marc Nicod, Kamran Javed, Noureddine Zerhouni
Pub Date: 2024-03-21 | DOI: 10.1016/j.compind.2024.104083
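At its core, the semantic search step ranks maintenance records by embedding similarity. A sketch with toy vectors; a real pipeline would obtain the embeddings from the TSDAE-fine-tuned BERT encoder rather than hard-code them.

```python
import numpy as np

def cosine_rank(query_vec, record_vecs):
    """Rank records by cosine similarity to the query embedding.
    Returns (indices sorted best-first, similarity scores)."""
    q = query_vec / np.linalg.norm(query_vec)
    R = record_vecs / np.linalg.norm(record_vecs, axis=1, keepdims=True)
    sims = R @ q
    return np.argsort(-sims), sims
```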
Pub Date: 2023-12-21 | DOI: 10.1016/j.compind.2023.104063
Mehrzad Shahinmoghadam, Samira Ebrahimi Kahou, Ali Motamedi
While the adoption of open Building Information Modeling (open BIM) standards continues to grow, the inherent complexity and multifaceted nature of the built asset lifecycle data present a critical bottleneck for effective information retrieval. To address this challenge, the research community has started to investigate advanced natural language-based search for building information models. However, the accelerated pace of advancements in deep learning-based natural language processing research has introduced a complex landscape for domain-specific applications, making it challenging to navigate through various design choices that accommodate an effective balance between prediction accuracy and the accompanying computational costs. This study focuses on the semantic tagging of user queries, which is a cardinal task for the identification and classification of references related to building entities and their specific descriptors. To foster adaptability across various applications and disciplines, a semantic annotation scheme is introduced that is firmly rooted in the Industry Foundation Classes (IFC) schema. By taking a comparative approach, we conducted a series of experiments to identify the strengths and weaknesses of traditional and emergent deep learning architectures for the task at hand. Our findings underscore the critical importance of domain-specific and context-dependent embedding learning for the effective extraction of building entities and their respective descriptions.
Title: Neural semantic tagging for natural language-based search in building information models: Implications for practice
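Semantic tagging of user queries is a sequence-labelling task; a standard post-processing step is decoding BIO tags into entity spans. The sketch below assumes a BIO encoding whose labels echo the paper's IFC-rooted scheme; the label names 'Entity' and 'Descriptor' are illustrative, not the paper's exact tag set.

```python
def decode_bio(tokens, tags):
    """Collapse BIO tags (B-X begins a span, I-X continues it, O is outside)
    into (label, text) pairs."""
    spans, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = (tag[2:], [tok])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(tok)
        else:                      # O tag, or an I- tag with no open span
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(label, " ".join(words)) for label, words in spans]
```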
Pub Date: 2023-12-21 | DOI: 10.1016/j.compind.2023.104064
August Asheim Birkeland, Marius Udnæs
The current practice for creating as-built geometric Digital Twins (gDTs) of industrial facilities is both labour-intensive and error-prone. In aged industries it typically involves manually crafting a CAD or BIM model from a point cloud collected using terrestrial laser scanners. Recent advances in deep learning (DL) offer the possibility of automating semantic and instance segmentation of point clouds, contributing to a more efficient modelling process. DL networks, however, are data-intensive, requiring large domain-specific datasets. Producing labelled point cloud datasets involves considerable manual labour, and no open-source instance segmentation dataset exists for the industrial domain. We propose a semi-automatic workflow that leverages object descriptions contained in existing gDTs to efficiently create semantic- and instance-labelled point cloud datasets. To demonstrate the efficiency of the workflow, we apply it to two separate areas of a gas processing plant covering a total of 40,000 m². We record the effort needed to process one of the areas, labelling a total of 260 million points in 70 h. When benchmarking on a state-of-the-art 3D instance segmentation network, the additional data from the 70-hour effort raises mIoU from 24.4% to 44.4%, AP from 19.7% to 52.5%, and RC from 45.9% to 76.7%.
Title: Semi-automated dataset creation for semantic and instance segmentation of industrial point clouds
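The benchmark numbers above are reported as mIoU, AP, and RC. For reference, mean intersection-over-union on per-point class labels can be computed as follows (a simplified, list-based sketch; real evaluations run on millions of points with array operations).

```python
def miou(pred, gt, num_classes):
    """Mean IoU over classes: for each class, intersection / union of the
    point sets labelled with that class in prediction and ground truth."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:                      # skip classes absent from both
            ious.append(inter / union)
    return sum(ious) / len(ious)
```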
In today's hyper-connected factories, data gathering and prediction models are key to maintaining both productivity and part quality. This paper presents a software platform that monitors and detects outliers in an industrial manufacturing process using scalable software tools. The platform collects data from a machine, processes it, and displays visualizations in a dashboard along with the results. A statistical method is used to detect outliers in the manufacturing process. The performance of the platform is assessed in two ways: first by monitoring a five-axis milling machine, and second through simulated tests. The former tests prove the suitability of the platform and reveal the issues that arise in a real environment; the latter prove the scalability of the platform under higher data-processing loads.
Title: Implementation of a scalable platform for real-time monitoring of machine tools
Authors: Endika Tapia, Unai Lopez-Novoa, Leonardo Sastoque-Pinilla, Luis Norberto López-de-Lacalle
Pub Date: 2023-12-19 | DOI: 10.1016/j.compind.2023.104065
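The platform's statistical outlier check can be approximated by a simple k-sigma rule. The paper does not specify its exact method, so the rule and threshold below are illustrative.

```python
def detect_outliers(samples, k=3.0):
    """Flag values more than k standard deviations from the mean, the kind
    of simple statistical rule applicable to streaming machine-tool data."""
    n = len(samples)
    mean = sum(samples) / n
    std = (sum((x - mean) ** 2 for x in samples) / n) ** 0.5
    return [x for x in samples if std and abs(x - mean) > k * std]
```

A production system would compute the statistics over a rolling window so the baseline tracks slow drift in the process.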
Pub Date: 2023-12-15 | DOI: 10.1016/j.compind.2023.104060
Rui Qin, Zhifen Zhang, Jing Huang, Zhengyao Du, Xianwen Xiang, Jie Wang, Guangrui Wen, Weifeng He
Data-driven methods based on acoustic emission signals are gradually becoming a hot topic in the field of laser shock peening quality monitoring. Although some existing deep learning methods provide excellent monitoring accuracy and speed, they lack physical interpretability, and the opacity of their decisions poses a great challenge to their credibility. The weak interpretability of deep learning models has become the biggest obstacle to deploying artificial intelligence projects. To overcome this drawback, this paper proposes a monitoring strategy that achieves physical interpretability in feature extraction, selection, and classification, jointly generating monitoring results and explanations. Specifically, it is an end-to-end model that combines convolutional neural units, gated recurrent units, and attention mechanisms. First, a physically meaningful, autonomously learnable wavelet analysis is performed on the acoustic emission signal. Then, the contribution of features is distinguished based on the correlation of information in different frequency bands, and redundant and noisy features are removed. Finally, an interpretable evaluation of processing quality is realized using gated recurrent units with attention mechanisms. The effectiveness and reliability of the proposed method are confirmed on experimental data from laser shock peening at both small and large gradient energies, in comparison with state-of-the-art feature methods and CNN- and LSTM-based models. Most importantly, the physical interpretation of acoustic emission signals during processing can increase the credibility of decisions and provide a basic logic for on-site judgments by professionals.
Title: A novel physically interpretable end-to-end network for stress monitoring in laser shock peening
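The attention mechanism paired with the gated recurrent units is presumably the standard scaled dot-product form; the abstract does not give the exact formulation, so the NumPy sketch below shows the generic computation, whose weights are what make the time steps' contributions inspectable.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V: the weighting that lets a recurrent model
    focus on informative time steps, and whose weights can be read out as an
    explanation of the decision."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights
```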
Pub Date: 2023-12-12 | DOI: 10.1016/j.compind.2023.104062
Yan Liu, Zuhua Xu, Kai Wang, Jun Zhao, Chunyue Song, Zhijiang Shao
Incipient faults exhibit low-amplitude, indistinct fault features and are easily masked by unknown disturbances, leading to unsatisfactory detection performance. In this paper, an incipient fault detection enhancement method based on siamese spatial-temporal multi-mode feature contrast learning is proposed. First, we design a novel siamese spatial-temporal multi-mode convolutional neural network consisting of two weight-shared spatial-temporal multi-mode convolutional neural networks and a feature discrimination measure operator, which extract the spatial-temporal multi-mode features of two datasets and measure the distance between them. Then, an incipient fault feature discrimination intensification training strategy is developed to enhance detection performance. Specifically, this strategy maximizes the feature distance between normal data and incipient fault data, as well as between different incipient faults, while minimizing the feature distance within normal data and within the same incipient fault. Moreover, given the slow, long-term evolution of incipient faults, a multi-head self-attention Long Short-Term Memory network is presented as a dynamic feature learning model that selectively learns the long-term temporal dependencies of incipient faults according to attention weights from the scaled dot-product multi-head self-attention mechanism. Finally, the performance of the proposed method is demonstrated on two industrial cases.
{"title":"Incipient fault detection enhancement based on spatial-temporal multi-mode siamese feature contrast learning for industrial dynamic process","authors":"Yan Liu , Zuhua Xu , Kai Wang , Jun Zhao , Chunyue Song , Zhijiang Shao","doi":"10.1016/j.compind.2023.104062","DOIUrl":"https://doi.org/10.1016/j.compind.2023.104062","url":null,"abstract":"<div><p><span>Incipient faults are characterized by low-amplitude, unclear fault features, which are susceptible to unknown disturbances, leading to unsatisfactory detection performance. In this paper, an incipient fault detection enhancement method based on siamese spatial-temporal multi-mode feature contrast learning method is proposed. Firstly, we design a novel siamese spatial-temporal multi-mode convolutional neural network model consisting of two weight-shared spatial-temporal multi-mode convolutional neural networks and one feature discrimination measure operator, which are then used to extract the spatial-temporal multi-mode features of two datasets and to measure the distance between them. Then, an incipient fault feature discrimination intensification training strategy is developed to enhance the incipient fault detection performance. Specifically, this strategy intends to maximize the feature distance between the normal data and the incipient fault data, as well as that between different incipient faults, while minimizing the feature distance between the normal data and between the same incipient faults. Moreover, due to the long-term slow change characteristic of the incipient fault, the multi-head self-attention Long Short-Term Memory is presented as a dynamic </span>feature learning model to further lopsidedly learn the incipient fault temporal long-term dependency according to attention weights utilizing the scaled dot-product multi-head self-attention mechanism. 
Finally, the performance of the proposed method is demonstrated on two industrial cases.</p></div>","PeriodicalId":55219,"journal":{"name":"Computers in Industry","volume":null,"pages":null},"PeriodicalIF":10.0,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138570388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
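The training strategy described above (pull same-condition feature pairs together, push normal-vs-fault and fault-vs-fault pairs apart) matches the shape of a standard margin-based contrastive loss. The sketch below is a generic illustration of that idea, not the paper's actual loss; the margin value and feature vectors are assumptions.

```python
import numpy as np

def contrastive_loss(fa, fb, same_class, margin=1.0):
    """Margin-based contrastive loss on one pair of feature vectors.

    Same condition (normal-normal, or the same incipient fault): the
    distance itself is penalised, pulling the pair together. Different
    conditions (normal vs. fault, or two different faults): the pair is
    penalised only while closer than `margin`, pushing them apart.
    """
    d = np.linalg.norm(fa - fb)
    if same_class:
        return 0.5 * d**2
    return 0.5 * max(0.0, margin - d) ** 2

# Identical features, same condition: nothing to pull -- zero loss.
l_same = contrastive_loss(np.array([0.1, 0.2]), np.array([0.1, 0.2]), same_class=True)
# Identical features, different conditions: maximal push-apart penalty.
l_diff = contrastive_loss(np.array([0.1, 0.2]), np.array([0.1, 0.2]), same_class=False)
```

Once two different-condition features are separated by at least the margin, their loss drops to zero, so training capacity is spent on the hard-to-separate pairs, which is exactly where incipient faults live.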
Pub Date : 2023-12-08DOI: 10.1016/j.compind.2023.104059
David Jones , James Gopsill , Ric Real , Chris Snider , Harry Felton , Lee Kent , Mark Goudswaard , Owen Freeman Gebler , Ben Hicks
The management of data related to prototypes created during new product development is seen as a beneficial yet challenging activity. While attempts have been made to understand prototypes and their context across a range of use cases, there is a gap in understanding the data that captures a prototype’s context and physical form. This paper highlights this gap and addresses it through the development of a new taxonomy. Drawing on existing literature, a body of domain-specific terms, and the combined experience of the nine authors, a robust and systematic taxonomy development process was followed. The taxonomy is evaluated through a comparison with pre-existing taxonomies and an illustrative example. It is presented in full, with a description of each of its 53 dimensions, and is intended as the foundation upon which methods and processes can be developed to improve the capture, curation, and integration of physical prototypes in new product development.
{"title":"The prototype taxonomised: Towards the capture, curation, and integration of physical models in new product development","authors":"David Jones , James Gopsill , Ric Real , Chris Snider , Harry Felton , Lee Kent , Mark Goudswaard , Owen Freeman Gebler , Ben Hicks","doi":"10.1016/j.compind.2023.104059","DOIUrl":"https://doi.org/10.1016/j.compind.2023.104059","url":null,"abstract":"<div><p>The management of data related to prototypes created during new product development is seen as a beneficial yet challenging activity. While attempts have been made to understand prototypes and their context in a range of use-cases, there is a gap in the understanding of the data that captures a prototype’s context and physical form. This paper highlights this gap, and addresses it through the development of a new taxonomy. Using existing literature, a body of domain-specific terms, and the combined experience of the nine authors, a robust and systematic taxonomy development process was followed. A comparison of the developed and pre-existing taxonomies, and an illustrative example, is used for evaluation. 
The taxonomy is fully presented along with a description of each of the 53 dimensions, and it is intended to be the foundation upon which methods and processes can be developed to improve the capture, curation and integration of physical prototypes in new product development.</p></div>","PeriodicalId":55219,"journal":{"name":"Computers in Industry","volume":null,"pages":null},"PeriodicalIF":10.0,"publicationDate":"2023-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0166361523002099/pdfft?md5=34af196fc22742e8f6c7e68f29556a3c&pid=1-s2.0-S0166361523002099-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138550124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
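A taxonomy like this is, in data terms, a fixed schema of dimensions that each prototype record fills in. The sketch below shows that idea only; the dimension names (`medium`, `fidelity`, `purpose`) are hypothetical placeholders, not the paper's actual 53 dimensions.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class PrototypeRecord:
    """Illustrative record capturing a few taxonomy-style dimensions of a
    prototype. The dimension names here are assumptions for the example,
    not the published taxonomy."""
    name: str
    medium: str                 # e.g. "physical", "digital", "mixed"
    fidelity: str               # e.g. "low", "medium", "high"
    purpose: str                # e.g. "communication", "evaluation"
    tags: list = field(default_factory=list)

# Capturing one prototype against the schema...
p = PrototypeRecord("bracket-v3", "physical", "low", "evaluation", ["3d-print"])
# ...and flattening it for curation in a shared prototype database.
record = asdict(p)
```

Encoding every dimension explicitly is what makes later curation and integration tractable: records from different teams become comparable row-by-row instead of living in ad hoc folder names.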