Pub Date: 2024-05-08 | DOI: 10.1016/j.compind.2024.104104
Arno Kasper, Martin Land, Will Bertrand, Jacob Wijngaard
To make manufacturing technology productive, manufacturers rely on a production planning and control (PPC) framework that plans ahead and monitors ongoing transformation processes. The design of an appropriate framework has far-reaching implications for the manufacturing organization as a whole. Yet, to date, there has been no unified guidance on key PPC design issues. Such guidance is strongly needed, as it has been argued that novel information-processing technologies – part of Industry 4.0 – result in PPC frameworks with decentralized structures. This conflicts with traditional works that argue for hierarchical or centralized structures. We therefore review the PPC design literature to create a comprehensive overview and summarize design proposals. Based on our review, we come to the intermediate conclusion that PPC frameworks continue to have a hierarchical structure, although decision-making shifts to more decentralized levels than in traditional hierarchies. Our analysis suggests that this decentralization shift has potentially strong and poorly understood implications, from both a decision-making and an organizational perspective.
Title: Designing production planning and control in smart manufacturing
Computers in Industry, Volume 159, Article 104104
Pub Date: 2024-05-07 | DOI: 10.1016/j.compind.2024.104100
Baekgyu Kwon, Junho Kim, Hyunoh Lee, Hyo-Won Suh, Duhwan Mun
In the manufacturing industry, unstructured documents such as design guidelines, regulatory documents, and failure cases are essential for product development. However, because these documents are voluminous and frequently revised, designers often struggle to keep up with the latest content. This study presents a method for analyzing the characteristics of unstructured design guidelines and automatically constructing a knowledgebase of design requirements from them. A knowledgebase is structured data that a computer can understand and that can assist designers during the design process. The knowledgebase is constructed from the sections of the document, including design variables and design requirements. The construction process involves pre-processing the documents, extracting information using natural language processing models, and generating a knowledgebase using predefined rules. A requirements knowledgebase was experimentally constructed with the proposed method from a standard document on the general requirements for the design of pressure vessels (ASME Section VIII, Division 1). In the experiment, the accuracy of information extraction was 86.3 %, and the generation process took 3 min and 50 s. The proposed method eliminates the need for specialized training of deep learning models and can be applied to various design guideline documents with simple modifications to the design vocabulary and rules. The knowledgebase has applications in design validation and is expected to enhance the efficiency of the product development process and help shorten the overall development timeline.
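The rule-based generation step can be illustrated with a minimal sketch. The vocabulary, the single regular-expression rule, and the sample sentence below are hypothetical stand-ins for the paper's NLP models and predefined rules, not the actual ASME processing pipeline:

```python
import re

# Hypothetical design vocabulary: surface forms -> canonical variable names.
VOCAB = {"shell thickness": "t_shell", "design pressure": "P_design"}

# One hypothetical rule: "<variable> shall not be less than <value> <unit>".
RULE = re.compile(
    r"(?P<var>shell thickness|design pressure)\s+shall not be less than\s+"
    r"(?P<value>[\d.]+)\s*(?P<unit>mm|kPa)",
    re.IGNORECASE,
)

def extract_requirements(text):
    """Return knowledge-base entries (variable, operator, value, unit) found in text."""
    entries = []
    for m in RULE.finditer(text):
        entries.append({
            "variable": VOCAB[m.group("var").lower()],
            "operator": ">=",
            "value": float(m.group("value")),
            "unit": m.group("unit"),
        })
    return entries

doc = "The shell thickness shall not be less than 1.5 mm after forming."
print(extract_requirements(doc))
```

Adapting such a pipeline to another guideline document would, as the abstract notes, amount to swapping the vocabulary and the rule patterns rather than retraining a model.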
Title: Construction of design requirements knowledgebase from unstructured design guidelines using natural language processing
Computers in Industry, Volume 159, Article 104100
Pub Date: 2024-05-01 | DOI: 10.1016/j.compind.2024.104099
Zhenya Wang, Qiusheng Luo, Hui Chen, Jingshan Zhao, Ligang Yao, Jun Zhang, Fulei Chu
As crucial components supporting aero-engine functionality, bearings require effective fault diagnosis to ensure the engine's reliability and sustained airworthiness. However, the scarcity of aero-engine bearing fault data hampers the implementation of intelligent diagnosis techniques in practice. This paper presents a specialized method for aero-engine bearing fault diagnosis under limited sample availability. First, the proposed method employs the refined composite multiscale phase entropy (RCMPhE) to extract entropy features that characterize the transient signal dynamics of aero-engine bearings. Based on the signal amplitude information, a composite multiscale decomposition sequence is formulated, followed by the creation of scatter diagrams for each sub-sequence. These diagrams are partitioned into sectors, an individual probability distribution is computed within each sector, and refined entropy values are obtained. The RCMPhE thereby addresses issues prevalent in existing entropy theories, such as bias and instability. Subsequently, a bonobo-optimizer-tuned support vector machine is introduced to establish a mapping between entropy-domain features and fault types, enhancing fault identification in aero-engine bearings. Experimental validation on drivetrain system bearing data, actual aero-engine bearing data, and actual aerospace bearing data demonstrates fault diagnosis accuracy rates of 99.83 %, 100 %, and 100 %, respectively, with merely 5 training samples per state. Compared to eight existing fault diagnosis methods, the proposed method improves recognition accuracy by up to 28.97 %. This substantiates its effectiveness and potential in addressing small-sample limitations in aero-engine bearing fault diagnosis.
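The overall shape of the pipeline — one entropy value per scale forming a feature vector, fed to a small-sample classifier — can be sketched as below. The sketch substitutes a plain coarse-grained Shannon entropy for RCMPhE and a nearest-centroid rule for the bonobo-optimized SVM; the synthetic impulse-fault signals and all names are illustrative assumptions:

```python
import math
from collections import Counter

def coarse_grain(signal, scale):
    """Average non-overlapping windows of length `scale` (the multiscale step)."""
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def shannon_entropy(seq, bins=8):
    """Entropy of the amplitude histogram (a crude stand-in for phase entropy)."""
    lo, hi = min(seq), max(seq)
    width = (hi - lo) / bins or 1.0
    counts = Counter(min(int((x - lo) / width), bins - 1) for x in seq)
    n = len(seq)
    return -sum(c / n * math.log(c / n) for c in counts.values())

def entropy_features(signal, max_scale=3):
    """One entropy value per scale -> feature vector for the classifier."""
    return [shannon_entropy(coarse_grain(signal, s)) for s in range(1, max_scale + 1)]

def nearest_centroid(features, labels, query):
    """Few-shot classifier: pick the label whose mean feature vector is closest."""
    centroids = {}
    for lab in set(labels):
        vecs = [f for f, l in zip(features, labels) if l == lab]
        centroids[lab] = [sum(col) / len(vecs) for col in zip(*vecs)]
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(query, centroids[lab])))

# Synthetic data: smooth "healthy" signals vs. "faulty" signals with impulses,
# 5 training samples per state as in the paper's setting.
healthy = [[math.sin(i / 5) for i in range(120)] for _ in range(5)]
faulty = [[math.sin(i / 5) + (6 if i % 20 == k else 0) for i in range(120)]
          for k in range(5)]
feats = [entropy_features(s) for s in healthy + faulty]
labels = ["healthy"] * 5 + ["faulty"] * 5
query = entropy_features([math.sin(i / 5) + (6 if i % 20 == 7 else 0)
                          for i in range(120)])
print(nearest_centroid(feats, labels, query))  # prints: faulty
```

The impulses concentrate the amplitude histogram into few bins, so faulty signals have markedly lower entropy at every scale, which is what makes the few-shot separation easy in this toy setting.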
Title: A high-accuracy intelligent fault diagnosis method for aero-engine bearings with limited samples
Computers in Industry, Volume 159, Article 104099
Pub Date: 2024-04-30 | DOI: 10.1016/j.compind.2024.104101
Chang Su, Yong Han, Xin Tang, Qi Jiang, Tao Wang, Qingchen He
The Knowledge-Based Digital Twin System is a digital twin system built on a knowledge graph and aimed at serving complex manufacturing processes. It adopts a knowledge-driven modeling approach to construct a digital twin model of the manufacturing process, enabling precise description, management, prediction, and optimization. The core of the system is a comprehensive knowledge graph that encapsulates all pertinent information about the manufacturing process, facilitating dynamic modeling and iteration through knowledge matching and inference within the knowledge, geometry, and decision models. This approach ensures consistency across models and addresses the challenge of coupling multi-source heterogeneous information, creating a holistic and precise information model. As the manufacturing process deepens and knowledge accumulates, the model's understanding of the process progressively improves, promoting self-evolution and continuous optimization. The developed knowledge-decision-geometry model acts as the ontological layer within the digital twin framework, laying a foundational conceptual framework for the digital twin of the manufacturing process. The system was validated on an aero-engine blade production line in a factory. The results demonstrate that the knowledge model, as the core driver, enables continuous self-updating of the geometric model for an accurate depiction of the entire manufacturing process, while the decision model provides knowledge-based insights for decision-makers. The system not only effectively controls, predicts, and optimizes the manufacturing process but also continually evolves as the process advances. This research offers a new perspective on realizing the digital twin for the manufacturing process, providing solid theoretical support through a knowledge-driven approach.
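As a toy illustration of the knowledge-matching-and-inference idea, a manufacturing knowledge graph can be stored as triples and queried with wildcards; the entities and relations below are invented examples, not the paper's actual ontology:

```python
# Toy triple store; entities and relations are invented examples.
triples = {
    ("blade_A", "is_a", "aero_engine_blade"),
    ("blade_A", "made_of", "titanium_alloy"),
    ("titanium_alloy", "requires", "thermal_coating"),
}

def query(store, subj=None, pred=None, obj=None):
    """Pattern matching over triples; None acts as a wildcard."""
    return [(s, p, o) for (s, p, o) in store
            if subj in (None, s) and pred in (None, p) and obj in (None, o)]

def infer_requirements(store, part):
    """Knowledge matching: propagate `requires` along `made_of` links."""
    needs = set()
    for _, _, material in query(store, subj=part, pred="made_of"):
        for _, _, requirement in query(store, subj=material, pred="requires"):
            needs.add(requirement)
    return needs

print(infer_requirements(triples, "blade_A"))  # {'thermal_coating'}
```

In a real system the inference rules would be far richer, but the pattern — match sub-graphs, derive new facts, and feed them to the geometry and decision models — is the same.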
Title: Knowledge-based digital twin system: Using a knowledge-driven approach for manufacturing process modeling
Computers in Industry, Volume 159, Article 104101
Pub Date: 2024-04-30 | DOI: 10.1016/j.compind.2024.104102
Zhonglin Zuo, Hao Zhang, Zheng Li, Li Ma, Shan Liang, Tong Liu, Mehmet Mercangöz
Detecting leaks in natural gas gathering pipelines is paramount for the safe and reliable operation of the oil and gas industry. Owing to the lack of leak data and the changing nature of leak features, semi-supervised leak detection methods that learn a health model from normal data have attracted much attention. However, these approaches usually treat normal samples as a single class of health data, which may fail to fit the reality of unlabeled multi-class non-leak data under variable operating conditions. In addition, existing semi-supervised methods often suffer from insufficient representation learning, as they employ step-by-step training or rely on the low-level reconstruction of autoencoders. To address these two key challenges, this paper proposes a novel end-to-end self-supervised leak detection method: self-supervised multi-sphere support vector data description. Specifically, it uses the proposed multi-sphere support vector data description to model unlabeled multi-class non-leak data, and a self-supervised learning strategy to boost the representation learning of the end-to-end semi-supervised model. Moreover, the categories of unlabeled multi-class non-leak data are learned in an unsupervised way through alternating feature clustering and pseudo-label-based classification. A robust leak score calculation method is also designed to improve performance. Finally, experimental results on field data collected from pipelines demonstrate the effectiveness of the proposed method.
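The multi-sphere idea — one closed boundary per (unlabeled) operating mode, with anything outside every boundary flagged as a possible leak — can be sketched with a k-means-style stand-in for the trained SVDD spheres. The centers, radii, and two-mode data below are illustrative, not the paper's model:

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def fit_spheres(points, centers_init, iters=10, quantile=0.9):
    """k-means-style centers with a quantile radius: a crude stand-in for
    trained SVDD spheres, one per operating mode."""
    centers = [list(c) for c in centers_init]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)), key=lambda j: dist(p, centers[j]))
            groups[j].append(p)
        for j, g in enumerate(groups):
            if g:
                centers[j] = [sum(x) / len(g) for x in zip(*g)]
    radii = []
    for c, g in zip(centers, groups):
        ds = sorted(dist(p, c) for p in g)
        radii.append(ds[int(quantile * (len(ds) - 1))] if ds else 0.0)
    return centers, radii

def leak_score(x, centers, radii):
    """Positive score means x lies outside every sphere (possible leak)."""
    return min(dist(x, c) - r for c, r in zip(centers, radii))

# Two normal operating modes clustered around (0, 0) and (5, 5).
normal = ([[dx, dy] for dx in (-0.2, 0.0, 0.2) for dy in (-0.2, 0.0, 0.2)] +
          [[5 + dx, 5 + dy] for dx in (-0.2, 0.0, 0.2) for dy in (-0.2, 0.0, 0.2)])
centers, radii = fit_spheres(normal, [[0.0, 0.0], [5.0, 5.0]])
print(leak_score([0.1, 0.1], centers, radii) <= 0)  # inside a sphere: True
```

A single-sphere model would have to stretch one boundary across both modes, admitting the empty region between them; the per-mode spheres avoid exactly that failure.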
Title: A self-supervised leak detection method for natural gas gathering pipelines considering unlabeled multi-class non-leak data
Computers in Industry, Volume 159, Article 104102
Pub Date: 2024-04-27 | DOI: 10.1016/j.compind.2024.104103
Feifeng Jiang, Jun Ma, Christopher John Webster, Weiwei Chen, Wei Wang
Accurate land valuation is crucial for sustainable urban development, influencing pivotal decisions on resource allocation and land-use strategies. Most existing studies, which primarily use point-based modeling approaches, face challenges in granularity, generalizability, and capturing spatial effects, limiting their effectiveness for high-granularity regional land valuation. This study therefore proposes the LVGAN (land value generative adversarial networks) framework for regional land value estimation. The LVGAN model reframes land valuation as an image generation task, employing deep generative techniques combined with attention mechanisms to forecast high-resolution relative value distributions for informed decision-making. In a case study of New York City (NYC), the LVGAN model outperforms typical deep generative methods, with MAE (mean absolute error) and MSE (mean squared error) reduced by an average of 36.58 % and 59.28 %, respectively. The model exhibits varied performance across the five NYC boroughs and diverse urban contexts, excelling in Manhattan, where value variability is limited, and in areas characterized by residential zoning and high density. It identifies influential factors such as the road network, built density, and land use in determining NYC land values. By enhancing data-driven decision-making at early design stages, the LVGAN model can promote stakeholder engagement and strategic planning for sustainable and well-structured urban environments.
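Since the reported gains are raster-level MAE/MSE between generated and actual value maps, the evaluation itself is easy to make concrete; the 2×2 "value maps" below are made-up numbers purely to show how grid-wise errors are computed:

```python
def grid_mae(pred, actual):
    """Mean absolute error over a raster of relative land values."""
    cells = [(p, a) for rp, ra in zip(pred, actual) for p, a in zip(rp, ra)]
    return sum(abs(p - a) for p, a in cells) / len(cells)

def grid_mse(pred, actual):
    """Mean squared error over the same raster."""
    cells = [(p, a) for rp, ra in zip(pred, actual) for p, a in zip(rp, ra)]
    return sum((p - a) ** 2 for p, a in cells) / len(cells)

# Hypothetical 2x2 relative-value maps (generated vs. ground truth).
pred = [[0.2, 0.4], [0.6, 0.8]]
actual = [[0.3, 0.4], [0.5, 1.0]]
print(round(grid_mae(pred, actual), 6))  # 0.1
print(round(grid_mse(pred, actual), 6))  # 0.015
```

MSE's squaring penalizes the large corner error (0.2) more heavily than MAE does, which is why the paper's two metrics can improve by different percentages.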
Title: Estimating and explaining regional land value distribution using attention-enhanced deep generative models
Computers in Industry, Volume 159, Article 104103
Pub Date: 2024-04-26 | DOI: 10.1016/j.compind.2024.104097
Mehmet Yavuz Yağci, Muhammed Ali Aydin
Anomaly detection with high accuracy, high recall, and a low error rate is critical for the safe and uninterrupted operation of cyber-physical systems. However, detecting anomalies in multimodal time series obtained from cyber-physical systems is challenging. Although deep learning methods show very good results in anomaly detection, they often fail to detect anomalies according to the requirements of cyber-physical systems. Graph-based methods lose data when converting time series into graphs: the fixed window size used for the transformation causes a loss of spatio-temporal correlations. In this study, we propose an Event Aware Graph Attention Network (EA-GAT), which detects anomalies through event-based cyber-physical system analysis. EA-GAT detects and tracks the sensors in cyber-physical systems and the correlations between them. The system analyzes and models the relationships between components during the marked periods as a graph, and anomalies are found through the created graph models. Experiments show that EA-GAT is more effective than other deep learning methods on the SWaT, WADI, and MSL datasets used in various studies. The event-based dynamic approach is significantly superior to the fixed-size sliding-window technique with the same learning structure. In addition, anomaly analysis is used to identify the attack target and the affected components, while the sliding subsequence module divides the data into subgroups that are processed simultaneously.
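The attention step at the heart of a graph attention network can be shown in isolation: for one node, score each neighbor by applying an attention vector to the concatenated feature pair, pass the score through a LeakyReLU, and softmax-normalize across neighbors. The feature vectors and attention vector below are fixed toy values, not learned parameters:

```python
import math

def attention_weights(h_node, h_neighbors, a, leaky_slope=0.2):
    """GAT-style attention for one node: e_ij = LeakyReLU(a . [h_i || h_j]),
    normalized across neighbors with a softmax."""
    def leaky_relu(x):
        return x if x > 0 else leaky_slope * x
    scores = [leaky_relu(sum(w * x for w, x in zip(a, h_node + h_j)))
              for h_j in h_neighbors]
    m = max(scores)                       # shift for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

h_i = [1.0, 0.0]                          # toy feature vector of the center node
neighbors = [[1.0, 0.0], [0.0, 1.0]]      # two neighbor feature vectors
a = [0.5, 0.5, 1.0, 0.0]                  # fixed stand-in for the learned vector
w = attention_weights(h_i, neighbors, a)
print(w)  # first neighbor gets the larger weight (~0.73)
```

In EA-GAT these weights would be recomputed per event window, so which sensors attend to which changes as operating conditions change.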
Title: EA-GAT: Event aware graph attention network on cyber-physical systems
Computers in Industry, Volume 159, Article 104097
Pub Date: 2024-04-20 | DOI: 10.1016/j.compind.2024.104098
Weisheng Lu, Liupengfei Wu
Protecting intellectual property rights (IPR) in the architecture, engineering, and construction (AEC) industry is a long-standing challenge. In collaborative digital environments, where multiple professionals use digital platforms such as building information modelling to collaborate on a building design, this challenge has intensified. This research harnesses blockchain functions such as consensus mechanisms, distributed broadcasting ledgers, cryptographic algorithms, and non-fungible tokens to propose a blockchain-based framework for protecting building design IPR in the AEC industry. Adopting a design science approach, the framework is developed into a system that is implemented, illustrated, and evaluated in a case study. The system uses non-fungible tokens to tokenize building design IPR and deploys blockchain's decentralized consensus mechanisms, distributed ledgers, and cryptographic algorithms to safeguard the IPR and its transactions. The prototype system proves feasible, with satisfactory performance in enhancing the efficiency of IPR registration and protection, reducing cost, improving information transparency, reinforcing immutability, and preventing non-valuable registrations. Researchers and practitioners are encouraged to extend the framework to applications such as real-life design IPR protection and design management.
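The registration idea — hash the design file, wrap the hash in a token-like record, and chain the records so later tampering is detectable — can be sketched in a few lines. This single-writer toy omits consensus, distribution, and real NFT standards entirely; the names and data are illustrative:

```python
import hashlib
import json

def fingerprint(design_bytes):
    """Content hash identifying a design file (the payload we 'tokenize')."""
    return hashlib.sha256(design_bytes).hexdigest()

class Ledger:
    """Append-only, hash-chained registry: each record commits to the previous
    one, so any later modification is detectable on verification."""

    def __init__(self):
        self.blocks = []

    def register(self, owner, design_bytes):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        record = {"owner": owner, "design": fingerprint(design_bytes), "prev": prev}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.blocks.append(record)
        return record["hash"]

    def verify_chain(self):
        """Recompute every hash and link; tampering anywhere breaks the chain."""
        prev = "0" * 64
        for b in self.blocks:
            body = {k: b[k] for k in ("owner", "design", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if b["prev"] != prev or b["hash"] != expected:
                return False
            prev = b["hash"]
        return True

reg = Ledger()
reg.register("architect_A", b"floorplan v1")
reg.register("engineer_B", b"hvac layout v1")
print(reg.verify_chain())  # True
```

The immutability claim in the abstract reduces to exactly this property: changing any registered record invalidates every hash downstream of it.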
{"title":"A blockchain-based deployment framework for protecting building design intellectual property rights in collaborative digital environments","authors":"Weisheng Lu , Liupengfei Wu","doi":"10.1016/j.compind.2024.104098","DOIUrl":"https://doi.org/10.1016/j.compind.2024.104098","url":null,"abstract":"<div><p>Protecting intellectual property rights (IPR) in the architecture, engineering, and construction (AEC) industry is a long-standing challenge. In the collaborative digital environments, where multiple professionals use digital platforms such as building information modelling to collaborate on a building design, this challenge has intensified. This research harnesses the functions of blockchain technology, such as consensus mechanisms, distributed broadcasting ledgers, cryptographic algorithms, and non-fungible tokens, to propose a blockchain-based framework to protect building design IPR in the AEC industry. Adopting a design science approach, a framework is proposed and then further developed into a system that is implemented, illustrated, and evaluated in a case study. The system uses non-fungible tokens to tokenize building design IPR and deploys blockchain’s decentralized consensus mechanisms, distributed ledgers, and cryptographic algorithms to safeguard the IPR and its transactions. This prototype system is found feasible with satisfactory performance in enhancing the efficiency of IPR registration and protection, reducing cost, improving information transparency, reinforcing immutability, and preventing non-valuable registrations. 
Researchers and practitioners are encouraged to develop the framework for different applications such as real-life design IPR protection and design management.</p></div>","PeriodicalId":55219,"journal":{"name":"Computers in Industry","volume":"159 ","pages":"Article 104098"},"PeriodicalIF":10.0,"publicationDate":"2024-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140620832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
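The core of the tokenization idea above — anchoring a design file to an NFT-style record — can be illustrated with a minimal, self-contained sketch. This is not the authors' system: `mint_design_token` and `verify_design` are hypothetical names, and only the hash-anchoring step is shown; in a real deployment the token record would live behind the blockchain's consensus, ledger, and cryptographic layers described in the abstract.

```python
import hashlib

def mint_design_token(design_bytes: bytes, owner: str, token_id: int) -> dict:
    """Create a minimal NFT-style record for a building design file.

    Only the design's SHA-256 digest is stored, so the (potentially large)
    BIM model itself never has to be placed on-chain.
    """
    return {
        "token_id": token_id,
        "owner": owner,
        "design_hash": hashlib.sha256(design_bytes).hexdigest(),
    }

def verify_design(token: dict, candidate_bytes: bytes) -> bool:
    """Check whether a candidate file matches the tokenized design."""
    return hashlib.sha256(candidate_bytes).hexdigest() == token["design_hash"]

# Hypothetical usage: register a design, then detect tampering.
design = b"IFC model bytes ..."
token = mint_design_token(design, owner="architect-A", token_id=1)
assert verify_design(token, design)
assert not verify_design(token, b"tampered model")
```

Because only the digest is recorded, any later dispute reduces to re-hashing the claimed original and comparing it against the immutable on-chain record.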
Pub Date : 2024-04-05DOI: 10.1016/j.compind.2024.104094
Miguel Rodríguez-García , Iria González-Romero , Ángel Ortiz-Bas , José Carlos Prado-Prado
The purpose of this study is twofold: investigating how omnichannel (OC) retailers manage e-fulfillment costs and establishing how these costs relate to the evolution of OC retailers' e-fulfillment strategies. Experts in e-fulfillment from 34 European OC retailers across various sectors participated in an exploratory survey. The study's results reveal that although e-fulfillment costs significantly influence the evolution of e-fulfillment strategies, many OC retailers fulfilling online orders from retail stores or traditional warehouses remain unaware of the actual costs of e-fulfillment. Activities other than picking and last-mile delivery, such as inbound logistics and storage, are poorly controlled. Furthermore, complex cost metrics such as cost-to-serve—the total cost associated with delivering a specific order to a specific customer—are predominantly found among OC retailers operating fulfillment centers (FCs) in their e-fulfillment distribution networks. This underscores the need for all OC retailers to accurately assess e-fulfillment costs at multiple levels, which will be crucial for optimizing order preparation, tailoring pricing strategies, and achieving profitability, especially when operating hybrid e-fulfillment strategies where online orders are prepared in multiple facilities. As the largest study on e-fulfillment costs to date, it highlights the importance of advancing e-fulfillment cost management systems among OC retailers and adopting an approach that encompasses all e-fulfillment activities. Future research should delve into the key challenges of developing these systems, considering the operational realities of each OC retailer.
{"title":"E-fulfillment cost management in omnichannel retailing: An exploratory study","authors":"Miguel Rodríguez-García , Iria González-Romero , Ángel Ortiz-Bas , José Carlos Prado-Prado","doi":"10.1016/j.compind.2024.104094","DOIUrl":"https://doi.org/10.1016/j.compind.2024.104094","url":null,"abstract":"<div><p>The purpose of this study is twofold: investigating how omnichannel (OC) retailers manage e-fulfillment costs and establishing how these costs relate to the evolution of OC retailers' e-fulfillment strategies. Experts in e-fulfillment from 34 European OC retailers across various sectors participated in an exploratory survey. The study's results reveal that although e-fulfillment costs significantly influence the evolution of e-fulfillment strategies, many OC retailers fulfilling online orders from retail stores or traditional warehouses remain unaware of the actual costs of e-fulfillment. Activities other than picking and last-mile delivery, such as inbound logistics and storage, are poorly controlled. Furthermore, complex cost metrics such as <em>cost-to-serve</em>—the total cost associated with delivering a specific order to a specific customer—are predominantly found among OC retailers operating fulfillment centers (FCs) in their e-fulfillment distribution networks. This underscores the need for all OC retailers to accurately assess e-fulfillment costs at multiple levels, which will be crucial for optimizing order preparation, tailoring pricing strategies, and achieving profitability, especially when operating hybrid e-fulfillment strategies where online orders are prepared in multiple facilities. As the largest study on e-fulfillment costs to date, it highlights the importance of advancing e-fulfillment cost management systems among OC retailers and adopting an approach that encompasses all e-fulfillment activities. 
Future research should delve into the key challenges of developing these systems, considering the operational realities of each OC retailer.</p></div>","PeriodicalId":55219,"journal":{"name":"Computers in Industry","volume":"159 ","pages":"Article 104094"},"PeriodicalIF":10.0,"publicationDate":"2024-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140348172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
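The cost-to-serve metric defined above is, at its core, a per-order sum over every e-fulfillment activity, not just picking and last-mile delivery. A minimal sketch, with hypothetical activity names and placeholder cost figures (not data from the study):

```python
def cost_to_serve(order_activity_costs: dict) -> float:
    """Total e-fulfillment cost attributable to one specific order:
    the sum of every activity cost allocated to it."""
    return sum(order_activity_costs.values())

# Placeholder per-order cost allocation (EUR), including the inbound
# logistics and storage activities the survey found are poorly controlled.
order = {
    "inbound_logistics": 0.80,
    "storage": 0.45,
    "picking": 1.20,
    "packing": 0.60,
    "last_mile_delivery": 4.10,
}
total = cost_to_serve(order)
```

Even this trivial formulation makes the abstract's point concrete: a retailer tracking only picking and delivery would see 5.30 of the 7.15 total and misjudge order-level profitability.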
In the realm of smart manufacturing, Augmented Reality (AR) technology has gained increasing attention among researchers and manufacturers due to its practicality and adaptability. For this reason, it has been widely embraced in various industrial fields, especially for helping operators assemble products. Despite its widespread adoption, there is a debate in the research community about how effective AR is for improving user performance in assembly tasks, particularly when using handheld devices. These disparities can be attributed to differences in experimental approaches, such as the frequent use of qualitative methods, the inclusion of non-representative users, and the limited number of comprehensive case studies.
In response to this, the paper delves into the benefits of AR applications, with a specific focus on measuring user performance and the cognitive workload perceived by users during assembly activities. To this end, an AR assembly guidance tool was developed to assist users during assembly tasks; it runs on a mobile device, specifically a tablet, for freedom of movement and high portability. Experimentation involved the assembly of a comprehensive case study and a diverse user group, allowing a comparison between representative users and experienced industrial operators. The results were promising, indicating that AR technology effectively enhances user performance during assembly-guided activities compared to conventional methods, particularly when users are unfamiliar with the task at hand. This study brings valuable insights by addressing previous research limitations and providing strong evidence of AR's positive impact on user performance in real-world assembly scenarios.
{"title":"Assessing user performance in augmented reality assembly guidance for industry 4.0 operators","authors":"Emanuele Marino , Loris Barbieri , Fabio Bruno , Maurizio Muzzupappa","doi":"10.1016/j.compind.2024.104085","DOIUrl":"https://doi.org/10.1016/j.compind.2024.104085","url":null,"abstract":"<div><p>In the realm of smart manufacturing, Augmented Reality (AR) technology has gained increasing attention among researchers and manufacturers due to its practicality and adaptability. For this reason, it has been widely embraced in various industrial fields, especially for helping operators assemble products. Despite its widespread adoption, there is a debate in the research community about how effective AR is for improving user performance in assembly tasks, particularly when using handheld devices. These disparities can be attributed to differences in experimental approaches, such as the frequent use of qualitative methods, the inclusion of non-representative users, and the limited number of comprehensive case studies.</p><p>In response to this, the paper delved into the benefits of AR applications, with a specific focus on measuring user performance and the cognitive workload perceived by users during assembly activities. To this end, an AR assembly guidance tool has been developed to assist users during assembly tasks, running on a mobile device, specifically a tablet, for freedom of movement and high portability. Experimentation involved the assembly of a comprehensive case study and a diverse user group, allowing the comparison representative users and experienced industrial operators. The results were promising, indicating that AR technology effectively enhances user performance during assembly-guided activities compared to conventional methods, particularly when users are unfamiliar with the task at hand. 
This study brings valuable insights by addressing previous research limitations and providing strong evidence of AR's positive impact on user performance in real-world assembly scenarios.</p></div>","PeriodicalId":55219,"journal":{"name":"Computers in Industry","volume":"157 ","pages":"Article 104085"},"PeriodicalIF":10.0,"publicationDate":"2024-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0166361524000137/pdfft?md5=a5c06905a4d05c32628ea8e5f4e4cf78&pid=1-s2.0-S0166361524000137-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140320451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
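The quantitative side of the comparison described above reduces to contrasting completion-time statistics between an AR-guided group and a conventionally instructed one. A minimal sketch with placeholder timings (not the study's data; `summarize` and the improvement ratio are illustrative constructs):

```python
from statistics import mean, stdev

def summarize(times_s: list) -> dict:
    """Basic descriptive statistics for a group's completion times (seconds)."""
    return {"n": len(times_s), "mean_s": mean(times_s), "sd_s": stdev(times_s)}

# Placeholder completion times for each guidance condition (seconds).
ar_times = [212, 198, 240, 205]       # tablet-based AR guidance
paper_times = [310, 285, 330, 298]    # conventional paper instructions

ar = summarize(ar_times)
paper = summarize(paper_times)

# Relative reduction in mean completion time under AR guidance.
improvement = 1 - ar["mean_s"] / paper["mean_s"]
```

A real evaluation like the one in the paper would pair such performance measures with a perceived-workload instrument and an appropriate significance test rather than a raw ratio.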