A product Digital Twin (DT) is a digital representation of a physical asset that is updated synchronously throughout the asset's lifecycle. Over the past decade, a rich and varied literature on new technologies and approaches for implementing product DTs has emerged. This literature has been reviewed many times, but the focus and scope of DT reviews now vary so widely that it is difficult to assess our collective understanding and knowledge of DT theory. We address this issue by conducting a systematic umbrella review of product DT reviews, classifying and analysing review themes to understand the strengths and shortcomings of the product DT literature. Our analysis reveals a key shortcoming: there is currently little evidence of, or insight into, DT value. Understanding how DTs provide value to an organisation is of paramount importance, as it determines which elements of the DT truly affect value, as well as the mechanisms by which that value is created. We conclude this work by presenting a five-item research agenda to address these shortcomings and develop our understanding of DT value. Since DTs can be complex and expensive to implement, research and practice should focus on those elements of the DT that provide value to the organisation.
This research focuses on trading card quality inspection, where defects have a significant effect on both inspection and grading. The present inspection procedure is subjective, which means the grading is sensitive to mistakes made by individual inspectors. To address this, we propose a deep neural network based on transfer learning for automated defect detection, with a particular emphasis on corner grading, a crucial factor in overall card grading. This paper extends our prior study, in which we achieved an accuracy of 78% using the VGG-net and InceptionV3 models. In this study, our emphasis is on the DenseNet model, where convolutional layers are used to extract features and regularisation methods, including batch normalisation and spatial dropout, are incorporated for better defect classification. Our approach outperformed our prior findings, achieving an 83% mean accuracy in defect classification on a real dataset provided by our industry partner. Additionally, this study investigates various calibration approaches to fine-tune model confidence. To make the model more reliable, a rule-based approach is incorporated to classify defects based on confidence scores. Finally, a human-in-the-loop system is integrated to inspect misclassified samples. Our results indicate that the model's performance and confidence can be expected to improve further when a large number of misclassified samples, along with human feedback, are used to retrain the network.
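As a concrete illustration of the kind of pipeline described above (an illustrative sketch in Python/Keras, not the authors' code), the fragment below builds a DenseNet121 transfer-learning classifier with batch normalisation and spatial dropout, and applies a rule-based confidence gate that defers low-confidence predictions to human review. The number of defect classes and the confidence threshold are assumptions.

import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4          # assumption: number of corner-defect grades
CONF_THRESHOLD = 0.80    # assumption: confidence cut-off for the rule-based gate

def build_model():
    base = tf.keras.applications.DenseNet121(
        include_top=False, weights="imagenet", input_shape=(224, 224, 3))
    base.trainable = False                             # transfer learning: freeze backbone
    x = layers.SpatialDropout2D(0.3)(base.output)      # regularisation on feature maps
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.BatchNormalization()(x)
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.4)(x)
    out = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = models.Model(base.input, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def classify_with_gate(model, images):
    """Return (label, confidence, needs_review) for each image."""
    probs = model.predict(images, verbose=0)
    labels = probs.argmax(axis=1)
    confs = probs.max(axis=1)
    # Rule-based gate: defer uncertain predictions to the human-in-the-loop step.
    return [(int(l), float(c), bool(c < CONF_THRESHOLD))
            for l, c in zip(labels, confs)]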
Electronic navigational charts are crucial carriers of the multi-source, heterogeneous data of Waterborne Traffic Elements (WTEs). However, their layer-based modelling method falls short in expressing the multi-granularity features, complex relationships, and dynamic evolution of these elements. This paper proposes an objectification modelling method for WTEs based on the concept of multi-granularity spatiotemporal object modelling. A classification system for waterborne traffic objects is developed based on the relevance of behaviour to elements; combining the characteristics of waterborne traffic, a data model for waterborne traffic objects is constructed from eight aspects: spatiotemporal reference, spatiotemporal position, spatial form, basic information, attributes, behavioural ability, structure, and associative relationships. An object extraction function is also established, extracting object attributes and the relationships between objects according to different element classes. Taking the Jiashan section of the Hangzhou-Shanghai Line in Zhejiang Province as the experimental subject, the multi-granularity spatiotemporal characteristics, dynamic evolution, and relationship expression of channel-class objects are tested. The experimental results show that the proposed method provides a theoretical basis and a data organisation mode for the multi-granularity expression of WTEs.
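The eight modelled aspects can be pictured with a simple container such as the Python sketch below; the field names and types are assumptions made for illustration and do not reproduce the paper's schema.

from dataclasses import dataclass, field
from typing import Any, Dict, List, Tuple

@dataclass
class WaterborneTrafficObject:
    object_id: str
    object_class: str                              # e.g. channel, berth, vessel (assumed classes)
    spatiotemporal_reference: str                  # coordinate and time reference system
    spatiotemporal_position: List[Tuple[float, float, str]]  # (lon, lat, timestamp)
    spatial_form: Dict[str, Any]                   # geometry at different granularities
    basic_info: Dict[str, str]                     # name, code, source chart, ...
    attributes: Dict[str, Any]                     # class-specific attribute values
    behavioural_ability: List[str]                 # behaviours the object can exhibit
    structure: List[str]                           # component objects (finer granularity)
    associations: Dict[str, List[str]] = field(default_factory=dict)  # related object ids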
Industry 4.0 has led to a huge increase in data from machine maintenance. At the same time, advances in Natural Language Processing (NLP) and Large Language Models provide new ways to analyse these data. In our research, we use NLP to analyse maintenance work orders, specifically the descriptions of failures and the corresponding repair actions. Many NLP studies have focused on failure descriptions, categorising them, extracting specific information about failures, or supporting failure analysis methodologies (such as FMEA), whereas the analysis of repair actions and their relationship with failures remains underexplored. Addressing this gap, our study makes three significant contributions. Firstly, it focuses on the Italian language, which presents additional challenges because NLP systems are designed mainly for English. Secondly, it proposes a method for automatically subdividing a repair action into a set of sub-tasks. Lastly, it introduces an approach that employs association rule mining to recommend sub-tasks to maintainers when addressing failures. We tested our approach in a case study from an automotive company in Italy. The case study provides insights into the current barriers faced by NLP applications in maintenance and offers a glimpse into future opportunities for smart maintenance systems.
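The recommendation step can be illustrated with a deliberately simplified, confidence-based variant of association rule mining between failure categories and repair sub-tasks (a sketch, not the study's pipeline); the example work orders, category names and thresholds are assumptions.

from collections import Counter, defaultdict

# Each historical work order: (failure category, list of repair sub-tasks). Assumed data.
work_orders = [
    ("pump_leak", ["isolate_line", "replace_seal", "test_run"]),
    ("pump_leak", ["isolate_line", "replace_seal"]),
    ("motor_overheat", ["clean_filter", "check_bearings"]),
    ("motor_overheat", ["check_bearings", "test_run"]),
]

failure_counts = Counter(f for f, _ in work_orders)
pair_counts = defaultdict(Counter)
for failure, subtasks in work_orders:
    for task in subtasks:
        pair_counts[failure][task] += 1

def recommend_subtasks(failure, min_confidence=0.5, top_n=3):
    """Rank sub-tasks by the confidence of the rule {failure} -> {sub-task}."""
    n = failure_counts.get(failure, 0)
    if n == 0:
        return []
    scored = [(task, cnt / n) for task, cnt in pair_counts[failure].items()]
    scored = [(t, c) for t, c in scored if c >= min_confidence]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_n]

print(recommend_subtasks("pump_leak"))
# e.g. [('isolate_line', 1.0), ('replace_seal', 1.0), ('test_run', 0.5)]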
Condition monitoring and fault detection for industrial equipment are crucial to ensuring the reliability of industrial production. Recently, data-driven fault detection methods have achieved significant success, but they face challenges from data fragmentation and limited fault detection capability. Although centralized data collection can improve detection accuracy, the conflicting interests arising from data privacy concerns make data sharing between different devices impractical, creating industrial data silos. To address these challenges, this paper proposes a class-prototype-guided personalized lightweight federated learning framework (FedCPG). The framework decouples the local network: only the backbone model is uploaded to the server for model aggregation, while the head model is used for local personalized updates, thereby achieving efficient model aggregation. Furthermore, the framework incorporates prototype constraints to steer the local personalized update process, mitigating the effects of data heterogeneity. Finally, a lightweight feature extraction network is designed to reduce communication overhead. Multiple complex industrial data distribution scenarios were simulated on two benchmark industrial datasets. Extensive experiments demonstrate that FedCPG achieves an average detection accuracy of 95% in complex industrial scenarios while reducing memory usage and the number of parameters by 82%, surpassing existing methods on most average metrics. These findings offer new perspectives on the application of personalized federated learning to industrial fault detection.
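The decoupled aggregation and prototype constraint can be sketched as follows (a minimal PyTorch illustration under assumed network sizes and loss weighting, not the FedCPG implementation): clients upload only the backbone, the server averages backbones, and the local loss adds a prototype-distance term that pulls features toward global class prototypes.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightweightNet(nn.Module):
    def __init__(self, in_dim=64, feat_dim=32, num_classes=5):
        super().__init__()
        # "backbone" is shared and aggregated; "head" stays local to each client.
        self.backbone = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                      nn.Linear(128, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        feat = self.backbone(x)
        return self.head(feat), feat

def local_update(model, loader, global_protos, mu=0.5, lr=1e-3, epochs=1):
    """One client's personalized update with a prototype constraint.
    global_protos: tensor of shape [num_classes, feat_dim] (assumed representation)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            logits, feat = model(x)
            loss = F.cross_entropy(logits, y)
            if global_protos is not None:
                proto = global_protos[y]              # prototype of each sample's class
                loss = loss + mu * F.mse_loss(feat, proto)
            opt.zero_grad(); loss.backward(); opt.step()
    return model.backbone.state_dict()                # only the backbone is uploaded

def aggregate_backbones(backbone_states):
    """Server-side FedAvg over backbone parameters only."""
    avg = copy.deepcopy(backbone_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in backbone_states]).mean(dim=0)
    return avg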
A patent map is widely used in technical information mining and can support tasks such as detecting patent vacuums and predicting technical trends. However, existing patent map construction methods mine patent technical features with insufficient intelligence and accuracy, which prevents them from effectively completing these tasks. To address these limitations, this paper proposes a patent map construction method based on multi-dimensional technical feature mining, comprising three main stages. First, using dependency parsing, the technical features contained in patents are fully mined in the form of triplets along three dimensions: function, behaviour and structure. Second, using WordNet, the original triplets in the three dimensions are standardised for different task scenarios. Finally, on the basis of the standard triplets, the patent map is constructed to detect patent vacuums and support design tasks. In addition, a prototype system is developed based on the proposed method, and the effectiveness and practicability of the method and system are verified using a 3D printer as an engineering example.
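A minimal illustration of the first two stages is given below, assuming spaCy's en_core_web_sm model and the NLTK WordNet corpus are installed; the example sentence and the simple subject-verb-object pattern are assumptions, not the paper's extraction rules.

import spacy
from nltk.corpus import wordnet as wn

nlp = spacy.load("en_core_web_sm")

def extract_triplets(text):
    """Return (subject, verb, object) triplets found via the dependency parse."""
    triplets = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ != "VERB":
                continue
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "obj", "attr")]
            for s in subjects:
                for o in objects:
                    triplets.append((s.lemma_, token.lemma_, o.lemma_))
    return triplets

def standardise(term):
    """Map a term to the first lemma of its first WordNet synset, if any."""
    synsets = wn.synsets(term)
    return synsets[0].lemmas()[0].name() if synsets else term

for subj, verb, obj in extract_triplets("The extruder heats the filament and deposits material."):
    print(tuple(standardise(t) for t in (subj, verb, obj)))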
Novel digital on-demand manufacturing technologies provide a significant opportunity to support the development of virtual warehousing and, in turn, improve supply chain performance. However, implementing a virtual warehouse comes with a set of challenges, especially where the objective is to virtually warehouse standard or legacy parts that were originally developed and verified for conventional (non-digital) manufacturing. In this paper, we explore the key elements required for the successful implementation of a virtual warehouse for legacy parts, based on a combination of part digitalization, on-demand manufacturing, and part validation. Our proposed framework for adopting a virtual warehouse includes the development of a digital inventory containing supply chain and manufacturability data, the identification and selection of suitable parts for on-demand manufacturing, the selection of an on-demand manufacturing technology, and the fit-for-purpose validation of the parts. The framework is exemplified through a case study, and we conclude that building an effective virtual warehouse requires several enablers, including the availability of digital data on the technical and supply chain characteristics of parts, as well as a suitable part identification tool. This tool needs to be flexible enough to compare candidate parts with reference parts already produced by different on-demand manufacturing technologies.
We propose a novel framework for detecting 3D human–object interactions (HOI) on construction sites, together with a toolkit for generating construction-related human–object interaction graphs. Computer vision methods have been adopted for construction site safety surveillance in recent years, but current methods rely on videos and images, with which safety verification is performed using common-sense knowledge, without considering the 3D spatial relationships among detected instances. We propose a new method that incorporates spatial understanding by directly inferring interactions from 3D point cloud data. The proposed model is trained on a 3D construction site dataset generated with our simulation toolkit and achieves 54.11% mean intersection over union (mIoU) and 72.98% mean average precision (mAP) for worker–object interaction recognition. The model is also validated on PiGraphs, a benchmark dataset with 3D human–object interaction types, and compared against existing 3D interaction detection frameworks; it outperforms the state-of-the-art model, increasing interaction detection mAP by 17.01%. Besides the 3D interaction model, we also simulate interactions from industrial surveillance footage using MoCap and physical constraints, and these data will be released to foster future studies in the domain.
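The interaction-recognition idea can be sketched as a PointNet-style pairwise classifier over worker and object point clouds; the architecture, feature dimensions and number of interaction classes below are assumptions, not the paper's model.

import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Shared MLP over points followed by max-pooling into a global feature."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, out_dim), nn.ReLU())

    def forward(self, points):                       # points: [batch, num_points, 3]
        return self.mlp(points).max(dim=1).values    # [batch, out_dim]

class InteractionClassifier(nn.Module):
    def __init__(self, num_interactions=8, feat_dim=128):
        super().__init__()
        self.human_enc = PointEncoder(feat_dim)
        self.object_enc = PointEncoder(feat_dim)
        self.classifier = nn.Sequential(nn.Linear(2 * feat_dim, 128), nn.ReLU(),
                                        nn.Linear(128, num_interactions))

    def forward(self, human_pts, object_pts):
        fused = torch.cat([self.human_enc(human_pts), self.object_enc(object_pts)], dim=-1)
        return self.classifier(fused)                # interaction logits per worker-object pair

# Example: 4 worker-object pairs, 256 points each.
model = InteractionClassifier()
logits = model(torch.randn(4, 256, 3), torch.randn(4, 256, 3))
print(logits.shape)   # torch.Size([4, 8])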
In business processes, an operational problem refers to a deviation or inefficiency that prevents an organization from reaching its goals, e.g., a delay in approving a purchase order in a Procure-To-Pay (P2P) process. Operational process monitoring aims to assess the occurrence of such problems by analyzing event data that record the execution of business processes. Once problems are detected, organizations can act on them with viable actions, e.g., adding more resources or bypassing problematic activities. A plethora of approaches have been proposed to implement operational process monitoring. The lion's share of existing approaches assumes that a single case notion (e.g., a purchase order in a P2P process) exists in a business process and analyzes operational problems defined over that single case notion. However, most real-life business processes manifest the interplay of multiple interrelated objects. For instance, an execution of the ubiquitous P2P process involves multiple objects of different types, e.g., purchase orders, goods receipts, and invoices. Applying existing approaches to these object-centric business processes yields inaccurate or misleading results. In this study, we propose a novel approach to assessing operational problems within object-centric business processes. Our approach not only ensures an accurate assessment of existing problems but also facilitates the analysis of object-centric problems that consider the interaction among different objects. We evaluate the approach by applying it to both simulated and real-life business processes.
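To make the object-centric setting concrete, the schematic Python sketch below assesses one simple operational problem, delayed purchase-order approval, over events that reference multiple object types; the event schema, activity names and threshold are assumptions for illustration, not the proposed approach itself.

from datetime import datetime, timedelta

# Object-centric events: each event may reference several objects per object type.
events = [
    {"activity": "Create Purchase Order", "timestamp": datetime(2024, 1, 3, 9, 0),
     "objects": {"purchase_order": ["PO1"], "invoice": []}},
    {"activity": "Receive Invoice", "timestamp": datetime(2024, 1, 4, 10, 0),
     "objects": {"purchase_order": ["PO1"], "invoice": ["INV7"]}},
    {"activity": "Approve Purchase Order", "timestamp": datetime(2024, 1, 9, 16, 0),
     "objects": {"purchase_order": ["PO1"], "invoice": []}},
]

def approval_delays(events, threshold=timedelta(days=3)):
    """Flag purchase orders whose approval exceeds the threshold after creation."""
    created, approved = {}, {}
    for ev in events:
        for po in ev["objects"].get("purchase_order", []):
            if ev["activity"] == "Create Purchase Order":
                created[po] = ev["timestamp"]
            elif ev["activity"] == "Approve Purchase Order":
                approved[po] = ev["timestamp"]
    return {po: approved[po] - created[po]
            for po in created
            if po in approved and approved[po] - created[po] > threshold}

print(approval_delays(events))   # {'PO1': datetime.timedelta(days=6, seconds=25200)}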