This study investigates the distributed heterogeneous hybrid flow-shop scheduling problem (DHHFSP) with tardiness and energy consumption criteria. A decomposition-based multi-objective artificial bee colony (MOABC/D) algorithm is developed to solve the scheduling problem. In the MOABC/D algorithm, a tri-level encoding scheme combined with domain-specific heuristic rules is designed to enable comprehensive exploration of the solution space. A local search framework incorporating five novel critical-path-based neighbourhood structures intensifies subproblem investigation, and an adaptive optimisation strategy integrates similarity-based prioritisation, dynamic neighbourhood relationships, and coordinated information sharing among adjacent subproblems. A solution exchange strategy is proposed to help the algorithm escape local optima and keep searching in diverse directions. Comprehensive simulation trials validate the algorithm's ability to balance scheduling efficiency and energy conservation in the DHHFSP, showing great promise for multi-objective optimisation in complex distributed manufacturing systems with heterogeneous resources.
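The decomposition idea can be illustrated with a Tchebycheff scalarisation, the standard way a decomposition-based algorithm turns the two objectives into single-objective subproblems. This is a generic sketch, not the paper's exact formulation; the candidate objective values and the weight vector are invented for illustration.

```python
def tchebycheff(objs, weights, ideal):
    """Tchebycheff scalarisation: the largest weighted gap to the ideal point."""
    return max(w * (f - z) for w, f, z in zip(weights, objs, ideal))

# Three candidate schedules scored on (total tardiness, energy consumption).
candidates = [(120.0, 80.0), (100.0, 95.0), (140.0, 60.0)]
ideal = tuple(min(c[i] for c in candidates) for i in range(2))  # z* = (100, 60)
weights = (0.5, 0.5)                                            # one subproblem

scores = [tchebycheff(c, weights, ideal) for c in candidates]
best = scores.index(min(scores))   # the schedule this subproblem prefers
```

Each weight vector defines one subproblem; subproblems with similar weights are natural neighbours, which is what similarity-based prioritisation and information sharing among adjacent subproblems exploit.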
"Improved Multi-Objective Evolution Algorithm for Energy-Aware Distributed Heterogeneous Hybrid Flowshop Scheduling Problem". Yingli Li, Haibing Liu, Biao Zhang. IET Collaborative Intelligent Manufacturing, vol. 7, no. 1, published 2025-09-12. doi:10.1049/cim2.70046. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cim2.70046
The value of a product relies greatly on the significance of the innovations introduced when it is designed and manufactured. Although numerous computer aided design (CAD) tools, such as computer aided engineering (CAE) and artificial intelligence (AI) tools, are widely used to create and accelerate innovation in engineering design, commercially available AI tools sacrifice efficiency for generality in virtual design. Because the behaviours of a physical model must be interpreted through governing mathematical models, key analytical correspondences between the inputs and outputs of the physical model may be lost. Design optimisation is then simulation based, with limited exploration of the design space. We argue that for innovations such as routine or parametric designs rooted in knowledge-based engineering (KBE), a specialised tool, rather than a general-purpose CAE tool, should be developed to optimise a design solution analytically. To illustrate the feasibility and effectiveness of the proposed idea, a parametric FEA model was developed for a client company in construction. The model was programmed and implemented, and its conciseness, efficiency and accuracy were demonstrated in comparative studies against SolidWorks Simulation. It has been recommended to and adopted by the client company for practical use.
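A minimal example of what "optimising analytically rather than by simulation" means: closed-form structural formulas evaluated parametrically. The cantilever load case, section dimensions, and material values below are illustrative assumptions, not the client company's actual model.

```python
def rect_second_moment(b, h):
    """Second moment of area of a b x h rectangular section (m^4)."""
    return b * h ** 3 / 12.0

def cantilever_tip_deflection(F, L, E, I):
    """Closed-form tip deflection under an end point load: d = F*L^3 / (3*E*I)."""
    return F * L ** 3 / (3.0 * E * I)

# Parametric evaluation: steel beam (E = 210 GPa), 1 kN end load, 2 m span.
I = rect_second_moment(b=0.05, h=0.10)
tip = cantilever_tip_deflection(F=1000.0, L=2.0, E=210e9, I=I)  # metres
```

Because the model is analytical, every input parameter can be swept or differentiated directly, with no mesh or solver run per design point.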
"A Parametric FEA Model for Evaluation of Structural Elements Subjected to Varying Loading Conditions". Xiaoqin Wang, Zhuming Bi. IET Collaborative Intelligent Manufacturing, vol. 7, no. 1, published 2025-09-11. doi:10.1049/cim2.70045. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cim2.70045
Lin Liu, Shaobo Li, Xiaoyang Ji, Jing Yang, Zukun Yu
Empowering robots with tactile perception is crucial for the future development of intelligent robots, as it expands the application scenarios in which robots can perform complex tasks. Unfortunately, existing approaches are flawed in their use of data collected by robotic tactile sensors: they either do not consider that tactile sensation is event-driven, which means that tactile data are spatiotemporal, or they ignore that too few tactile samples cause the network model to overfit. We introduce DLC-NddModel, a method based on spiking neural networks (SNNs) that incorporates Adam optimisation, regularisation, and a cosine annealing schedule. DLC-NddModel aims to fully exploit the spatiotemporal nature of tactile data through the spatiotemporal dynamics of SNNs and to alleviate the overfitting caused by small sample sizes. Furthermore, unlike previous work using SNNs, we use a different approximation function to surmount the non-differentiable spiking activity of the spiking neurons, thus making gradient descent usable and effective. To alleviate the overfitting caused by too few tactile samples, we explore regularisation strategies that add training noise or regularisation terms to the loss function. We compare DLC-NddModel against four prior state-of-the-art approaches on the EvTouch-Objects tactile spike dataset. Our experimental results demonstrate that DLC-NddModel achieves higher recognition accuracy than the comparison methods when recognising household object data, with an accuracy (ACC) improvement of at least 2.362%.
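The surrogate-gradient idea the abstract refers to can be sketched as follows: the forward pass uses the non-differentiable Heaviside spike, while the backward pass substitutes a smooth surrogate derivative so gradient descent can proceed. The fast-sigmoid surrogate and the leaky integrate-and-fire (LIF) parameters here are common defaults, not necessarily the paper's choices.

```python
def spike(v, threshold=1.0):
    """Non-differentiable spiking activation (Heaviside step)."""
    return 1.0 if v >= threshold else 0.0

def surrogate_grad(v, threshold=1.0, k=10.0):
    """Fast-sigmoid surrogate derivative, used in place of Heaviside's (zero a.e.)."""
    x = k * (v - threshold)
    return k / (1.0 + abs(x)) ** 2

def lif_step(v, i_in, tau=2.0, threshold=1.0):
    """One leaky integrate-and-fire update; reset-by-subtraction on a spike."""
    v = v + (i_in - v) / tau
    s = spike(v, threshold)
    return v - s * threshold, s
```

During training, the forward pass calls `spike`, while backpropagation uses `surrogate_grad` at the same membrane potential, which is what makes the spatiotemporal SNN trainable end to end.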
"DLC-NddMode: A Spiking Neural Network Tactile Object Recognition Model With Adaptive Optimisation and Regularisation". Lin Liu, Shaobo Li, Xiaoyang Ji, Jing Yang, Zukun Yu. IET Collaborative Intelligent Manufacturing, vol. 7, no. 1, published 2025-09-06. doi:10.1049/cim2.70043. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cim2.70043
Andre da Silva Martin, Luiz Fernando Rodrigues Pinto, Geraldo Cardoso de Oliveira Neto, Francesco Facchini
Industry 4.0 enabling technologies have been integrated into manufacturing systems. One of these technologies, the collaborative robot or cobot, holds significant expansion potential due to its shared application with humans in manufacturing environments. It offers cost reduction, a safer working environment, especially regarding ergonomic risks, and product quality improvements. This research aimed to assess the economic and social benefits, focusing on the reduction of ergonomic risks and quality gains resulting from the implementation of cobots in the engine assembly process. A case study was conducted in an engine assembly company, involving process observation, data collection, analysis of technical reports, and interviews with managers. The results indicated that integrating cobots into the manufacturing process is advantageous for the industry. There was a significant reduction in annual operating costs, totalling $41,602.56, leading to a return on investment in 1 year and 9 months. Furthermore, ensuring torque in the correct sequence resulted in product quality improvement, reduced ergonomic risks, and a safer working environment for operators. This research contributes to advancing knowledge on the economic, social, and quality advantages of cobot application in the engine assembly process.
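The reported figures are internally consistent under a simple (undiscounted) payback model: dividing the implied investment by the monthly savings recovers the stated 1-year-9-month horizon. The investment figure below is back-calculated for illustration, not reported in the study.

```python
annual_savings = 41_602.56   # reported annual operating-cost reduction (USD)
payback_months = 21          # reported payback: 1 year and 9 months

monthly_savings = annual_savings / 12
implied_investment = monthly_savings * payback_months  # simple payback model
```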
"Collaborative Robot in Engine Assembly: A Socioeconomic Approach to Technological Advancement in Manufacturing". Andre da Silva Martin, Luiz Fernando Rodrigues Pinto, Geraldo Cardoso de Oliveira Neto, Francesco Facchini. IET Collaborative Intelligent Manufacturing, vol. 7, no. 1, published 2025-08-30. doi:10.1049/cim2.70044. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cim2.70044
Gears are key transmission components, and gear failures in a gearbox (such as broken teeth, wear and pitting) can easily lead to equipment shutdown, production interruption and even safety accidents, making them extremely harmful. Existing fault diagnosis methods have obvious shortcomings: the traditional BP neural network has weak global optimisation ability and slow convergence, and the BP model optimised by traditional particle swarm optimisation (PSO) is limited in diagnostic accuracy because PSO easily falls into local optima. In this paper, data from four gear working conditions are collected. After preprocessing, an improved PSO algorithm combining weight-index variation and a particle disturbance strategy is proposed to optimise the BP neural network and construct the diagnosis model. Experiments show that the accuracy of this fault diagnosis model is 29% higher than that of the traditional BP model. It provides an efficient and reliable solution for mechanical fault diagnosis, which is of great significance for reducing losses and ensuring safety.
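The kind of modification described can be sketched as an inertia weight that varies nonlinearly over iterations plus a small random disturbance added to the velocity update, helping particles escape local optima. The power-law decay and all parameter values are illustrative; the paper's exact weight-index and disturbance formulas are not reproduced here.

```python
import random

def inertia_weight(t, t_max, w_start=0.9, w_end=0.4, p=2.0):
    """Nonlinearly decaying inertia weight: large early (exploration), small late."""
    return w_end + (w_start - w_end) * (1 - t / t_max) ** p

def update_velocity(v, x, pbest, gbest, t, t_max, c1=2.0, c2=2.0, disturb=0.1):
    """Standard PSO velocity update plus a random disturbance term."""
    w = inertia_weight(t, t_max)
    r1, r2 = random.random(), random.random()
    return (w * v
            + c1 * r1 * (pbest - x)      # cognitive pull toward personal best
            + c2 * r2 * (gbest - x)      # social pull toward global best
            + random.uniform(-disturb, disturb))  # disturbance against stagnation
```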
"Research on Gear Box Fault Diagnosis Technology Based on PCA-EDPSO-BP Neural Network". Daohai Zhang, Yang Lu, Haoran Li. IET Collaborative Intelligent Manufacturing, vol. 7, no. 1, published 2025-08-20. doi:10.1049/cim2.70042. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cim2.70042
Mónika Gugolya, Tibor Medvegy, János Abonyi, Tamás Ruppert
This study investigates the learning curve in an assembly process under distraction, highlighting the use of video-based monitoring to evaluate changes in human performance over time. The experimental setup involved camera- and timer-based monitoring to evaluate operator performance across several metrics, including time-based indicators and the accuracy of the assembled product. Participants were tasked with replicating patterns, without any distractions, until their learning curve flattened. After learning the process, they were asked to repeat the task under conversation-based distractions to assess the influence of distraction on the main task. In our framework, ArUco marker-based video recognition enabled the accuracy assessment. Statistical analyses of the collected data provided insight into performance variations. The study evaluates changes in the learning curve under verbal distraction, highlighting the need to understand and account for its effect during the process.
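A flattening learning curve of the kind measured here is classically modelled by Wright's power law, T_n = T_1 * n^b with b = log(r)/log(2) for learning rate r: each doubling of repetitions multiplies the cycle time by r. The 85% rate and the 120 s first-cycle time below are illustrative assumptions, not the study's data.

```python
import math

def cycle_time(n, t1, learning_rate=0.85):
    """Wright's learning curve: time for the n-th repetition of a task."""
    b = math.log(learning_rate) / math.log(2)
    return t1 * n ** b

t1 = 120.0
t2 = cycle_time(2, t1)   # one doubling: 85% of t1
t4 = cycle_time(4, t1)   # two doublings: 0.85**2 = 72.25% of t1
```

Fitting t1 and the learning rate to timer data before and during distraction gives a compact way to quantify how much the distraction shifts the curve.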
The experiments revealed significant effects of distraction on completion time, but the camera-based recognition system showed no notable decline in work quality.
"Assessing the Learning Curve of Human Operators Under Verbal Distraction". Mónika Gugolya, Tibor Medvegy, János Abonyi, Tamás Ruppert. IET Collaborative Intelligent Manufacturing, vol. 7, no. 1, published 2025-08-15. doi:10.1049/cim2.70038. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cim2.70038
Lucía Gálvez del Postigo Gallego, Sanja Lazarova-Molnar, Alejandro del Real Torres, Luis E. Acevedo Galicia
The dynamic nature of manufacturing and evolving customer demands require agile adaptation within Manufacturing Ecosystems, interconnected networks of enterprises and institutions collaborating to develop market-oriented solutions. To support this adaptation, it is crucial to evaluate large volumes of data and assess alternative scenarios effectively. Digital Twins (DTs) enable the replication of physical systems into virtual models, facilitating the exploration of such scenarios. In most applications, Decision Support (DS) is essential and can be considered intrinsic to DTs. By integrating DS within DTs, the loop can be closed, transforming simulation information into actionable decisions. This study investigates recent advances and trends in the use of DTs for DS in production processes, with a focus on applications in Manufacturing Ecosystems. A systematic review is conducted to examine how DTs contribute to complex and holistic decision-making, including tasks such as production planning, maintenance scheduling, and defect management. Special attention is given to how decisions are made within DT-based applications and to the extent of their autonomy and complexity.
The review contributes to the identification of current research directions and gaps regarding the integration of DTs and DS, with the aim of supporting more effective and adaptive manufacturing strategies.
"Decision Support Within Digital Twins in Manufacturing Ecosystems: A Review". Lucía Gálvez del Postigo Gallego, Sanja Lazarova-Molnar, Alejandro del Real Torres, Luis E. Acevedo Galicia. IET Collaborative Intelligent Manufacturing, vol. 7, no. 1, published 2025-08-09. doi:10.1049/cim2.70041. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cim2.70041
With increasing awareness of environmental protection, sustainable manufacturing has become an important component of various industries. As an essential foundation of a sustainable strategy, the safe and reliable operation and maintenance of nuclear power resources is crucial, requiring agile and precise response to, and diagnosis of, equipment failure signals. Owing to security requirements, nuclear power plants strictly isolate operating data, forming de facto data islands. At the same time, the scarcity of fault samples makes it difficult to establish an accurate fault diagnosis model. Establishing a stable and reliable vibration fault diagnosis model for nuclear power steam turbines across different plants and equipment therefore remains a major challenge. To achieve secure model aggregation without violating client privacy, federated learning (FL) has become a research hotspot, but it ignores the differences between source clients and fails to capture domain-invariant features during local training, which hinders its further development. To address this challenge, a federated deep domain adaptation-based framework considering privacy preservation (FL-DDA) is proposed for operations and maintenance in nuclear power plants. The framework performs feature extraction locally in both source and target nuclear power plants, so that features are shared securely without revealing private data. At the same time, domain adversarial training is integrated into local model training to transfer vibration fault diagnosis knowledge. Furthermore, an adaptive weight mechanism is devised to adjust model weights adaptively during federated aggregation. Finally, a desensitised vibration dataset from nuclear power steam turbines is used for validation, and FL-DDA is compared with existing methods. Under the premise of data privacy security, the proposed FL-DDA framework outperforms its peers in vibration fault diagnosis and domain adaptation.
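The adaptive-weight aggregation step can be sketched as a FedAvg-style weighted average in which each client's contribution is scaled by an adaptive weight (e.g. derived from local validation performance). The weights and parameter vectors below are invented for illustration; the paper's actual weighting mechanism is not reproduced here.

```python
def federated_aggregate(client_params, client_weights):
    """Weighted average of client parameter vectors (FedAvg-style)."""
    total = sum(client_weights)
    norm = [w / total for w in client_weights]       # normalise adaptive weights
    n = len(client_params[0])
    return [sum(w * p[i] for w, p in zip(norm, client_params)) for i in range(n)]

# Two source plants; the better-performing plant gets a larger adaptive weight.
params = [[1.0, 2.0], [3.0, 4.0]]
global_params = federated_aggregate(params, client_weights=[0.25, 0.75])
```

Only parameters leave each plant, never raw vibration data, which is what keeps the aggregation compatible with the data-isolation requirement.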
"A federated deep domain adaptation-based framework for nuclear power steam turbines considering privacy-preserving". Bingtao Hu, Ruirui Zhong, Junjie Song, Jingren Guo, Yong Wang, Shanhe Lou, Jianrong Tan. IET Collaborative Intelligent Manufacturing, vol. 7, no. 1, published 2025-07-31. doi:10.1049/cim2.12110. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/cim2.12110
Computer-aided design (CAD) serves as an essential and irreplaceable tool for engineers and designers, optimising design workflows and driving innovation across diverse industries. Nevertheless, mastering these sophisticated CAD programmes requires substantial training and expertise from practitioners. To address these challenges, this paper introduces a framework for reconstructing CAD models from multiview images. Specifically, we present a novel end-to-end neural network capable of directly reconstructing parametric CAD command sequences from multiview images. Subsequently, the proposed network addresses the low-rank bottleneck inherent in the traditional attention mechanisms of neural networks. Finally, we present a novel parametric CAD dataset that pairs multiview images with corresponding CAD sequences while eliminating redundant data. Comparative experiments reveal that the proposed framework effectively reconstructs high-quality parametric CAD models, which are readily editable in collaborative CAD/CAM environments.
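The low-rank bottleneck the abstract mentions arises because multi-head attention splits the model dimension across heads: each head's QK^T logit matrix has rank at most d_model / n_heads, which can be far below the sequence length. A minimal illustration of the bound (the dimensions are hypothetical, and the paper's specific remedy is not shown):

```python
def attention_rank_bound(d_model, n_heads, seq_len):
    """Upper bound on the rank of one head's seq_len x seq_len logit matrix."""
    d_head = d_model // n_heads        # per-head query/key dimension
    return min(d_head, seq_len)

bound = attention_rank_bound(d_model=512, n_heads=8, seq_len=256)  # 64, not 256
```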
"Multiview Reconstruction of Parametric CAD Models". Rubin Fan, Yi Zhang, Fazhi He. IET Collaborative Intelligent Manufacturing, vol. 7, no. 1, published 2025-07-28. doi:10.1049/cim2.70037. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/cim2.70037
AI-Aided Design for Industrial Manufacturing: Generating Synthetic Image Datasets to Train 3D Object Reconstruction Neural Networks
Federico Manuri, Francesco De Pace, Ismaele Piparo, Andrea Sanna
IET Collaborative Intelligent Manufacturing, vol. 7, no. 1 (2025). DOI: 10.1049/cim2.70039

Industrial manufacturing faces many challenges and opportunities as novel technologies change how products are designed and produced. The design step of a product requires skill and time, starting from conceptualising the object's 3D shape. AI models, however, have been proven capable of reconstructing 3D models from images. A designer may therefore approach the modelling phase of a product with traditional CAD software, relying not only on existing 3D models but also on the digitalisation of everyday real objects, prototypes, or photographs. Such AI models need to be trained on extensive datasets to obtain reliable behaviours, and the manual creation of these datasets is usually time-consuming. Synthetic datasets could speed up the training process by providing automatically labelled data for the objects of interest to the designer. This research explores a novel approach to foster synthetic dataset generation for 3D object reconstruction. The proposed pipeline involves setting up 3D models and customising the rendering pipeline to create datasets with different rendering properties automatically. These datasets are then used to train and test a 3D object reconstruction model, to investigate how synthetic dataset generation can be improved to optimise performance.
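The core idea of the pipeline above, automatically labelled renders obtained by sweeping rendering properties, can be sketched as a parameter grid. This is a hypothetical illustration: the property names, values, and `make_render_jobs` helper below are invented for the example and are not taken from the paper.

```python
import itertools

# Illustrative rendering properties to sweep (names and values are
# assumptions for this sketch, not the paper's actual configuration).
LIGHTING = ["studio", "outdoor", "low_key"]
CAMERA_ELEVATIONS_DEG = [0, 30, 60]
BACKGROUNDS = ["plain", "textured"]

def make_render_jobs(model_id):
    """One render-job description per combination of rendering properties.

    The label (model_id) comes for free from the source 3D model, which is
    what makes a synthetic dataset 'automatically labelled'.
    """
    jobs = []
    for light, elev, bg in itertools.product(
            LIGHTING, CAMERA_ELEVATIONS_DEG, BACKGROUNDS):
        jobs.append({
            "model_id": model_id,
            "lighting": light,
            "camera_elevation_deg": elev,
            "background": bg,
        })
    return jobs

jobs = make_render_jobs("chair_001")
print(len(jobs))  # 3 * 3 * 2 = 18 labelled render configurations per model
```

Each job dict would then be handed to a renderer; varying which properties are swept is what lets the authors compare datasets with different rendering properties.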