Pub Date: 2024-10-30  DOI: 10.1016/j.jmsy.2024.10.014
Lin Huang, Dunbing Tang, Zequn Zhang, Haihua Zhu, Qixiang Cai, Shikui Zhao
The distributed scheduling problem (DSP) has become particularly important with the popularization of the distributed manufacturing mode. The distributed job shop scheduling problem (DJSP) is a typical representative of the DSP. It consists of two subproblems: assigning jobs to factories and determining the operation sequence on machines. Several benchmark instances have been proposed to test the performance of DJSP approaches, but optimal solutions remain unknown for most of them. In this paper, an iterated greedy algorithm integrating job insertion (IGJI) is proposed to solve the DJSP. First, a job insertion strategy based on idle time (JIIT) is designed for inserting a job into a factory. Second, JIIT is used in the reconstruction phase of IGJI, and three destruction-reconstruction methods are designed to balance the makespan among factories. Finally, tabu search is adopted in the local search phase of IGJI to further improve solution quality. The performance of IGJI is tested on 240 benchmark instances, and the experimental results show that IGJI outperforms four state-of-the-art algorithms in solution quality. In particular, IGJI has found new best solutions for 231 of these benchmark instances.
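The destruction-reconstruction loop described in this abstract can be illustrated with a deliberately simplified sketch: each factory is reduced to a single aggregate machine, and the idle-time-based JIIT rule is replaced by a least-load insertion proxy. The instance data, destruction size `d`, and acceptance rule below are illustrative assumptions, not the authors' actual design.

```python
import random

def makespan(loads):
    # Makespan across factories = the largest factory completion time.
    return max(loads.values())

def greedy_insert(loads, job_time):
    # Insert the job into the factory whose load increases makespan least
    # (a crude stand-in for the idle-time-based JIIT rule in the paper).
    best = min(loads, key=lambda f: loads[f] + job_time)
    loads[best] += job_time
    return best

def iterated_greedy(job_times, n_factories, d=2, iters=200, seed=0):
    rng = random.Random(seed)
    loads = {f: 0.0 for f in range(n_factories)}
    assign = {}
    for j, t in job_times.items():               # initial greedy construction
        assign[j] = greedy_insert(loads, t)
    best_loads, best_assign = dict(loads), dict(assign)
    for _ in range(iters):
        removed = rng.sample(sorted(assign), d)  # destruction
        for j in removed:
            loads[assign[j]] -= job_times[j]
        for j in removed:                        # reconstruction
            assign[j] = greedy_insert(loads, job_times[j])
        if makespan(loads) <= makespan(best_loads):  # accept if no worse
            best_loads, best_assign = dict(loads), dict(assign)
        else:                                    # otherwise revert
            loads, assign = dict(best_loads), dict(best_assign)
    return best_assign, makespan(best_loads)

jobs = {j: t for j, t in enumerate([4, 7, 3, 9, 5, 6, 2, 8])}
assignment, cmax = iterated_greedy(jobs, n_factories=2)
```

Even this stripped-down version shows the characteristic iterated-greedy pattern: destroy a few job assignments, greedily reinsert them, and keep the result only when the makespan does not worsen.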
{"title":"An iterated greedy algorithm integrating job insertion strategy for distributed job shop scheduling problems","authors":"Lin Huang , Dunbing Tang , Zequn Zhang , Haihua Zhu , Qixiang Cai , Shikui Zhao","doi":"10.1016/j.jmsy.2024.10.014","DOIUrl":"10.1016/j.jmsy.2024.10.014","url":null,"abstract":"<div><div>The distributed scheduling problem (DSP) becomes particularly important with the popularization of the distributed manufacturing mode. The distributed job shop scheduling problem (DJSP) is a typical representative of the DSP. It consists of two subproblems, assigning jobs to factories and determining the operation sequence on machines. Some benchmark instances have been proposed to test the performance of the DJSP approach, but most instances have not found the optimal solution. In this paper, an iterated greedy algorithm integrating job insertion (IGJI) is proposed to solve the DJSP. Firstly, a job insertion strategy based on idle time (JIIT) is designed for the insertion of a job into a factory. Secondly, JIIT is used in the reconstruction phase of IGJI, while three destruction-reconstruction methods are designed to balance the makespan among factories. Finally, tabu search is adopted in the local search phase of IGJI to improve the solution quality further. The performance of IGJI is tested on 240 benchmark instances, and the experimental results show that the solution quality of IGJI outperforms the other four state-of-the-art algorithms. 
In particular, IGJI has found 231 new solutions for these benchmark instances.</div></div>","PeriodicalId":16227,"journal":{"name":"Journal of Manufacturing Systems","volume":"77 ","pages":"Pages 746-763"},"PeriodicalIF":12.2,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142554250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-30  DOI: 10.1016/j.jmsy.2024.10.023
Xiaojian Wen, Yicheng Sun, Shimin Liu, Jinsong Bao, Dan Zhang
Complex digital twin (DT) systems offer a robust solution for design, optimization, and operational management in industrial domains. However, when such systems attempt to replicate the dynamic changes of the physical world with high fidelity, their intricate and tightly coupled components present modeling challenges, making it difficult to accurately capture the system's dynamic characteristics and internal correlations. Particularly in scenarios involving multi-scale and multi-physics coupling, complex systems lack adequate fine-grained decomposition (FGD) methods, which results in cumbersome information exchange and consistency maintenance between models of different granularities. To address these limitations, this paper proposes a method for the multi-level decomposition of complex twin models. The method constructs an FGD model for DTs by integrating three key correlation mechanisms between components: semantic association, dynamic association, and topological association. The decomposed model achieves reasonable simplification and abstraction while maintaining the accuracy of the complex system, thereby balancing computational efficiency and simulation precision. A case study on a marine diesel engine piston production line validates the proposed decomposition method and verifies its effectiveness.
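A loose analogy for the fine-grained decomposition idea, assuming the three association mechanisms can each be scored pairwise in [0, 1]: fuse the scores into a weighted sum, keep strong edges, and group connected components into candidate sub-models. The component names, weights, and threshold `tau` below are hypothetical, not taken from the paper.

```python
def fuse_associations(components, sem, dyn, topo, w=(1/3, 1/3, 1/3), tau=0.5):
    # Fuse pairwise semantic/dynamic/topological scores into one weighted
    # score, keep edges at or above tau, and return connected groups of
    # components as candidate fine-grained sub-models.
    strong = []
    for pair in set(sem) | set(dyn) | set(topo):
        s = w[0] * sem.get(pair, 0) + w[1] * dyn.get(pair, 0) + w[2] * topo.get(pair, 0)
        if s >= tau:
            strong.append(pair)
    parent = {c: c for c in components}
    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in strong:
        parent[find(a)] = find(b)
    groups = {}
    for c in components:
        groups.setdefault(find(c), []).append(c)
    return sorted(sorted(g) for g in groups.values())

comps = ["piston", "liner", "crank", "sensorA"]
sem = {("piston", "liner"): 0.9, ("crank", "piston"): 0.4}
dyn = {("piston", "liner"): 0.8, ("crank", "piston"): 0.9}
topo = {("piston", "liner"): 1.0, ("sensorA", "liner"): 0.2}
partition = fuse_associations(comps, sem, dyn, topo)
```

Here only the piston-liner pair exceeds the threshold, so those two components form one sub-model while the others stay separate.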
{"title":"Fine-grained decomposition of complex digital twin systems driven by semantic-topological-dynamic associations","authors":"Xiaojian Wen , Yicheng Sun , Shimin Liu , Jinsong Bao , Dan Zhang","doi":"10.1016/j.jmsy.2024.10.023","DOIUrl":"10.1016/j.jmsy.2024.10.023","url":null,"abstract":"<div><div>Complex digital twin (DT) systems offer a robust solution for design, optimization, and operational management in industrial domains. However, in an effort to faithfully replicate the dynamic changes of the physical world with high fidelity, the excessively intricate and highly coupled system components present modeling challenges, making it difficult to accurately capture the system's dynamic characteristics and internal correlations. Particularly in scenarios involving multi-scale and multi-physics coupling, complex systems lack adequate fine-grained decomposition (FGD) methods. This results in cumbersome information exchange and consistency maintenance between models of different granularities. To address these limitations, this paper proposes a method for multi-level decomposition of complex twin models. This method constructs a FGD model for DTs by integrating three key correlation mechanisms between components: semantic association, dynamic association, and topological association. The decomposed model achieves reasonable simplification and abstraction while maintaining the accuracy of the complex system, thereby balancing computational efficiency and simulation precision. 
The case study validation employed a marine diesel engine piston production line to test the proposed decomposition method, verifying the effectiveness of the approach.</div></div>","PeriodicalId":16227,"journal":{"name":"Journal of Manufacturing Systems","volume":"77 ","pages":"Pages 780-797"},"PeriodicalIF":12.2,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142554252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-30  DOI: 10.1016/j.jmsy.2024.10.004
Hassaan Ahmad, Wei Cheng, Ji Xing, Wentao Wang, Shuhong Du, Linying Li, Rongyong Zhang, Xuefeng Chen, Jinqi Lu
Planetary gearboxes are popular in many industrial applications due to their compactness and high transmission ratios. With recent developments in machine learning, Deep Learning-based Fault Diagnosis (DLFD) has become the preferred approach over traditional signal processing methods, physics-based models, and shallow machine learning techniques. This paper presents a systematic review that identifies key research questions concerning fault types, datasets used, challenges addressed, and the approaches applied to those challenges, and that compares methods by diagnosis accuracy, computational load, and model complexity. The review highlights that researchers have focused on several challenges, including fault diagnosis under varying operating conditions, imbalanced data, noisy data, limited labeled fault samples, and zero faulty samples. To address these issues, various methods have been proposed in the literature, such as incorporating signal processing, data augmentation, transfer learning using domain adaptation, adversarial learning, and integrating physics-based models. Enhancing the industrial applicability of DLFD methods requires validating them under multi-problem scenarios, improving transfer learning accuracy for cross-machine fault diagnosis, enhancing interpretability and trust, optimizing for lightweight implementation, and utilizing industrial datasets. Addressing these areas will enable DLFD methods to achieve greater reliability and wider adoption in industrial maintenance practices.
{"title":"Deep learning-based fault diagnosis of planetary gearbox: A systematic review","authors":"Hassaan Ahmad , Wei Cheng , Ji Xing , Wentao Wang , Shuhong Du , Linying Li , Rongyong Zhang , Xuefeng Chen , Jinqi Lu","doi":"10.1016/j.jmsy.2024.10.004","DOIUrl":"10.1016/j.jmsy.2024.10.004","url":null,"abstract":"<div><div>Planetary gearboxes are popular in many industrial applications due to their compactness and higher transmission ratios. With recent developments in the area of machine learning, Deep Learning-based Fault Diagnosis (DLFD) has become the preferred approach over traditional signal processing methods, physics-based models, and shallow machine learning techniques. This paper presents a systematic review that identifies key research questions for fault types, datasets used, challenges addressed, approaches applied to address the challenges and comparison of the methods using diagnosis accuracies, computation load, and model complexity. The review highlights that the researchers have focused on several challenges, including fault diagnosis under varying operating conditions, imbalanced data, noisy data, limited labeled fault samples, and zero faulty samples. To address these issues various methods have been proposed in the literature, such as incorporating signal processing, data augmentation, transfer learning using domain adaptation, adversarial learning, and integrating physics-based models. Enhancing the industrial applicability of DLFD methods requires validating these methods under multi-problem scenarios, improving transfer learning accuracy for cross-machine fault diagnosis, enhancing interpretability and trust, optimizing for lightweight implementation, and utilizing industrial datasets. 
Addressing these areas will enable DLFD methods to achieve greater reliability and wider adoption in industrial maintenance practices.</div></div>","PeriodicalId":16227,"journal":{"name":"Journal of Manufacturing Systems","volume":"77 ","pages":"Pages 730-745"},"PeriodicalIF":12.2,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142537578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-30  DOI: 10.1016/j.jmsy.2024.10.022
Foivos Psarommatis, Victor Azamfirei
Without product quality, companies cannot survive in today’s competitive and regulated environment. Quality affects not only products, processes, and services, but also a company’s true sustainability capability across its economic, social, and environmental dimensions. Different Quality Management (QM) paradigms and approaches have been used to plan, assure, control, and improve production processes and product quality. Nevertheless, most such paradigms were conceived before major technological advancements and thus rely heavily on processes and on people’s knowledge. New paradigms such as Digital Lean, Quality 4.0, and Zero-Defect Manufacturing (ZDM) challenge such views and incorporate emerging technologies under the QM umbrella. Through a literature review, this paper analyses the different QM approaches and combines the best practices of past and present to support sustainable manufacturing. This paper’s findings include (i) a methodological conceptualization of different QM approaches, (ii) an identification of shortcomings, (iii) an analysis of the domain of application, (iv) a proposal for a conceptual framework, and (v) proposals for future work on aligning these theoretical findings with empirical results.
{"title":"Zero Defect Manufacturing: A complete guide for advanced and sustainable quality management","authors":"Foivos Psarommatis , Victor Azamfirei","doi":"10.1016/j.jmsy.2024.10.022","DOIUrl":"10.1016/j.jmsy.2024.10.022","url":null,"abstract":"<div><div>Without product quality, companies cannot survive in today’s competitive and regulated environment. Quality affects not only the product, process, and services, but also the true sustainable capability of a company, being economic, social, and environmental. Different Quality Management (QM) paradigms and approaches have been used to plan, assure, control, and improve production processes and product quality. Nevertheless, most such paradigms were conceived before major technological advancements, thus relying heavily on processes and on people’s knowledge. New paradigms such as Digital Lean, Quality 4.0, and Zero-Defect Manufacturing (ZDM), challenge such views and incorporate emerging technologies into the QM umbrella. Through a literature review, this paper analyses the different QM approaches and combines all the best practices of past and present to support sustainable manufacturing. 
This paper’s findings include (i) a methodological conceptualization of different QM approaches, (ii) an identification of shortcomings, (iii) analysis of the domain of application, (iv) a proposal for a conceptual framework, and (v) proposals for future work consisting of aligning such theoretical findings with empirical results.</div></div>","PeriodicalId":16227,"journal":{"name":"Journal of Manufacturing Systems","volume":"77 ","pages":"Pages 764-779"},"PeriodicalIF":12.2,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142554198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-29  DOI: 10.1016/j.jmsy.2024.10.009
Yifan Zhang, Qiang Zhang, Ye Hu, Qing Wang, Liang Cheng, Yinglin Ke
This paper introduces a framework for surrogating aircraft structural deformation using simulation data. The framework compresses high-dimensional field data into embeddings via Principal Component Analysis (PCA) and advanced deep learning methods. It establishes a mapping from discretized control points to these embeddings, enabling complete surrogation from the parameter space to structural deformation. The approach facilitates simultaneous surrogation of both displacement and stress fields, providing a robust evaluation metric for assessing assembly quality. Furthermore, the performance of the proposed PCA and deep learning-based surrogation methods is evaluated using multiple metrics. Results demonstrate that the proposed Conditional Convolutional Autoencoders, enhanced by Triplet attention (C2AE-Tri), achieve higher accuracy and over 60 % data reduction compared to the PCA baseline. This improvement highlights the framework's scalability and utility, particularly when data acquisition is challenging or costly.
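The PCA side of such a surrogate can be sketched as follows, with synthetic data standing in for finite-element simulation results and a plain least-squares map replacing the paper's conditional autoencoder. The dimensions, noise level, and linear ground truth are arbitrary assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_params, n_field = 200, 6, 500

# Synthetic stand-ins: control-point parameters and high-dimensional
# deformation fields (here a hidden linear map plus noise; real fields
# would come from finite-element simulation).
X = rng.normal(size=(n_samples, n_params))
W_true = rng.normal(size=(n_params, n_field))
Y = X @ W_true + 0.01 * rng.normal(size=(n_samples, n_field))

# 1) Compress the fields to k-dimensional embeddings with PCA via SVD.
k = n_params
X_mean, Y_mean = X.mean(axis=0), Y.mean(axis=0)
U, S, Vt = np.linalg.svd(Y - Y_mean, full_matrices=False)
Z = (Y - Y_mean) @ Vt[:k].T                  # PCA embeddings

# 2) Map parameters -> embeddings (least squares here replaces the
#    paper's deep conditional autoencoder mapping).
A, *_ = np.linalg.lstsq(X - X_mean, Z, rcond=None)

# 3) Predict a full field for unseen parameters; check relative error.
x_new = rng.normal(size=(1, n_params))
y_pred = (x_new - X_mean) @ A @ Vt[:k] + Y_mean
y_ref = x_new @ W_true
rel_err = np.linalg.norm(y_pred - y_ref) / np.linalg.norm(y_ref)
```

The sketch shows the data-reduction idea: the 500-dimensional field is handled through a 6-dimensional embedding, and new fields are reconstructed by chaining the parameter-to-embedding map with the PCA basis.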
{"title":"A surrogate modeling framework for aircraft assembly deformation using triplet attention-enhanced conditional autoencoder","authors":"Yifan Zhang , Qiang Zhang , Ye Hu , Qing Wang , Liang Cheng , Yinglin Ke","doi":"10.1016/j.jmsy.2024.10.009","DOIUrl":"10.1016/j.jmsy.2024.10.009","url":null,"abstract":"<div><div>This paper introduces a framework for surrogating aircraft structural deformation using simulation data. The framework compresses high-dimensional field data into embeddings via Principal Component Analysis (PCA) and advanced deep learning methods. It establishes a mapping from discretized control points to these embeddings, enabling complete surrogation from the parameter space to structural deformation. The approach facilitates simultaneous surrogation of both displacement and stress fields, providing a robust evaluation metric for assessing assembly quality. Furthermore, the performance of the proposed PCA and deep learning-based surrogation methods is evaluated using multiple metrics. Results demonstrate that the proposed Conditional Convolutional Autoencoders, enhanced by Triplet attention (C2AE-Tri), achieve higher accuracy and over 60 % data reduction compared to the PCA baseline. This improvement highlights the framework's scalability and utility, particularly when data acquisition is challenging or costly.</div></div>","PeriodicalId":16227,"journal":{"name":"Journal of Manufacturing Systems","volume":"77 ","pages":"Pages 708-729"},"PeriodicalIF":12.2,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142535595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-29  DOI: 10.1016/j.jmsy.2024.10.017
Xuehao Sun, Fengli Zhang, Xiaotong Niu, Jinjiang Wang
Commissioning machine tools before machining is crucial for improving efficiency and performance. Current virtual commissioning technologies have limitations, such as detachment from real operation scenarios, which can reduce commissioning effectiveness. This paper presents a digital twin commissioning method for machine tools based on scenario simulation. The method takes machining conditions into account to build virtual machining scenarios and carries out virtual commissioning based on a twin model. First, the digital twin model of the machine tool is constructed using a unified multi-domain modelling language to ensure consistency in the response to machining conditions, the control behaviour, and the mapping between real and virtual parameter changes. Second, a machining scenario simulation strategy is formulated and a decoupling analysis of the machining process is carried out to achieve a parametric representation of the working conditions and a simulation of the machining loads. Finally, parameter adjustment and optimization are investigated under variable machining conditions and variable parameters. The experimental results demonstrate that the proposed method shortens the commissioning time of the spindle machining system, decreases the response time by approximately 12 %, and reduces the steady-state error by about 52 %. These findings confirm the effectiveness of the proposed method and its feasibility for field application.
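How response time and steady-state error might be measured during such commissioning can be sketched with a toy first-order step response; the time constants and gains below are invented for illustration and are not the paper's spindle model.

```python
import math

def step_metrics(tau, gain, setpoint=1.0, dt=0.001, horizon=5.0):
    # First-order step response y(t) = gain * setpoint * (1 - exp(-t/tau)),
    # a toy stand-in for the spindle system's commissioning response.
    # Response time here: first instant y reaches 90 % of the setpoint.
    t, response_time = 0.0, None
    while t < horizon:
        y = gain * setpoint * (1.0 - math.exp(-t / tau))
        if response_time is None and y >= 0.9 * setpoint:
            response_time = t
        t += dt
    steady_state_error = abs(setpoint - gain * setpoint)
    return response_time, steady_state_error

# "Before" vs. "after" parameter optimization (invented numbers).
rt_before, sse_before = step_metrics(tau=0.50, gain=0.95)
rt_after, sse_after = step_metrics(tau=0.46, gain=0.976)
rt_drop = (rt_before - rt_after) / rt_before
sse_drop = (sse_before - sse_after) / sse_before
```

With these toy parameters both metrics improve after "optimization", mirroring the kind of before/after comparison reported in the abstract.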
{"title":"A digital twin commissioning method for machine tools based on scenario simulation","authors":"Xuehao Sun, Fengli Zhang, Xiaotong Niu, Jinjiang Wang","doi":"10.1016/j.jmsy.2024.10.017","DOIUrl":"10.1016/j.jmsy.2024.10.017","url":null,"abstract":"<div><div>Commissioning machine tools before machining is crucial for improving efficiency and performance. Current virtual commissioning technologies have limitations, such as detachment from operation scenarios, which can reduce commissioning effect. This paper presents a digital twin commissioning method for machine tools based on scenario simulation. The method takes into account the machining conditions to build virtual machining scenarios and carries out virtual machining commissioning based on a twin model. The digital twin model of the machine tool is constructed using the unified multi-domain modelling language to ensure consistent response to machining conditions, control effect, and mapping effect of real and virtual parameter changes. Secondly, the machining scenario simulation strategy is formulated and the decoupling analysis for the machining process is carried out to achieve the parametric representation of the working conditions and the simulation of the machining loads. Finally, the parameter adjustment and optimization are investigated under variable machining conditions and variable parameters. The experimental results demonstrate that the proposed method reduces the commissioning time of the spindle machining system of machine tools, decreases the response time by approximately 12 %, and reduces the steady-state error by about 52 %. 
These findings confirm the effectiveness of the proposed method and its feasibility for field application.</div></div>","PeriodicalId":16227,"journal":{"name":"Journal of Manufacturing Systems","volume":"77 ","pages":"Pages 697-707"},"PeriodicalIF":12.2,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142535589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-28  DOI: 10.1016/j.jmsy.2024.10.011
Dachuan Shi, Philipp Liedl, Thomas Bauernhansl
In the era of Industry 4.0, Zero Defect Manufacturing (ZDM) has emerged as a prominent strategy for quality improvement, emphasizing data-driven approaches for defect prediction, prevention, and mitigation. The success of ZDM depends heavily on the availability and quality of data, typically collected from diverse and heterogeneous sources during production and quality control, which presents challenges in data interoperability. Addressing this, we introduce a novel approach leveraging the Asset Administration Shell (AAS) and Large Language Models (LLMs) to create interoperable information models that incorporate semantic contextual information and enhance the interoperability of data integration in the quality control process. The AAS, initiated by German industry stakeholders, represents a significant advancement in information modeling, blending ontology and digital twin concepts for the virtual representation of assets. In this work, we develop a systematic, use-case-driven methodology for AAS-based information modeling. The methodology guides the design and implementation of AAS models, ensuring that model properties are presented in a unified structure and reference external standardized vocabularies to maintain consistency across different systems. To automate this referencing process, we propose a novel LLM-based algorithm that semantically searches model properties within a standardized vocabulary repository, significantly reducing manual intervention in model development. A case study in the injection molding domain demonstrates the practical application of our approach, showcasing the integration and linking of product quality and machine process data with the help of the developed AAS models. Statistical evaluation of our LLM-based semantic search algorithm confirms its efficacy in enhancing data interoperability.
This methodology offers a scalable and adaptable solution for various industrial use cases, promoting widespread data interoperability in the context of Industry 4.0.
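The LLM-based semantic search over a standardized vocabulary can be approximated in miniature: below, a character-trigram embedding stands in for the LLM embedding model, and the vocabulary entries are hypothetical, not actual standardized (e.g. ECLASS/IEC) identifiers.

```python
import math
from collections import Counter

def embed(text):
    # Character-trigram bag: a toy stand-in for an LLM embedding model.
    padded = f"  {text.lower()}  "
    return Counter(padded[i:i + 3] for i in range(len(padded) - 2))

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_match(prop_name, vocabulary):
    # Rank standardized vocabulary entries by similarity to an AAS model
    # property name; return the best (score, term) candidate.
    q = embed(prop_name)
    return max((cosine(q, embed(term)), term) for term in vocabulary)

# Hypothetical standardized vocabulary entries (illustrative only).
vocab = ["injection pressure", "melt temperature", "clamping force", "cycle time"]
score, term = semantic_match("InjectionPressure", vocab)
```

Despite the camel-cased property name, the trigram overlap is enough to retrieve the matching vocabulary entry, which is the behavior the LLM-based search automates at scale.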
{"title":"Interoperable information modelling leveraging asset administration shell and large language model for quality control toward zero defect manufacturing","authors":"Dachuan Shi , Philipp Liedl , Thomas Bauernhansl","doi":"10.1016/j.jmsy.2024.10.011","DOIUrl":"10.1016/j.jmsy.2024.10.011","url":null,"abstract":"<div><div>In the era of Industry 4.0, Zero Defect Manufacturing (ZDM) has emerged as a prominent strategy for quality improvement, emphasizing data-driven approaches for defect prediction, prevention, and mitigation. The success of ZDM heavily depends on the availability and quality of data typically collected from diverse and heterogeneous sources during production and quality control, presenting challenges in data interoperability. Addressing this, we introduce a novel approach leveraging Asset Administration Shell (AAS) and Large Language Models (LLMs) for creating interoperable information models that incorporate semantic contextual information to enhance the interoperability of data integration in the quality control process. AAS, initiated by German industry stakeholders, shows a significant advancement in information modeling, blending ontology and digital twin concepts for the virtual representation of assets. In this work, we develop a systematic, use-case-driven methodology for AAS-based information modeling. This methodology guides the design and implementation of AAS models, ensuring model properties are presented in a unified structure and reference external standardized vocabularies to maintain consistency across different systems. To automate this referencing process, we propose a novel LLM-based algorithm to semantically search model properties within a standardized vocabulary repository. This algorithm significantly reduces manual intervention in model development. 
A case study in the injection molding domain demonstrates the practical application of our approach, showcasing the integration and linking of product quality and machine process data with the help of the developed AAS models. Statistical evaluation of our LLM-based semantic search algorithm confirms its efficacy in enhancing data interoperability. This methodology offers a scalable and adaptable solution for various industrial use cases, promoting widespread data interoperability in the context of Industry 4.0.</div></div>","PeriodicalId":16227,"journal":{"name":"Journal of Manufacturing Systems","volume":"77 ","pages":"Pages 678-696"},"PeriodicalIF":12.2,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142536176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-26  DOI: 10.1016/j.jmsy.2024.10.005
Yanshan Gao, Ying Cheng, Lei Wang, Fei Tao, Qing-Guo Wang, Jing Liu
Enhancing the capacity utilization of manufacturing resources is of utmost importance in tackling the current challenge of meeting customized, small-batch market demands. As research highlights platform-based manufacturing service collaboration (MSC) for offering high-quality service solutions, efficient service scheduling strategies are urgently needed to maximize overall utility amidst great computational complexity and unpredictable task arrivals. To address this issue, this paper proposes a novel distributed online task dispatch and service scheduling (DOTDSS) strategy for platform-aggregated MSC. What sets our method apart is its goal of optimizing long-term average utility while considering the queuing dynamics of manufacturing services in multi-task processing, thereby maintaining sustainable platform operations. First, we jointly incorporate task dispatch and service scheduling decisions into the formulation of a quality-of-service (QoS) aware stochastic optimization problem. The newly constructed logarithmic utility function effectively strikes a trade-off between the throughput and capacity utilization of manufacturing services with diverse capabilities. By incorporating the goal of reducing queue lengths, we then transform the optimization problem, using Lyapunov optimization, into a form with lower computational complexity and guaranteed optimality. We further propose a DOTDSS strategy that relies solely on the current system state and queue information to generate scalable MSC solutions. It does not need to predict task arrival statistics in advance, and it adapts well to uncertainties in task arrivals and service availabilities. Finally, numerical results based on simulation data and real workload traces demonstrate the effectiveness of our method. They also show that the aggregation collaboration pattern among a group of candidates achieves better performance than the optimal candidate alone.
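The drift-plus-penalty flavor of Lyapunov-based online scheduling can be sketched as follows; the capacities, arrival process, dispatch score, and control parameter `V` are illustrative stand-ins, not the DOTDSS formulation itself.

```python
import math
import random

def dotdss_sketch(T=2000, V=10.0, seed=1):
    # Minimal drift-plus-penalty sketch: each arriving task is dispatched
    # to the service minimizing (queue backlog) - V * (marginal gain in a
    # logarithmic throughput utility). No arrival statistics are needed:
    # decisions use only current queues and cumulative throughput.
    rng = random.Random(seed)
    mu = [1.0, 2.0, 3.0]                  # per-slot service capacities
    Q = [0.0, 0.0, 0.0]                   # task queues (Lyapunov state)
    served = [1e-9, 1e-9, 1e-9]           # cumulative throughput
    for _ in range(T):
        for _ in range(rng.randint(0, 4)):        # unpredictable arrivals
            i = min(range(3),
                    key=lambda k: Q[k] - V * math.log1p(mu[k] / served[k]))
            Q[i] += 1.0
        for k in range(3):                         # serve up to capacity
            done = min(Q[k], mu[k])
            Q[k] -= done
            served[k] += done
    utility = sum(math.log(s / T) for s in served)
    return Q, utility

queues, utility = dotdss_sketch()
```

Because total capacity exceeds the mean arrival rate, the backlog term keeps the queues bounded while the utility term spreads throughput across services of different capabilities.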
{"title":"Long-term average throughput-utilization utility maximization in platform-aggregated manufacturing service collaboration","authors":"Yanshan Gao , Ying Cheng , Lei Wang , Fei Tao , Qing-Guo Wang , Jing Liu","doi":"10.1016/j.jmsy.2024.10.005","DOIUrl":"10.1016/j.jmsy.2024.10.005","url":null,"abstract":"<div><div>Enhancing capacity utilization of manufacturing resources is of utmost importance in tackling the current challenges of meeting customized and small-batch market demands. Given the research highlights on platform-based manufacturing service collaboration (MSC) offering high-quality service solutions, efficient service scheduling strategies are urgently needed to maximize overall utility amidst great computational complexity and unpredictable task arrivals. To address this issue, this paper proposes a novel distributed online task dispatch and service scheduling (DOTDSS) strategy in platform-aggregated MSC. What sets our method apart is its goal to optimize a long-term average utility performance with considering queuing dynamics of manufacturing services in multi-task processing, thereby maintaining sustainable platform operations. Firstly, we jointly consider task dispatch and service scheduling decisions into the formulation of a quality-of-service aware (QoS) stochastics optimization problem. The newly constructed logarithmic utility function effectively strikes a trade-off between the throughput and capacity utilization of manufacturing services with diverse capabilities. By incorporating the goal of reducing queue lengths, we then transform the optimization problem into a form with less computational complexity and guaranteed optimality using Lyapunov optimization. We further propose a DOTDSS strategy that relies solely on the current system state and queue information to generate scalable MSC solutions. 
It does not need to predict task arrival statistics in advance, and it exhibits great adaptability to uncertainties in task arrivals and service availabilities. Finally, numerical results based on simulation data and real workload traces demonstrate the effectiveness of our method. It also shows that the aggregation collaboration pattern among a group of candidates can achieve better performance than that by the optimal candidate alone.</div></div>","PeriodicalId":16227,"journal":{"name":"Journal of Manufacturing Systems","volume":"77 ","pages":"Pages 662-677"},"PeriodicalIF":12.2,"publicationDate":"2024-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142535594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, intelligent fault diagnosis (IFD) based on Artificial Intelligence (AI) has gained significant attention and achieved remarkable breakthroughs. However, the black-box property of AI-enabled IFD may render it non-interpretable, and interpretability is essential for safety-critical industrial assets. In this paper, we propose a fully interpretable IFD approach that incorporates expert knowledge using neuro-symbolic AI. The proposed approach, named Deep Expert Network, defines neuro-symbolic nodes, including signal processing operators, statistical operators, and logical operators, to establish a clear semantic space for the network. All operators are connected with trainable weights that determine the connections. End-to-end, gradient-based learning is utilized to optimize both the model structure weights and parameters to fit the fault signal and obtain a fully interpretable decision route. The transparency of the model, its generalization to unseen working conditions, and its robustness to noise attacks are demonstrated through a case study of rotating machinery, paving the way for future industrial applications.
{"title":"Deep expert network: A unified method toward knowledge-informed fault diagnosis via fully interpretable neuro-symbolic AI","authors":"Qi Li, Yuekai Liu, Shilin Sun, Zhaoye Qin, Fulei Chu","doi":"10.1016/j.jmsy.2024.10.007","DOIUrl":"10.1016/j.jmsy.2024.10.007","url":null,"abstract":"<div><div>In recent years, intelligent fault diagnosis (IFD) based on Artificial Intelligence (AI) has gained significant attention and achieved remarkable breakthroughs. However, the black-box property of AI-enabled IFD may render it non-interpretable, which is essential for safety-critical industrial assets. In this paper, we propose a fully interpretable IFD approach that incorporates expert knowledge using neuro-symbolic AI. The proposed approach, named Deep Expert Network, defines neuro-symbolic node, including signal processing operators, statistical operators, and logical operators to establish a clear semantic space for the network. All operators are connected with trainable weights that decide the connections. End-to-end and gradient-based learning are utilized to optimize both the model structure weights and parameters to fit the fault signal and obtain a fully interpretable decision route. 
The transparency of the model, its generalization to unseen working conditions, and its robustness to noise attacks are demonstrated through a case study of rotating machinery, paving the way for future industrial applications.</div></div>","PeriodicalId":16227,"journal":{"name":"Journal of Manufacturing Systems","volume":"77 ","pages":"Pages 652-661"},"PeriodicalIF":12.2,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142535588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
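The core mechanism in the abstract above, semantically meaningful operators connected by trainable weights that decide which operator fires, can be illustrated with a minimal sketch. The operator set (`mean`, `rms`, `peak`) and the softmax-based soft selection are illustrative assumptions, not the actual Deep Expert Network architecture.

```python
import math

# A tiny library of statistical operators with clear semantics.
# The specific operators chosen here are illustrative assumptions.
OPERATORS = {
    "mean": lambda x: sum(x) / len(x),
    "rms":  lambda x: math.sqrt(sum(v * v for v in x) / len(x)),
    "peak": lambda x: max(abs(v) for v in x),
}

def soft_select(signal, logits):
    """Softmax-weighted mixture over named operators.

    The logits play the role of trainable structure weights: after training,
    the dominant weight names the operator, yielding a readable decision route.
    """
    names = list(OPERATORS)
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    weights = {n: e / z for n, e in zip(names, exps)}
    # Forward pass: weighted sum of every operator's output.
    value = sum(weights[n] * OPERATORS[n](signal) for n in names)
    # Interpretable route: the operator carrying the most weight.
    route = max(weights, key=weights.get)
    return value, route
```

In a gradient-trained version, the logits would be learned end-to-end; here they are passed in directly so the selection behavior is easy to inspect.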
Pub Date : 2024-10-23DOI: 10.1016/j.jmsy.2024.09.016
Zhenrong Wang, Weifeng Li, Miao Wang, Baohui Liu, Tongzhi Niu, Bin Li
Deep learning-based surface defect detection methods have achieved good performance. However, customizing architectures for specific tasks is a complex and laborious process. Neural architecture search (NAS) offers a promising data-driven adaptive design approach. Yet, deploying NAS in industrial applications presents challenges due to its reliance on the supervised learning paradigm. Hence, we propose a mixed semi-supervised adaptive network for commutator surface defect detection, even with limited labeled samples. In the proposed framework, we employ a multi-branch network with complementary perturbation flows, leveraging consistency regularization, pseudo-labeling, and contrastive learning. First, a confidence-guided directional consistency regularization strategy aligns features in high-quality directions. Second, confidence-aware hybrid pseudo-labeling improves pseudo-supervision quality. Finally, foreground/background contrast awareness encourages the model to identify defect regions more sensitively. The detection backbone is generated in a data-driven manner through a neural architecture search process, replacing manual design strategies. Experimental results show that our method automatically generates optimal commutator detection networks using limited labels, outperforming existing state-of-the-art methods. Our work paves the way for adaptive defect detection networks with limited labels and can extend to surface defect detection in various production lines.
{"title":"Semi-supervised adaptive network for commutator defect detection with limited labels","authors":"Zhenrong Wang, Weifeng Li, Miao Wang, Baohui Liu, Tongzhi Niu, Bin Li","doi":"10.1016/j.jmsy.2024.09.016","DOIUrl":"10.1016/j.jmsy.2024.09.016","url":null,"abstract":"<div><div>Deep learning-based surface defect detection methods have obtained good performance. However, customizing architectures for specific tasks is a complex and laborious process. Neural architecture search (NAS) offers a promising data-driven adaptive design approach. Yet, deploying NAS in industrial applications presents challenges due to its reliance on supervised learning paradigm. Hence, we propose a mixed semi-supervised adaptive network for commutator surface defect detection, even with limited labeled samples. In the proposed framework, we employ a multi-branch network with complementary perturbation flows, leveraging consistency regularization, pseudo-labeling, and contrastive learning. First, a confidence-guided directional consistency regularization strategy aligns features in high-quality directions. Second, confidence-aware hybrid pseudo-labeling improves the pseudo-supervision quality. Finally, foreground/background contrast awareness encourages the model to more sensitively identify defect regions. The detection backbone is data-driven generated through a neural architecture search process, replacing manual design strategies. Experimental results show our method automatically generates optimal commutator detection networks using limited labels, outperforming existing state-of-the-art methods. 
Our work paves the way for adaptive defect detection networks with limited labels and can extend to surface defect detection in various production lines.</div></div>","PeriodicalId":16227,"journal":{"name":"Journal of Manufacturing Systems","volume":"77 ","pages":"Pages 639-651"},"PeriodicalIF":12.2,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142536178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
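The confidence-aware pseudo-labeling described in the abstract above can be reduced to a minimal building block: keep a model prediction as a pseudo-label only when its confidence clears a threshold. The function name and the simple fixed-threshold rule are hypothetical simplifications, not the paper's hybrid scheme.

```python
def confidence_filtered_pseudo_labels(probs, threshold=0.9):
    """Return (sample index, pseudo-class) pairs for confident predictions.

    probs     -- per-sample class probability lists from the model
    threshold -- minimum confidence for a prediction to become a pseudo-label
    """
    labels = []
    for i, p in enumerate(probs):
        conf = max(p)
        if conf >= threshold:
            # The argmax class of a confident prediction becomes the
            # pseudo-label used to supervise the unlabeled sample.
            labels.append((i, p.index(conf)))
    return labels
```

Low-confidence samples are simply left unlabeled for that round, which is what keeps pseudo-supervision quality high when only limited ground-truth labels exist.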