Title: A multi-task effectiveness metric and an adaptive co-training method for enhancing learning performance with few samples
Authors: Xiaoyao Wang, Fuzhou Du, Delong Zhao, Chang Liu
Journal of Intelligent Manufacturing | Pub Date: 2024-08-12 | DOI: 10.1007/s10845-024-02475-3

The integration of deep learning (DL) into vision inspection methods is increasingly recognized as a valuable way to substantially enhance their adaptability and robustness. However, high-performance neural networks typically require large training datasets with high-quality manual annotations, which are difficult to obtain in many manufacturing processes. To improve the performance of DL methods on vision tasks with few samples, this paper proposes a novel metric called Effectiveness of Auxiliary Task (EAT) and presents a multi-task learning approach that uses this metric to select effective auxiliary task branches and adaptively co-train them with the main task. Experiments on two few-sample vision tasks show that the proposed approach effectively eliminates ineffective task branches and enhances the contribution of the selected tasks to the main task: it reduces the average normalized pixel error from 0.0613 to 0.0143 in pose key-point detection and raises the Intersection over Union (IoU) from 0.6383 to 0.6921 in surface defect segmentation. Remarkably, these improvements are achieved without additional manual labeling effort.
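For readers wanting to reproduce the two headline metrics, the sketch below shows one common way to compute a normalized key-point pixel error and a segmentation IoU. This is not the paper's code; the array shapes and the choice of the image diagonal as the normalizer are assumptions.

```python
import numpy as np

def normalized_pixel_error(pred, gt, img_w, img_h):
    """Mean key-point error in pixels, normalized by the image diagonal.
    pred, gt: (N, 2) arrays of (x, y) key-point coordinates."""
    diag = np.hypot(img_w, img_h)
    return float(np.mean(np.linalg.norm(pred - gt, axis=1)) / diag)

def iou(mask_pred, mask_gt):
    """Intersection over Union for binary segmentation masks."""
    inter = np.logical_and(mask_pred, mask_gt).sum()
    union = np.logical_or(mask_pred, mask_gt).sum()
    return float(inter / union) if union else 1.0
```

Both metrics are scale-free, which is what makes the 0.0613 → 0.0143 and 0.6383 → 0.6921 comparisons meaningful across images of different sizes.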
Title: Managing product-inherent constraints with artificial intelligence: production control for time constraints in semiconductor manufacturing
Authors: Marvin Carl May, Jan Oberst, Gisela Lanza
Journal of Intelligent Manufacturing | Pub Date: 2024-08-03 | DOI: 10.1007/s10845-024-02472-6

Continuous product individualization and customization have led to the advent of lot size one in production and, ultimately, to product-inherent uniqueness. As the complexity of individualization and processes grows, production systems need to adapt to unique, product-inherent constraints by advancing production control beyond predictive, rigid schedules. While complex processes, production systems, and production constraints are not a novelty per se, modern production control approaches fall short of simultaneously addressing the flexibility of complex job shops and the product-specific constraints imposed on production control. To close this gap, this paper develops a novel, data-driven, artificial-intelligence-based production control approach for complex job shops. Product-inherent constraints are resolved by restricting the solution space of production control according to a prediction-based decision model. The approach is validated in a real semiconductor fab, a job shop in which transitional time constraints act as product-inherent constraints. Not violating these time constraints is essential to avoid scrap and to increase quality-based yield. To that end, transition times are forecast, and adherence to these product-inherent constraints is evaluated using one-sided prediction intervals and point estimators. Including product-inherent constraints leads to significant adherence improvements in the production system, as shown in the real-world semiconductor manufacturing case study, and hence contributes a novel, data-driven approach to production control. In conclusion, the ability to avoid the large majority of time-constraint violations demonstrates the approach's effectiveness and points to the future need to integrate such product-inherent constraints into production control even more accurately.
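The adherence check described above — forecasting a transition time and comparing a one-sided upper prediction bound against the constraint — can be sketched as follows. This is a minimal illustration under a normal-error assumption; the paper's actual estimators and interval construction are not specified in the abstract.

```python
def upper_prediction_bound(point_estimate, residual_std, z=1.645):
    """One-sided upper prediction bound (~95% coverage under a
    normal-error assumption on the forecast residuals)."""
    return point_estimate + z * residual_std

def violates_time_constraint(point_estimate, residual_std, max_transition_time):
    """Flag a lot whose forecast transition time may exceed its constraint,
    so production control can hold or re-route it before starting the step."""
    return upper_prediction_bound(point_estimate, residual_std) > max_transition_time
```

Using the one-sided bound rather than the point estimate trades a few false alarms for far fewer missed violations, which matches the scrap-avoidance goal stated above.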
Title: An adaptive transfer fault detection method for rotary machine with multi-sensor information fusion
Authors: Qibin Wang, Linyang Yu, Liang Hao, Shengkang Yang, Tao Zhou, Wanghui Ji
Journal of Intelligent Manufacturing | Pub Date: 2024-08-01 | DOI: 10.1007/s10845-024-02469-1

Multi-sensor information fusion methods perform well in fault detection for rotary machines, with each sensor's information contributing differently. The contribution of each sensor changes with the machine's working conditions, which can degrade the performance of transfer methods used for cross-domain mechanical fault detection. To solve this problem, an adaptive transfer fault detection method for rotary machines with multi-sensor information fusion is proposed. First, multi-sensor data under different working conditions are collected, and the features of each sensor are extracted by a corresponding deep learning model. Second, a multi-information interaction fusion network is designed to exchange sensor information and obtain fusion features. A fusion-feature transfer model is then proposed for cross-domain fault detection. Finally, the model is trained on the bearing dataset of the University of Paderborn. The results show that the transfer fault detection method with multi-sensor information fusion achieves state-of-the-art performance in cross-domain fault detection and adaptively adjusts the contribution of each sensor's information.
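The multi-information interaction fusion network itself is not specified in the abstract; as a loose illustration of adaptively weighting each sensor's contribution, a softmax-weighted feature fusion might look like the sketch below. All names, shapes, and the use of softmax weights are assumptions, not the paper's design.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of scores."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_sensor_features(features, scores):
    """Weight each sensor's feature vector by a (learned) score and sum.
    features: list of (d,) arrays, one per sensor; scores: per-sensor logits."""
    w = softmax(np.asarray(scores, dtype=float))
    return sum(wi * fi for wi, fi in zip(w, np.asarray(features, dtype=float)))
```

In an adaptive scheme, the scores would be produced by the network per working condition, so the same model can lean on different sensors in different domains.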
Title: Next-generation Vision Inspection Systems: a pipeline from 3D model to ReCo file
Authors: Francesco Lupi, Nelson Freitas, Miguel Arvana, Andre Dionisio Rocha, Antonio Maffei, José Barata, Michele Lanzetta
Journal of Intelligent Manufacturing | Pub Date: 2024-08-01 | DOI: 10.1007/s10845-024-02456-6

This paper proposes and implements a novel pipeline for the self-reconfiguration of a flexible, reconfigurable, CAD-based, and autonomous Vision Inspection System (VIS), expanding upon the modular framework theoretically outlined in Lupi, F., Maffei, A., & Lanzetta, M. (2024), CAD-based Autonomous Vision Inspection Systems, Procedia Computer Science, 232, 2127–2136, https://doi.org/10.1016/J.PROCS.2024.02.033. The pipeline automates the extraction and processing of inspection features manually incorporated by the designer into the Computer Aided Design (CAD) 3D model during the design stage, in accordance with Model Based Design (MBD) principles, which in turn facilitate virtuous approaches such as concurrent engineering and design for X (DfX), ultimately minimizing time to market. The enriched CAD model, containing inspection annotations (textual or dimensional) attached to geometrical entities, serves as the pipeline's input and can be exported in a neutral file format adhering to the Standard for Product Data Exchange (STEP) Application Protocol (AP) 242, regardless of the modeling software used. The pipeline's output is a Reconfiguration (ReCo) file, which enables the flexible hardware (e.g., a robotic inspection cell) and software components of the VIS to be reconfigured programmatically. The main achievements of this work are: (i) demonstrating the feasibility of an end-to-end (CAD-to-ReCo file) pipeline that integrates the proposed software modules via Application Programming Interfaces (APIs), and (ii) formally defining the ReCo file. Experimental results from a demonstrative implementation illustrate the approach: defect detection achieved a 96% true positive rate and a 6% false positive rate, corresponding to an overall accuracy of 94% and a precision of 88% across 72 quality inspection checks covering six inspection features of two product variants, each tested on six samples.
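The reported figures (96% true positive rate, 6% false positive rate, 94% accuracy, 88% precision) are standard binary-classification metrics; a small helper to derive them from raw confusion counts is sketched below. The counts used in the example are illustrative, not the paper's data.

```python
def inspection_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion counts:
    tp/fp/tn/fn = true positives, false positives, true negatives, false negatives."""
    total = tp + fp + tn + fn
    return {
        "tpr": tp / (tp + fn),            # true positive rate (recall)
        "fpr": fp / (fp + tn),            # false positive rate
        "accuracy": (tp + tn) / total,
        "precision": tp / (tp + fp),
    }
```

Note that precision can sit well below the true positive rate when defective samples are rare, which is why the paper reports both.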
Title: Deep-learning based artificial intelligence tool for melt pools and defect segmentation
Authors: Amra Peles, Vincent C. Paquit, Ryan R. Dehoff
Journal of Intelligent Manufacturing | Pub Date: 2024-07-31 | DOI: 10.1007/s10845-024-02457-5

Accelerating the fabrication of additively manufactured components with precise microstructures is important for the quality and qualification of built parts, as well as for a fundamental understanding of process improvement. Accomplishing this requires fast and robust characterization of melt pool geometries and structural defects in images. This paper proposes a pragmatic approach based on deep learning models and a self-consistent workflow that enables systematic segmentation of defects and melt pools in optical images. The deep learning model uses an image-to-image translation conditional generative adversarial network architecture. An artificial intelligence (AI) tool based on this model enables fast and incrementally more accurate predictions of the prevalent geometric features, including melt pool boundaries and printing-induced structural defects. We present a statistical analysis of geometric features enabled by the AI tool, showing a strong spatial correlation between defects and melt pool boundaries. The correlations of melt pool widths and heights with processing parameters show the highest sensitivity to thermal influences from laser passes in adjacent and subsequent layers. The presented models and tools are demonstrated on an aluminum alloy and on datasets produced with different sets of processing parameters; however, they are general and could easily be adapted to different material compositions, and the method readily generalizes to microstructural characterization techniques beyond optical microscopy.
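As a minimal illustration of the kind of statistical analysis described — correlating melt-pool geometry extracted by the segmentation tool with processing parameters — a Pearson correlation can be computed directly. The variable names and the choice of Pearson correlation are assumptions for illustration only.

```python
import numpy as np

def feature_parameter_correlation(widths, parameter):
    """Pearson correlation between a melt-pool geometric feature (e.g.,
    per-sample widths) and a process parameter (e.g., laser power).
    Both inputs are 1-D arrays over the same set of samples."""
    return float(np.corrcoef(widths, parameter)[0, 1])
```

Applied per parameter, such correlations make it possible to rank which processing knobs the melt-pool geometry is most sensitive to.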
Title: Shapley-based explainable AI for clustering applications in fault diagnosis and prognosis
Authors: Joseph Cohen, Xun Huan, Jun Ni
Journal of Intelligent Manufacturing | Pub Date: 2024-07-29 | DOI: 10.1007/s10845-024-02468-2

Data-driven artificial intelligence models require explainability in intelligent manufacturing to streamline adoption and trust in modern industry. However, recently developed explainable artificial intelligence (XAI) techniques that estimate feature contributions at a model-agnostic level, such as SHapley Additive exPlanations (SHAP), have not yet been evaluated on semi-supervised fault diagnosis and prognosis problems characterized by class imbalance and weakly labeled datasets. This paper explores the potential of Shapley values for a new clustering framework compatible with semi-supervised learning problems, loosening the strict supervision requirement of current XAI techniques. The methodology is validated on two case studies: a heatmap image dataset from a semiconductor manufacturing process featuring class imbalance, and the benchmark N-CMAPSS dataset. Semi-supervised clustering based on Shapley values significantly improves clustering quality over the fully unsupervised case, deriving information-dense, meaningful clusters that relate to the underlying fault diagnosis model's predictions. These clusters can also be characterized by high-precision decision rules in terms of original feature values, as demonstrated in the second case study: rules limited to two terms on the original feature scales describe 14 of the 19 derived equipment failure clusters with average precision exceeding 0.85, showcasing the promising utility of the explainable clustering framework for intelligent manufacturing applications.
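The core clustering step — grouping samples in the space of their Shapley values rather than their raw features — can be illustrated with a minimal k-means over precomputed SHAP vectors. This is a sketch, not the paper's framework: the SHAP computation itself and the semi-supervised extension are omitted, and the tiny k-means below stands in for whatever clustering algorithm the authors use.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means, here used to cluster samples in SHAP-value space.
    points: (n, d) array of per-sample Shapley values (assumed precomputed)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each sample to its nearest center, then recompute centers.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        centers = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)])
    return labels, centers
```

Clustering in explanation space groups samples the model treats alike for the same reasons, which is what makes the resulting clusters amenable to short, high-precision decision rules.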
In intelligent manufacturing workshops, the lack of an efficient collaborative mechanism among the various computational resources leads to higher latency, increased costs, and uneven computational load distribution, compromising the responsiveness of intelligent manufacturing services. To address these challenges, this paper introduces an edge-fog-cloud hybrid collaborative computing architecture (EFCHC) that enhances interaction among multi-layer computational resources. A computational task offloading model under EFCHC is then formulated to minimize objectives such as latency and cost. To refine the offloading solution, a novel multi-group parallel evolutionary strategy is proposed, comprising a two-stage pre-allocation scheme and a hyper-heuristic evolutionary operator for effective solution identification. In multi-objective benchmark experiments, the proposed algorithm substantially outperforms comparative algorithms in accuracy, convergence, and stability. In simulated workshop scenarios, the proposed offloading strategy reduces total computational latency and cost by 17.81% and 21.89%, respectively, and improves load-balancing efficiency by up to 52.50% compared with six typical benchmark algorithms and architectures.
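As a rough illustration of the latency-and-cost objective such an offloading model minimizes, the sketch below scores a task-to-tier assignment with a simple transmission-plus-compute latency model. The tier parameters, units, and pricing model are assumptions for illustration, not taken from the paper.

```python
def task_latency(data_mb, cycles_g, bandwidth_mbps, cpu_ghz):
    """Latency of one task on a tier: transmission time (data upload)
    plus compute time (gigacycles / clock rate). Units are illustrative."""
    return data_mb * 8 / bandwidth_mbps + cycles_g / cpu_ghz

def plan_cost(assignments, tiers):
    """Total latency and monetary cost of a candidate offloading plan.
    assignments: list of (task, tier_name); tiers: dict of tier specs."""
    total_latency = total_cost = 0.0
    for task, name in assignments:
        t = tiers[name]
        total_latency += task_latency(
            task["data_mb"], task["cycles_g"], t["bandwidth_mbps"], t["cpu_ghz"])
        total_cost += t["price_per_g"] * task["cycles_g"]
    return total_latency, total_cost
```

An evolutionary search like the one described above would mutate the tier assignments and keep plans that dominate on these (and load-balance) objectives.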