Pub Date: 2024-09-17 | DOI: 10.1016/j.compind.2024.104185
Xiaodong Cheng , Yuanqiao Wen , Zhongyi Sui , Liang Huang , He Lin
Electronic navigational charts are crucial carriers for representing the multi-source, heterogeneous data of Waterborne Traffic Elements (WTEs). However, their layer-based modelling method falls short in expressing the multi-granularity features, complex relationships, and dynamic evolution of elements. This paper proposes an objectification modelling method for WTEs based on the concept of multi-granularity spatiotemporal object modelling. A classification system for waterborne traffic objects is developed based on the relevance of behavior to elements; drawing on the characteristics of waterborne traffic, a data model for waterborne traffic objects is constructed from eight aspects: spatiotemporal reference, spatiotemporal position, spatial form, basic information, attributes, behavioral ability, structure, and associative relationships. An object extraction function is also established to extract object attributes and inter-object relationships according to different element classes. Taking the Jiashan section of the Hangzhou-Shanghai Line in Zhejiang Province as the experimental subject, the multi-granularity spatiotemporal characteristics, dynamic evolution, and relationship expression of channel-class objects are tested. The experimental results show that the proposed method provides a theoretical basis and data organization mode for the multi-granularity expression of WTEs.
Title: Multi-granularity spatiotemporal object modelling of waterborne traffic elements — Computers in Industry, vol. 164, Article 104185.
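The eight-aspect data model described above lends itself to an object-style representation. A minimal sketch in Python — the field names follow the abstract, but the types and example values are illustrative assumptions, not the authors' schema:

```python
from dataclasses import dataclass, field

@dataclass
class WaterborneTrafficObject:
    # The eight aspects named in the abstract; types are illustrative only.
    spatiotemporal_reference: str              # e.g. "WGS84/UTC"
    spatiotemporal_position: tuple             # (lon, lat, timestamp)
    spatial_form: str                          # e.g. "polyline", "polygon"
    basic_information: dict = field(default_factory=dict)
    attributes: dict = field(default_factory=dict)
    behavioral_ability: list = field(default_factory=list)
    structure: list = field(default_factory=list)   # ids of component objects
    associative_relationships: dict = field(default_factory=dict)

# A hypothetical channel-class object for the Jiashan section.
channel = WaterborneTrafficObject(
    spatiotemporal_reference="WGS84/UTC",
    spatiotemporal_position=(120.92, 30.84, "2024-01-01T00:00:00Z"),
    spatial_form="polygon",
    basic_information={"name": "Hangzhou-Shanghai Line, Jiashan section"},
    structure=["buoy_01", "berth_07"],
)
print(len(channel.structure))  # 2
```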
Pub Date: 2024-09-17 | DOI: 10.1016/j.compind.2024.104186
Vito Giordano , Gualtiero Fantoni
Industry 4.0 has led to a huge increase in data coming from machine maintenance. At the same time, advances in Natural Language Processing (NLP) and Large Language Models provide new ways to analyse these data. In our research, we use NLP to analyse maintenance work orders, specifically the descriptions of failures and the corresponding repair actions. Many NLP studies have focused on failure descriptions, categorising them, extracting specific information about failures, or supporting failure analysis methodologies (such as FMEA). In contrast, the analysis of repair actions and their relationship with failures remains underexplored. Addressing this gap, our study makes three contributions. Firstly, it focuses on the Italian language, which presents additional challenges given the dominance of NLP systems designed mainly for English. Secondly, it proposes a method for automatically subdividing a repair action into a set of sub-tasks. Lastly, it introduces an approach that employs association rule mining to recommend sub-tasks to maintainers when addressing failures. We tested our approach with a case study from an automotive company in Italy. The case study provides insights into the current barriers faced by NLP applications in maintenance, offering a glimpse into the future opportunities for smart maintenance systems.
Title: Decomposing maintenance actions into sub-tasks using natural language processing: A case study in an Italian automotive company — Computers in Industry, vol. 164, Article 104186 (open access).
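The recommendation idea — mining association rules of the form failure → sub-task from past work orders — can be illustrated with a toy example. The work orders, labels, and confidence threshold below are invented for illustration; the paper's rule mining is more elaborate:

```python
from collections import Counter

# Toy work orders: each pairs a failure label with the sub-tasks performed.
orders = [
    ("motor overheating", {"stop line", "check fan", "replace fan"}),
    ("motor overheating", {"stop line", "check fan", "clean filter"}),
    ("belt misalignment", {"stop line", "realign belt"}),
]

def recommend(failure, min_conf=0.6):
    """Return sub-tasks whose rule failure -> sub-task meets min confidence."""
    matching = [tasks for f, tasks in orders if f == failure]
    counts = Counter(t for tasks in matching for t in tasks)
    n = len(matching)
    # Confidence of (failure -> task) = co-occurrence count / occurrences of failure.
    return [t for t, c in counts.items() if c / n >= min_conf]

print(sorted(recommend("motor overheating")))  # ['check fan', 'stop line']
```

With two matching orders, only sub-tasks appearing in both reach confidence ≥ 0.6, so incidental tasks like "replace fan" are filtered out.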
Pub Date: 2024-09-16 | DOI: 10.1016/j.compind.2024.104180
Haodong Li , Xingwei Wang , Peng Cao , Ying Li , Bo Yi , Min Huang
Industrial equipment condition monitoring and fault detection are crucial to ensure the reliability of industrial production. Recently, data-driven fault detection methods have achieved significant success, but they face challenges from data fragmentation and limited fault detection capabilities. Although centralized data collection can improve detection accuracy, the conflicting interests raised by data privacy issues make data sharing between different devices impractical, creating industrial data silos. To address these challenges, this paper proposes a class prototype guided personalized lightweight federated learning framework (FedCPG). The framework decouples the local network, uploading only the backbone model to the server for aggregation while using the head model for local personalized updates, thereby achieving efficient model aggregation. Furthermore, the framework incorporates prototype constraints to steer the local personalized update process, mitigating the effects of data heterogeneity. Finally, a lightweight feature extraction network is designed to reduce communication overhead. Multiple complex industrial data distribution scenarios were simulated on two benchmark industrial datasets. Extensive experiments demonstrate that FedCPG achieves an average detection accuracy of 95% in complex industrial scenarios while reducing memory usage and the number of parameters by 82%, surpassing existing methods in most average metrics. These findings offer novel perspectives on the application of personalized federated learning in industrial fault detection.
Title: FedCPG: A class prototype guided personalized lightweight federated learning framework for cross-factory fault detection — Computers in Industry, vol. 164, Article 104180.
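The aggregation scheme the abstract describes — clients upload only the backbone for server-side averaging while heads stay local — can be sketched with NumPy. The array shapes, sample-count weighting, and client data are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Each client holds a decoupled model: a shared backbone and a private head.
# Only backbones are uploaded; heads never leave the factory.
clients = [
    {"backbone": np.array([1.0, 2.0]), "head": np.array([0.1]), "n": 100},
    {"backbone": np.array([3.0, 4.0]), "head": np.array([0.9]), "n": 300},
]

def aggregate_backbones(clients):
    """FedAvg-style aggregation of backbone weights, weighted by sample count."""
    total = sum(c["n"] for c in clients)
    return sum(c["n"] / total * c["backbone"] for c in clients)

global_backbone = aggregate_backbones(clients)
print(global_backbone)  # [2.5 3.5]
```

The server never sees the head parameters, which is what allows each factory to keep a personalized classifier on top of the shared representation.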
Pub Date: 2024-09-12 | DOI: 10.1016/j.compind.2024.104167
Chuanxiao Li , Wenqiang Li , Hai Xiang , Yida Hong
A patent map is widely used in technical information mining, supporting tasks such as detecting patent vacuums and predicting technical trends. However, existing patent map construction methods lack sufficient intelligence and accuracy in mining patent technical features, which prevents them from effectively completing these tasks. To address these limitations, this paper proposes a patent map construction method based on multi-dimensional technical feature mining, which comprises three main stages. First, using dependency parsing, the technical features contained in patents are mined in the form of triplets along three dimensions: function, behaviour, and structure. Second, using WordNet, the original triplets in the three dimensions are standardised for different task scenarios. Finally, from the standard triplets, the patent map is constructed to detect patent vacuums and support design tasks. In addition, a prototype system is developed based on the proposed method, and the effectiveness and practicability of the method and system are verified using a 3D printer as an engineering example.
Title: A technical patent map construction method and system based on multi-dimensional technical feature extraction — Computers in Industry, vol. 164, Article 104167.
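The first stage — extracting triplets from a dependency parse — can be sketched as a simple rule over parser output. The toy parse below uses spaCy-style labels on a 3D-printer sentence and is an assumption for illustration; the paper's extraction rules are richer:

```python
# Toy dependency parse as (token, dependency-label, head-index) tuples;
# a real pipeline would obtain these from a parser such as spaCy or Stanza.
parse = [
    ("nozzle", "nsubj", 1),     # subject of "extrudes"
    ("extrudes", "ROOT", 1),    # main verb
    ("filament", "dobj", 1),    # direct object of "extrudes"
]

def extract_triplets(parse):
    """Extract (subject, verb, object) triplets around each ROOT verb."""
    triplets = []
    for i, (tok, dep, head) in enumerate(parse):
        if dep == "ROOT":
            subj = next((t for t, d, h in parse if d == "nsubj" and h == i), None)
            obj = next((t for t, d, h in parse if d == "dobj" and h == i), None)
            if subj and obj:
                triplets.append((subj, tok, obj))
    return triplets

print(extract_triplets(parse))  # [('nozzle', 'extrudes', 'filament')]
```

Such a triplet would then be standardised (e.g., mapping "extrudes" to a canonical function verb via WordNet synonyms) before being placed on the map.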
Pub Date: 2024-09-11 | DOI: 10.1016/j.compind.2024.104184
Elham Sharifi , Atanu Chaudhuri , Saeed D. Farahani , Lasse G. Staal , Brian Vejrum Waehrens
Novel digital on-demand manufacturing technologies provide a significant opportunity to support the development of virtual warehousing and, in turn, improve supply chain performance. However, implementing a virtual warehouse comes with a set of challenges, especially where the objective is to virtually warehouse standard or legacy parts that were originally developed and verified for conventional (non-digital) manufacturing. In this paper, we explore the key elements required for successfully implementing a virtual warehouse for legacy parts, based on a combination of part digitalization, on-demand manufacturing, and part validation. Our proposed framework for adopting a virtual warehouse comprises four elements: developing a digital inventory that includes supply chain and manufacturability data; identifying and selecting suitable parts for on-demand manufacturing; selecting the on-demand manufacturing technology; and validating the parts as fit for purpose. The framework is exemplified through a case study, and we conclude that building an effective virtual warehouse requires several enablers, including the availability of digital data about the technical and supply chain characteristics of parts, as well as a suitable part identification tool. This part identification tool needs to be flexible enough to include comparison with reference parts already produced by different on-demand manufacturing technologies.
Title: Virtual warehousing through digitalized inventory and on-demand manufacturing: A case study — Computers in Industry, vol. 164, Article 104184 (open access).
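Part identification over a digital inventory can be illustrated as a filter on supply chain and manufacturability data. The fields and thresholds below are invented for illustration; the paper's selection criteria are more extensive:

```python
# A hypothetical digital inventory: supply chain data (demand, lead time)
# alongside manufacturability data (largest part dimension).
parts = [
    {"id": "P-001", "annual_demand": 4, "lead_time_days": 90, "max_dim_mm": 120},
    {"id": "P-002", "annual_demand": 5000, "lead_time_days": 5, "max_dim_mm": 40},
    {"id": "P-003", "annual_demand": 12, "lead_time_days": 60, "max_dim_mm": 400},
]

def shortlist(parts, build_volume_mm=250, max_demand=50, min_lead_days=30):
    """Low-volume, long-lead parts that fit the assumed build volume are
    candidates for virtual warehousing via on-demand manufacturing."""
    return [p["id"] for p in parts
            if p["annual_demand"] <= max_demand
            and p["lead_time_days"] >= min_lead_days
            and p["max_dim_mm"] <= build_volume_mm]

print(shortlist(parts))  # ['P-001']
```

Here P-002 is excluded as a fast-moving part better served by conventional stock, and P-003 because it exceeds the assumed build volume.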
We propose a novel framework for detecting 3D human–object interactions (HOI) on construction sites and a toolkit for generating construction-related human–object interaction graphs. Computer vision methods have been adopted for construction site safety surveillance in recent years. Current methods rely on videos and images, performing safety verification with common-sense knowledge and without considering 3D spatial relationships among the detected instances. We propose a new method that incorporates spatial understanding by inferring interactions directly from 3D point cloud data. The proposed model is trained on a 3D construction site dataset generated from our simulation toolkit. The model achieves 54.11% mean intersection over union (mIoU) and 72.98% mean average precision (mAP) for worker–object interaction relationship recognition. The model is also validated on PiGraphs, a benchmark dataset with 3D human–object interaction types, and compared against other existing 3D interaction detection frameworks, outperforming the state-of-the-art model by increasing interaction detection mAP by 17.01%. Besides the 3D interaction model, we also simulate interactions from industrial surveillance footage using MoCap and physical constraints, which will be released to foster future studies in the domain.
Title: Learning 3D human–object interaction graphs from transferable context knowledge for construction monitoring. Authors: Liuyue Xie, Shreyas Misra, Nischal Suresh, Justin Soza-Soto, Tomotake Furuhata, Kenji Shimada. Pub Date: 2024-09-10 | DOI: 10.1016/j.compind.2024.104171 — Computers in Industry, vol. 164, Article 104171 (open access).
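The mIoU figure reported above is a per-class average; a minimal sketch of how such a score can be computed over interaction classes, using toy prediction and ground-truth sets (not the paper's evaluation code):

```python
def mean_iou(pred, true, classes):
    """Mean intersection-over-union across interaction classes.
    pred/true map each class to the set of detection indices labelled with it."""
    ious = []
    for c in classes:
        p, t = pred.get(c, set()), true.get(c, set())
        union = p | t
        if union:  # skip classes absent from both prediction and ground truth
            ious.append(len(p & t) / len(union))
    return sum(ious) / len(ious)

# Toy worker-object interactions: which detections carry which label.
pred = {"carry": {0, 1, 2}, "operate": {3}}
true = {"carry": {1, 2, 4}, "operate": {3, 5}}
print(round(mean_iou(pred, true, ["carry", "operate"]), 3))  # 0.5
```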
Pub Date: 2024-09-10 | DOI: 10.1016/j.compind.2024.104170
Gyunam Park, Wil M.P. van der Aalst
In business processes, an operational problem refers to a deviation or inefficiency that prevents an organization from reaching its goals, e.g., a delay in approving a purchase order in a Procure-To-Pay (P2P) process. Operational process monitoring aims to assess the occurrence of such operational problems by analyzing event data that record the execution of business processes. Once problems are detected, organizations can act on them with viable actions, e.g., adding more resources or bypassing problematic activities. A plethora of approaches have been proposed to implement operational process monitoring. The lion's share of existing approaches assumes that a single case notion (e.g., a purchase order in a P2P process) exists in a business process and analyzes operational problems defined over that single case notion. However, most real-life business processes manifest the interplay of multiple interrelated objects. For instance, an execution of the omnipresent P2P process involves multiple objects of different types, e.g., purchase orders, goods receipts, and invoices. Applying the existing approaches to these object-centric business processes yields inaccurate or misleading results. In this study, we propose a novel approach to assessing operational problems within object-centric business processes. Our approach not only ensures an accurate assessment of existing problems but also facilitates the analysis of object-centric problems that consider the interaction among different objects. We evaluate this approach by applying it to both simulated and real-life business processes.
Title: Operational process monitoring: An object-centric approach — Computers in Industry, vol. 164, Article 104170 (open access).
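A single-case-notion check such as "delayed PO approval" can be expressed over an event log in a few lines; an object-centric analysis would additionally track goods receipts and invoices attached to the same events. The toy log and threshold below are illustrative assumptions:

```python
from datetime import datetime, timedelta

# Minimal event log: each event carries the objects it touches.
events = [
    {"activity": "create PO", "time": datetime(2024, 1, 1), "objects": {"po": "PO1"}},
    {"activity": "approve PO", "time": datetime(2024, 1, 9), "objects": {"po": "PO1"}},
    {"activity": "create PO", "time": datetime(2024, 1, 2), "objects": {"po": "PO2"}},
    {"activity": "approve PO", "time": datetime(2024, 1, 3), "objects": {"po": "PO2"}},
]

def delayed_approvals(events, threshold=timedelta(days=5)):
    """Flag purchase orders whose approval exceeded the creation-to-approval threshold."""
    created, delayed = {}, []
    for e in sorted(events, key=lambda e: e["time"]):
        po = e["objects"]["po"]
        if e["activity"] == "create PO":
            created[po] = e["time"]
        elif e["activity"] == "approve PO" and e["time"] - created[po] > threshold:
            delayed.append(po)
    return delayed

print(delayed_approvals(events))  # ['PO1']
```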
Pub Date: 2024-09-10 | DOI: 10.1016/j.compind.2024.104172
Xingchi Lu , Xuejian Yao , Quansheng Jiang , Yehu Shen , Fengyu Xu , Qixin Zhu
Performance degradation and remaining useful life (RUL) prediction are of great significance in improving the reliability of mechanical equipment. Existing cross-domain RUL prediction methods usually reduce data distribution discrepancy through domain adaptation, to overcome domain shift under cross-domain conditions. However, the fine-grained information between cross-domain degradation features and the specific characteristics of the target domain are often ignored, which limits prediction performance. Addressing these issues, an RUL prediction method based on dynamic hybrid domain adaptation (DHDA) and attention contrastive learning (A-CL) is proposed for cross-domain rolling bearings. In the DHDA module, conditional distribution alignment is achieved by a pseudo-label-guided domain adversarial network, with a dynamic penalty term that adjusts the conditional distribution when aligning the joint distribution, making the domain adaptation more fine-grained. The A-CL module helps the prediction model actively extract degradation information from the target domain, generating degradation features that match the target domain's characteristics and improving the robustness of RUL prediction. The proposed method is verified through ablation and comparison experiments on the PHM2012 and XJTU-SY datasets. The results show that the proposed method achieves high accuracy for cross-domain RUL prediction, with good generalization across three different cross-domain scenarios.
Title: Remaining useful life prediction model of cross-domain rolling bearing via dynamic hybrid domain adaptation and attention contrastive learning — Computers in Industry, vol. 164, Article 104172.
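The dynamic-penalty idea — reweighting marginal versus conditional alignment as their relative magnitudes change — can be sketched with a crude mean-distance discrepancy. The abstract does not give the exact formulation; the `mu` weighting below is an assumption in the spirit of dynamic distribution adaptation, with pseudo-labels standing in for unknown target labels:

```python
import numpy as np

def discrepancy(src, tgt):
    """Squared distance between feature means (a crude MMD stand-in)."""
    return float(np.sum((src.mean(axis=0) - tgt.mean(axis=0)) ** 2))

def dynamic_joint_loss(src, tgt, src_y, tgt_pseudo):
    """Marginal + conditional discrepancy, with a dynamic weight mu that
    shifts toward whichever term currently dominates."""
    marginal = discrepancy(src, tgt)
    conditional = float(np.mean([
        discrepancy(src[src_y == c], tgt[tgt_pseudo == c])
        for c in np.unique(src_y)]))
    mu = conditional / (marginal + conditional + 1e-8)  # dynamic penalty term
    return (1 - mu) * marginal + mu * conditional

# Toy 1-D degradation features: target shifted by a constant offset.
src = np.array([[0.0], [1.0], [2.0], [3.0]])
tgt = src + 1.0
y = np.array([0, 0, 1, 1])
print(round(dynamic_joint_loss(src, tgt, y, y), 3))  # 1.0
```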
Pub Date : 2024-09-09  DOI: 10.1016/j.compind.2024.104173
Andrea Loddo , Cecilia Di Ruberto , Giuliano Armano , Andrea Manconi
Cheese production, a globally cherished culinary tradition, faces challenges in ensuring consistent product quality and production efficiency. The critical phase of determining cutting time during curd formation significantly influences cheese quality and yield. Traditional methods often struggle to address variability in coagulation conditions, particularly in small-scale factories. In this paper, we present several key practical contributions to the field, including the introduction of CM-IDB, the first publicly available image dataset related to the cheese-making process. We also propose an artificial intelligence-based approach that automates the detection of curd-firming time during cheese production using a combination of computer vision and machine learning techniques. The proposed method offers real-time insight into curd firmness, aiding the prediction of optimal cutting times. Experimental results show the effectiveness of integrating sequence information with single-image features, leading to improved classification performance. In particular, deep learning-based features demonstrate excellent classification capability when integrated with sequence information. The study suggests that the proposed approach is suitable for integration into real-time systems, especially in dairy production, to enhance product quality and production efficiency.
"Detecting coagulation time in cheese making by means of computer vision and machine learning techniques" (Computers in Industry, Volume 164, Article 104173, open access).
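The abstract reports that combining sequence information with single-image features improves classification. One simple way such a fusion can be done is to augment each frame's feature vector with temporal context from the preceding frames; the sketch below is a hypothetical illustration, not the authors' pipeline, and the feature source (e.g. a CNN backbone) is assumed.

```python
import numpy as np

def add_sequence_context(frame_feats: np.ndarray, t: int, window: int = 5) -> np.ndarray:
    """Augment frame t's feature vector with temporal context.

    frame_feats: (n_frames, d) array of per-image features (e.g. CNN embeddings).
    Returns a (3*d,) vector: the current features, the mean over the trailing
    window, and the net change across that window. This is one generic way to
    inject sequence information; the paper's exact fusion scheme may differ.
    """
    lo = max(0, t - window + 1)            # clamp at the start of the sequence
    ctx = frame_feats[lo:t + 1]
    window_mean = ctx.mean(axis=0)
    net_change = frame_feats[t] - frame_feats[lo]
    return np.concatenate([frame_feats[t], window_mean, net_change])
```

The augmented vectors can then be fed to any standard classifier to label each frame as before or after the curd-firming point; the net-change component captures how quickly the curd's appearance is evolving, which a single-frame feature cannot express.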