STEP-based Model Recommendation Method for the Exchange and Reuse of Digital Twins
Chengfeng Jian, Zhuoran Dai, Junyu Chen, Meiyu Zhang
Pub Date: 2025-04-02 DOI: 10.1016/j.jii.2025.100839
To support the design and optimization of human-centric manufacturing systems in the Industry 5.0 era, recommendation of Model Based Definition (MBD) models via the STEP knowledge graph (STEP KG) is crucial for exchanging and reusing digital twin models. Existing methods based on graph convolutional networks (GCNs) focus on geometric semantics but overlook the correlated engineering semantics in the STEP KG. This paper introduces a Quaternion Diffusion Graph Convolutional Network (QDGCN) recommendation framework comprising quaternion semantic diffusion and quaternion parameter diffusion. The quaternion semantic diffusion method uses quaternions to combine multiple layers of semantic diffusion into a single set-transformation operation and constructs a quaternion-based multi-layer semantic model on the STEP KG. The quaternion parameter diffusion method uses a quaternion parameter generation mechanism based on the diffusion model; it generates different weight coefficients to identify the main node features in the STEP KG. The fusion of the two resolves the inconsistency between geometric and engineering semantics. We compared QDGCN with state-of-the-art methods on real datasets, and detailed analysis of the experimental results demonstrates the effectiveness of QDGCN.
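The quaternion fusion idea can be illustrated with a small sketch. The Hamilton product below is standard quaternion algebra; the `fuse_layers` helper, which folds per-layer feature quaternions into one via repeated Hamilton products, is a hypothetical illustration of collapsing multiple diffusion layers into a single operation, not the paper's actual operator.

```python
def hamilton(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (
        pw * qw - px * qx - py * qy - pz * qz,
        pw * qx + px * qw + py * qz - pz * qy,
        pw * qy - px * qz + py * qw + pz * qx,
        pw * qz + px * qy - py * qx + pz * qw,
    )

def fuse_layers(layer_features):
    """Fold a list of 4-component layer features into one quaternion
    (illustrative fusion only)."""
    out = (1.0, 0.0, 0.0, 0.0)  # multiplicative identity quaternion
    for f in layer_features:
        out = hamilton(out, f)
    return out
```

Because the Hamilton product is non-commutative, the order of the layers matters, which is one reason quaternion representations can encode inter-layer structure that an elementwise sum cannot.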
Advances and innovations in road surface inspection with light detection and ranging technology
Huayang Yu, Yisong Ouyang, Chuanyi Ma, Lizhuang Cui, Feng Guo
Pub Date: 2025-03-26 DOI: 10.1016/j.jii.2025.100842
Light Detection and Ranging (LiDAR), an advanced non-contact sensing method capable of capturing 3D spatial data with up to millimeter-level precision depending on the ranging method, has been widely used in pavement defect detection and road asset management. This paper reviews LiDAR-based pavement inspection techniques in terms of measurement principles, acquisition methods, and algorithmic processing of point cloud data. The characteristics of the major LiDAR systems, including mobile laser scanning (MLS), terrestrial laser scanning (TLS), and airborne laser scanning (ALS), and their applicability to pavement inspection are analyzed. MLS emerges as the predominant method owing to its superior mobility and measurement precision in retrieving pavement data. Traditional and deep learning-based 3D point cloud processing algorithms are then compared, challenges in achieving high accuracy and efficiency with large datasets are discussed, and future research directions are outlined. The paper also highlights practical outcomes achieved with economical LiDAR solutions, whose data densities are one to two orders of magnitude lower than those of powerful and expensive systems. Furthermore, the potential for integration with other technologies to enhance detection efficiency and precision is discussed.
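As a loose illustration of the kind of algorithmic point-cloud processing the survey covers, the sketch below estimates rut depth on a transverse pavement profile as the maximum downward deviation from a least-squares reference line. The profile representation and the deviation criterion are assumptions for illustration, not a method taken from the survey.

```python
def fit_line(xs, zs):
    """Ordinary least-squares line z = a*x + b through profile points."""
    n = len(xs)
    mx = sum(xs) / n
    mz = sum(zs) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxz = sum((x - mx) * (z - mz) for x, z in zip(xs, zs))
    a = sxz / sxx
    return a, mz - a * mx

def rut_depth(xs, zs):
    """Maximum downward deviation of the profile from the fitted reference line."""
    a, b = fit_line(xs, zs)
    return max((a * x + b) - z for x, z in zip(xs, zs))
```

Real MLS pipelines work on dense 3D clouds and must first segment the road surface; this 2D cross-section version only shows the deviation-from-reference idea.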
Energy-saving distributed flexible job-shop scheduling with fuzzy processing time in IIoT: A novel evolutionary multitasking algorithm
Lu Li, Zhengyi Chai
Pub Date: 2025-03-26 DOI: 10.1016/j.jii.2025.100829
With the rapid development of the Industrial Internet of Things (IIoT), the complexity of production environments has increased significantly. The flexible job-shop scheduling problem (FJSP) is often framed as a multi-objective optimization problem. However, as scale and computational demands continue to grow, traditional multi-objective algorithms struggle to identify optimal scheduling policies. To address this challenge, we designed a novel evolutionary multitasking (EMT) framework to handle the complexity of the FJSP in IIoT scenarios, capturing uncertainty in time constraints through fuzzy processing times. Existing studies do not consider the FJSP in IIoT scenarios; our study fills this research gap. We study the energy-saving distributed FJSP with fuzzy processing time in IIoT (EFDFJSP), aiming to simultaneously optimize makespan and energy consumption. The problem is modeled as a multi-task multi-objective EFDFJSP (MMEFDFJSP) for the first time, and we propose a novel reinforcement learning (RL)-based multi-task multi-objective algorithm with a feedback mechanism (EMTRL-FD). Additionally, we propose a local search (LS) operator selection strategy to allocate computing resources efficiently. Experimental results demonstrate that EMTRL-FD outperforms existing state-of-the-art algorithms in solving the EFDFJSP.
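Fuzzy processing times are commonly modeled as triangular fuzzy numbers (TFNs). The sketch below shows one widely used convention for the TFN arithmetic a fuzzy makespan computation needs: componentwise addition, an approximate componentwise maximum, and a centroid-style ranking value. The paper's exact operators may differ from these textbook forms.

```python
def tfn_add(a, b):
    """Sum of two triangular fuzzy numbers (a1, a2, a3): componentwise."""
    return tuple(x + y for x, y in zip(a, b))

def tfn_max(a, b):
    """Componentwise maximum, a common approximation of fuzzy max
    used when chaining operation completion times."""
    return tuple(max(x, y) for x, y in zip(a, b))

def tfn_rank(a):
    """Centroid-style ranking value (a1 + 2*a2 + a3) / 4; schedules are
    compared by this crisp score."""
    return (a[0] + 2 * a[1] + a[2]) / 4
```

With these operators, a fuzzy completion time is built exactly like a crisp one: start time is the fuzzy max of machine-ready and job-ready times, then the fuzzy processing time is added.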
A comparative fuzzy strategic assessment framework for space mission selection at NASA
Madjid Tavana, Andreas Dellnitz, Morteza Yazdani
Pub Date: 2025-03-25 DOI: 10.1016/j.jii.2025.100840
Strategic decision-making in space mission selection is inherently complex, requiring a balance of multiple, often conflicting, quantitative and qualitative factors under uncertainty. This paper introduces a novel fuzzy analytical framework that extends the Strategic Assessment Model (SAM) by incorporating trapezoidal fuzzy numbers to evaluate mission alternatives systematically. By addressing the uncertainty inherent in space exploration planning, this model provides a structured approach to assessing internal, transactional, and contextual factors, marking the first application of such techniques in NASA's mission selection process. The framework's application to Mars, lunar, and solar system exploration missions demonstrates its ability to provide robust, data-driven insights. The findings reveal that solar system exploration consistently emerges as the most resilient option, achieving a superior Mission Selection Score across diverse scenarios. Comprehensive sensitivity analysis further underscores the framework's reliability, showing that solar system exploration remains the optimal choice in approximately 90% of cases despite variations in uncertainty levels. This research advances strategic space mission assessment by offering a rigorous, adaptable decision-support tool. It enhances NASA's capability to navigate the complexities of mission planning, ensuring optimal allocation of resources in an era of increasing privatization and international collaboration in space exploration. The proposed fuzzy SAM approach establishes a new benchmark for multi-criteria decision-making under uncertainty, paving the way for future applications in space policy, mission prioritization, and beyond.
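A trapezoidal fuzzy number (a, b, c, d) has a piecewise-linear membership function that rises from a to b, is flat at 1 between b and c, and falls to zero at d; a crisp score can be recovered by centroid defuzzification. The sketch below is generic fuzzy arithmetic, not the SAM-specific scoring from the paper.

```python
def trap_membership(x, a, b, c, d):
    """Membership degree of x in the trapezoidal fuzzy number (a, b, c, d)."""
    if x < a or x > d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def centroid(a, b, c, d, steps=10000):
    """Centroid defuzzification by numeric integration over [a, d]."""
    h = (d - a) / steps
    num = den = 0.0
    for i in range(steps + 1):
        x = a + i * h
        mu = trap_membership(x, a, b, c, d)
        num += x * mu
        den += mu
    return num / den
```

For a symmetric trapezoid the centroid is just the midpoint of the support; for skewed ones it shifts toward the wider slope, which is how the fuzzy spread influences the final ranking.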
Advanced Electronic Controller Circuits Enabling Production Processes and AI-driven KM in Industry 5.0
Alessandro Massaro, Francesco Santarsiero, Giovanni Schiuma
Pub Date: 2025-03-25 DOI: 10.1016/j.jii.2025.100841
This paper presents a methodology for mapping electronic manufacturing control processes within a Knowledge Management (KM) framework, aligned with human-centric and transdisciplinary approaches. Specifically, the paper explores a Proportional-Integral-Derivative (PID) process for tuning production machinery, facilitating quality management and predictive maintenance through an AI-driven model. The PID circuit model is designed using the LTspice tool, while the entire production workflow is structured according to the Business Process Model and Notation (BPMN) standard. The model incorporates Artificial Intelligence (AI) to optimize machine control, establishing an advanced Digital Twin (DT) model that enables interactive human-system collaboration. The work further describes Knowledge Base (KB) data sources that support KM within Industry 5.0 environments, emphasizing AI-enhanced, user-centered control systems. Finally, the paper discusses new managerial roles and skill sets necessary for overseeing these integrated, human-centric KM systems in next-generation industrial applications.
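A discrete PID loop of the kind tuned in such a process can be sketched in a few lines. This positional-form controller is a generic textbook implementation, not the paper's LTspice circuit model.

```python
class PID:
    """Minimal discrete positional-form PID controller."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def update(self, setpoint, measurement):
        """Return the control output for one sampling period of length dt."""
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

Driving a first-order lag plant (x' = u - x) with, say, kp = 2 and ki = 1 settles the output at the setpoint within a few time constants; the integral term is what removes the steady-state error.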
Pythagorean fuzzy rough decision-based approach for developing supply chain resilience framework in the face of unforeseen disruptions
Mohamed Safaa Shubber, Mohannad T. Mohammed, Sarah Qahtan, Hassan Abdulsattar Ibrahim, Nahia Mourad, A.A. Zaidan, B.B. Zaidan, Muhammet Deveci, Dragan Pamucar, Peng Wu
Pub Date: 2025-03-23 DOI: 10.1016/j.jii.2025.100837
Ensuring supply chain resilience (SCRES) in the face of unforeseen disruptions, such as natural disasters, geopolitical conflicts, or economic downturns, is a critical goal for decision-makers. While numerous SCRES frameworks have been proposed in the existing literature, studies ranking these frameworks are lacking, and none of the frameworks fully satisfies all evaluation attributes. To address this research gap, three key assessment concerns need attention: the presence of multiple evaluation attributes, the varying importance levels of these attributes, and the variation in data. Multi-attribute decision analysis (MADA) methods provide effective solutions here, offering sensible and logical approaches to decision-making that help eliminate ambiguity and uncertainty in the information provided by a particular solution. The primary objective of this study is to propose a decision-making approach that integrates the Pythagorean Fuzzy Rough Set (PFRS) framework with the Fuzzy Weighted Zero Inconsistency Criterion (FWZIC) and the Fuzzy Decision by Opinion Score Method (FDOSM), enhancing their effectiveness in complex and uncertain decision-making scenarios. This study's contributions are: (1) forming an opinion decision matrix for 13 SCRES frameworks with two sets of evaluation attributes comprising 11 sub-attributes, 7 under SCRES Antecedents and 4 under SCRES Phases; (2) reformulating FWZIC using PFRS (the PFRS–FWZIC method) to prioritize evaluation attributes and address uncertainty in the weighting process; (3) reformulating FDOSM using PFRS (the PFRS–FDOSM method) to address multiple barrier criteria and data-variance concerns in uncertainty evaluation; and (4) proposing a decision-based approach that integrates PFRS–FWZIC and PFRS–FDOSM, based on the formed opinion decision matrix, to evaluate and rank SCRES frameworks. The effectiveness of the proposed approach is validated through sensitivity analysis and evaluated through comparison analysis, both subjectively and objectively.
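A Pythagorean fuzzy pair (mu, nu) must satisfy mu^2 + nu^2 <= 1, which admits more hesitancy than an intuitionistic pair (mu + nu <= 1). The sketch below uses the common score function mu^2 - nu^2 to rank alternatives; the PFRS–FWZIC and PFRS–FDOSM methods in the paper involve considerably more machinery than this.

```python
def pf_valid(mu, nu):
    """A Pythagorean fuzzy pair requires mu^2 + nu^2 <= 1."""
    return 0.0 <= mu <= 1.0 and 0.0 <= nu <= 1.0 and mu ** 2 + nu ** 2 <= 1.0

def pf_score(mu, nu):
    """Common score function mu^2 - nu^2 (higher means more preferred)."""
    return mu ** 2 - nu ** 2

def rank_alternatives(pairs):
    """Return alternative indices sorted by descending score."""
    return sorted(range(len(pairs)), key=lambda i: pf_score(*pairs[i]), reverse=True)
```

Note that (0.8, 0.5) is a valid Pythagorean pair even though 0.8 + 0.5 > 1, which is exactly the extra expressiveness over intuitionistic fuzzy sets.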
Enhancing semantic search using ontologies: A hybrid information retrieval approach for industrial text
Syed Meesam Raza Naqvi, Mohammad Ghufran, Christophe Varnier, Jean-Marc Nicod, Noureddine Zerhouni
Pub Date: 2025-03-22 DOI: 10.1016/j.jii.2025.100835
Despite the increased focus on data in Industry 4.0, textual data has received little attention in the production and engineering management literature. Data sources such as maintenance records and machine documentation are usually not used to support maintenance decision-making, and available studies mainly focus on categorizing maintenance records or extracting metadata such as time of failure and maintenance cost. One of the main reasons behind this underutilization is the complexity and unstructured nature of industrial text. In this study, we propose a novel hybrid information retrieval approach for industrial text using multi-modal learning. Maintenance operators can use the proposed system to query maintenance records and find similar solutions to a given problem. The system utilizes heterogeneous (multi-modal) data, combining maintenance records with a machine ontology, to enhance semantic search results. We used the state-of-the-art Large Language Model (LLM) BERT (Bidirectional Encoder Representations from Transformers) for textual similarity, and a modified version of Wu-Palmer similarity for similarity among ontology labels. A hybrid weighted similarity incorporating both text and ontology similarities is proposed to enhance semantic search results. The approach was validated on an open-source dataset of real maintenance records from excavators, collected over ten years at different mining sites, and a retrieval comparison using text-only versus multi-modal data was performed to estimate the system's effectiveness. Quantitative and qualitative analyses indicate an 8% performance improvement with the proposed hybrid similarity approach compared to text-only retrieval. To the best of our knowledge, this is the first study to combine LLMs and machine ontology for semantic search in maintenance records.
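The hybrid retrieval score can be sketched as a weighted blend of an embedding similarity and a taxonomy-based Wu-Palmer similarity. In the sketch below, the toy parent-pointer taxonomy, the weight alpha, and the plain cosine over hand-made vectors all stand in for the paper's BERT embeddings, real machine ontology, and modified Wu-Palmer measure; the taxonomy is assumed to be a single tree with a shared root.

```python
import math

def cosine(u, v):
    """Cosine similarity of two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def depth(node, parent):
    """Depth of a node counted from the root (root has depth 1)."""
    d = 1
    while node in parent:
        node = parent[node]
        d += 1
    return d

def lcs(a, b, parent):
    """Lowest common subsumer of a and b in a parent-pointer tree."""
    ancestors = {a}
    n = a
    while n in parent:
        n = parent[n]
        ancestors.add(n)
    n = b
    while n not in ancestors:
        n = parent[n]
    return n

def wu_palmer(a, b, parent):
    """Wu-Palmer similarity: 2*depth(lcs) / (depth(a) + depth(b))."""
    return 2 * depth(lcs(a, b, parent), parent) / (depth(a, parent) + depth(b, parent))

def hybrid_similarity(vec_u, vec_v, node_u, node_v, parent, alpha=0.7):
    """Weighted blend of text and ontology similarity (alpha is an assumed weight)."""
    return alpha * cosine(vec_u, vec_v) + (1 - alpha) * wu_palmer(node_u, node_v, parent)
```

Two records about a "hydraulic_pump" and a "valve" thus get partial credit through their shared "component" ancestor even when their text embeddings diverge.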
Pub Date : 2025-03-21DOI: 10.1016/j.jii.2025.100832
Fuyu Ma , Dong Li , Yu Liu , Dapeng Lan , Zhibo Pang
In the domain of industrial control, supervisory control and data acquisition (SCADA) systems are essential for real-time monitoring and efficient data acquisition. However, as industrial systems grow in scale and complexity, conventional tag configuration methods face challenges in balancing precision and operational efficiency. Addressing these challenges requires innovative solutions. The rapid evolution of generative artificial intelligence, particularly large language models (LLMs), offers a transformative approach. This study introduces a structured prompt optimization strategy, termed structured tag engineering prompt (STEP), to increase the ability of LLMs to generate high-quality tag files. To validate the STEP method, we assessed five mainstream LLMs on basic tag generation tasks via the CodeBERTScore and pass@k metrics. The results revealed that performance of all models has been improved, thus validating the effectiveness of the proposed optimization method. On the basis of these findings, a tag generation framework grounded in the STEP method was developed and validated through case studies and practical industrial scenarios. These validations confirmed the STEP method’s applicability, demonstrating its value and potential to advance prompt engineering for SCADA systems. In summary, this study contributes to the automation and intelligence of industrial control systems while providing unique insights through the application of LLMs combined with prompt engineering in addressing complex industrial tasks.
"STEP: A structured prompt optimization method for SCADA system tag generation using LLMs". Fuyu Ma, Dong Li, Yu Liu, Dapeng Lan, Zhibo Pang. Journal of Industrial Information Integration, vol. 45, Article 100832, published 2025-03-21. DOI: 10.1016/j.jii.2025.100832
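The pass@k metric used to evaluate the LLMs above is commonly computed with the unbiased combinatorial estimator: given n generated tag files per task, of which c pass, it estimates the probability that at least one of k randomly drawn samples passes. A minimal stdlib sketch (the function name and calling convention are illustrative, not from the paper):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    n: total samples generated per task
    c: number of samples that passed the check
    k: budget of samples drawn
    Returns the probability that at least one of k draws passes,
    computed as 1 - C(n-c, k) / C(n, k).
    """
    if n - c < k:
        # Fewer failures than draws: at least one success is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 4 generations of which 2 pass, pass@2 is 1 - C(2,2)/C(4,2) = 5/6; averaging this estimator over all tasks gives the reported score.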
Manufacturing is one of the industrial sectors benefiting from the 4th industrial revolution, which is bringing existing production capacities closer to the "factory of the future". Quality, a central concern in manufacturing, also stands to benefit from this change of paradigm through the introduction of key enabling technologies such as the Internet of Things (IoT) and Artificial Intelligence (AI) into quality management, earning it the label of Quality 4.0 (Q4.0). Implementing these paradigms still demands research effort, as it is arduous to design and realize effective end-to-end Decision Support Systems (DSSs) for Q4.0, with several dimensions to consider when integrating digitalization with quality. The task is even more challenging for SMEs, given the particularities of these entities. This paper presents an approach to designing a Total Manufacturing Quality 4.0 (TMQ 4.0) DSS by combining Sensor Network (SN) data and historical data in an end-to-end framework. Furthermore, the paper validates the approach through a case study in a metal-cutting high-precision manufacturing SME. The case study shows promising Q4.0 estimations with standard Machine Learning (ML) algorithms (kNN, Random Forest, Logistic Regression, XGBoost, feed-forward Deep Neural Network) when the steps of tending to data quality, data augmentation, and end-to-end design and implementation are applied. By providing building blocks for an end-to-end Q4.0 DSS design and implementation in an integrated quality control application, this approach aims to support end-users in the in-process quality control of their manufacturing operations.
"Bridging the gap between Industry 4.0 and manufacturing SMEs: A framework for an end-to-end Total Manufacturing Quality 4.0's implementation and adoption". Badreddine Tanane, Mohand-Lounes Bentaha, Baudouin Dafflon, Néjib Moalla. Journal of Industrial Information Integration, vol. 45, Article 100833, published 2025-03-21 (open access). DOI: 10.1016/j.jii.2025.100833
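Among the classifiers evaluated in the case study, kNN is the simplest: a sensor-feature vector is labeled by majority vote among its k nearest training samples. A minimal stdlib sketch of that voting step (the feature layout, labels, and sample data below are invented for illustration and do not come from the paper's dataset):

```python
from collections import Counter
from math import dist

def knn_predict(train, labels, x, k=3):
    """Label feature vector x by majority vote among the k training
    samples nearest to it under Euclidean distance."""
    nearest = sorted(range(len(train)), key=lambda i: dist(train[i], x))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# Hypothetical two-feature quality data: two well-separated clusters.
TRAIN = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
LABELS = ["in_spec", "in_spec", "in_spec",
          "out_of_spec", "out_of_spec", "out_of_spec"]
```

In an end-to-end DSS this prediction step would sit downstream of the data-quality and augmentation steps the paper emphasizes; the abstract credits those steps, not the classifier choice, for the promising estimations.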
Pub Date: 2025-03-21, DOI: 10.1016/j.jii.2025.100834
Ahmed Bensaoud, Jugal Kalita
The exponential growth of software complexity has led to a corresponding increase in software vulnerabilities, necessitating robust methods for automatic vulnerability detection and repair. This paper proposes DCodeBERT, a large language model (LLM) fine-tuned for vulnerability detection and repair in software code. Leveraging the pre-trained CodeBERT model, DCodeBERT is designed to understand both natural language and programming language context, enabling it to effectively identify vulnerabilities and suggest repairs. We conduct experiments to evaluate DCodeBERT’s performance, comparing it against several baseline models. The results demonstrate that DCodeBERT outperforms the baselines in both vulnerability detection and repair tasks across multiple programming languages, showcasing its effectiveness in enhancing software security.
"Advancing software security: DCodeBERT for automatic vulnerability detection and repair". Ahmed Bensaoud, Jugal Kalita. Journal of Industrial Information Integration, vol. 45, Article 100834, published 2025-03-21. DOI: 10.1016/j.jii.2025.100834
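The abstract does not state which metrics DCodeBERT is scored on; for binary vulnerability detection, precision, recall, and F1 are the usual choices. A minimal stdlib sketch of that evaluation step (function name and boolean label encoding are assumptions for illustration):

```python
def detection_metrics(y_true, y_pred):
    """Precision, recall, and F1 for a binary vulnerability detector.

    y_true, y_pred: equal-length sequences of booleans,
    True meaning "vulnerable".
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Repair quality, by contrast, is typically judged by whether the suggested patch compiles and passes the original failing check, which is why detection and repair are reported as separate tasks.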