Self-adaptive solution for industrial integration of AI-based decision-making systems for industrial flows management
Pub Date: 2025-10-14 | DOI: 10.1016/j.jii.2025.100971
Ivo Perez Colo, Carolina Saavedra Sueldo, Luis Avila, Geraldina Roark, Gerardo G. Acosta, Mariano De Paula
From the perspective of systems theory, a production process can be conceptualized as an organized set of operations that primarily involves managing the flow of materials, goods, energy, and information. Optimal management of industrial flows is a complex decision-making problem that has been addressed for decades, from modeling and optimization theory to today’s artificial intelligence (AI) techniques. However, although many modern AI-based proposals have been successfully tested on diverse flow optimization problems, their performance and transferability to industrial plants depend strongly on high-dimensional hyper-parameter settings. Hyper-parameter tuning is still typically performed by human experts, who spend considerable time on trial-and-error heuristic searches for optimal configurations. Besides being inefficient, this hinders democratization, integration, and scalability towards industrial systems, which commonly have limited qualified expert human resources. With this in mind, we propose a simulation-based Bayesian optimization approach for autonomous optimal hyper-parameter adjustment of black-box AI-based decision-making techniques. Our proposal was tested on two flow optimization problems of very different nature and behavior, each addressed with different modern AI-based decision-making techniques.
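The abstract does not detail the optimizer, but the core loop of simulation-based Bayesian optimization is standard: fit a Gaussian-process surrogate to (hyper-parameter, simulated-cost) pairs and pick the next configuration by expected improvement. The sketch below is a minimal illustration in that spirit, not the authors' implementation; `simulate_plant`, the bounds, and the random candidate-sampling scheme are all assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def simulate_plant(hp):
    """Placeholder: run the flow simulator with hyper-parameters `hp`
    and return a cost to minimize (e.g. makespan, negative throughput)."""
    return (hp[0] - 0.3) ** 2 + (hp[1] - 0.7) ** 2  # toy stand-in

bounds = np.array([[0.0, 1.0], [0.0, 1.0]])   # one row per hyper-parameter
rng = np.random.default_rng(0)

# Initial design: a few random hyper-parameter configurations.
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, len(bounds)))
y = np.array([simulate_plant(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(25):                     # BO iterations = simulation budget
    gp.fit(X, y)
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(1000, len(bounds)))
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, simulate_plant(x_next))

print("best hyper-parameters:", X[np.argmin(y)], "cost:", y.min())
```

Each iteration spends exactly one simulator run, which is the resource such an approach conserves relative to expert trial-and-error search.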
Distillation anomaly and fault detection based on clustering algorithms
Pub Date: 2025-10-14 | DOI: 10.1016/j.jii.2025.100970
F.M. Martínez-García, A. Molina García, F.C. Gómez de León, M. Alarcón
Anomaly detection in production processes is essential for ensuring reliability and efficiency in the industrial sector. Accordingly, system optimization requires advanced monitoring strategies such as predictive maintenance and intelligent fault detection. Traditional diagnostic methods rely on retrospective data analysis and deterministic cause-effect models, whereas machine learning approaches enable real-time monitoring and data-driven modeling to detect deviations from normal operation. This study proposes a scalable anomaly detection framework based on clustering algorithms, applied specifically to batch distillation processes: critical operations in chemical manufacturing that remain underexplored in real-world applications, particularly in multiproduct plants. The methodology was validated through an industrial case study at a chemical facility in El Palmar, Murcia (Spain), operated by a multinational corporation. Over 300,000 data points were collected over three years, focusing on critical variables governing distillation unit performance. Clustering techniques including k-means, DBSCAN, and hierarchical clustering were applied to identify deviations from standard operating conditions. The results demonstrate the effectiveness, flexibility, and scalability of the proposed approach, which detects in real time anomalies caused by equipment faults, unstable conditions, or operator error. Integrating this system reduces unplanned shutdowns; improves energy efficiency, safety, and product quality; and provides operators with a real-time dashboard for decision support. Statistical evaluation of the algorithms ensures adaptability across product types, while the custom application enables graphical monitoring of process deviations. Future work includes integrating performance indicators and ERP/MES connectivity. This framework serves as a reference model for deploying scalable anomaly detection systems across diverse industrial environments.
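As a rough illustration of the clustering-based detection the abstract describes, the sketch below flags anomalies in two of the three ways mentioned (k-means distance outliers and DBSCAN noise points). The variable names, thresholds, and fusion rule are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN
from sklearn.preprocessing import StandardScaler

# X: rows are time samples, columns are process variables
# (e.g. still temperature, column pressure, reflux flow) -- names assumed.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))                  # stand-in for historian data

Xs = StandardScaler().fit_transform(X)          # scale before clustering

# k-means view: a sample is anomalous if unusually far from its centroid.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(Xs)
dist = np.linalg.norm(Xs - km.cluster_centers_[km.labels_], axis=1)
km_anom = dist > np.quantile(dist, 0.99)        # top 1% of distances flagged

# DBSCAN view: noise points (label -1) never joined any dense cluster.
db = DBSCAN(eps=0.6, min_samples=10).fit(Xs)
db_anom = db.labels_ == -1

# Simple fusion: raise an alarm only when both views agree.
alarms = km_anom & db_anom
print(f"{alarms.sum()} samples flagged out of {len(X)}")
```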
Enabling human–CPS cognitive interoperability: Cognitive architectures as technologies for human-like cognitive digital twins
Pub Date: 2025-10-13 | DOI: 10.1016/j.jii.2025.100969
Al Haj Ali Jana, Ben Gaffinet, Mario Lezoche, Hervé Panetto, Yannick Naudet
Cognition, the set of mental processes that enable humans to perceive, reason, learn, and decide, plays an essential role in effective collaboration between humans and Cyber–Physical Systems (CPSs). To achieve seamless cognitive interoperability between humans and CPSs, it is necessary to integrate a Cognitive Digital Twin (CDT) and a Human Digital Twin (HDT) that provide digital representations of physical assets and human cognitive states, respectively. In this article, we first analyse the three essential functions of CDTs and HDTs, namely emulation, cognition, and simulation, and review the state-of-the-art technologies for each, from supervised learning and knowledge graphs to deep reinforcement learning. Focusing on the cognitive layer, we review the state of the art in cognitive architectures, describing their symbolic, sub-symbolic, and hybrid types and reporting on their real-world implementations in different domains. We then assess the relevance of these architectures for integrating human-like reasoning into CDTs. Finally, we identify the main technological challenges and gaps that must be addressed to implement fully operational CDTs.
Cognitive Digital Twin for industrial maintenance: operational framework for fault detection and diagnosis
Sofia Zappa, Chiara Franciosi, Adalberto Polenghi, Alexandre Voisin
Pub Date: 2025-10-12 | DOI: 10.1016/j.jii.2025.100974
Digital Twin is a cutting-edge technology designed to address disruptions in manufacturing operations by supporting humans in complex maintenance decisions through advanced data analytics and real-time synchronization. However, as the complexity of decisions increases, enhanced capabilities such as reasoning and context awareness are required, leading to the Cognitive Digital Twin (CDT) concept. In this context, this work offers two contributions. First, it presents a state-of-the-art review of CDT for maintenance in manufacturing, identifying Fault Detection and Diagnosis (FDD) as a relevant investigation area. Second, it proposes a novel CDT framework specifically tailored to support FDD in industrial maintenance. The framework comprises (i) an ontology that formalises maintenance expert knowledge and supports diagnostic reasoning, and (ii) data-driven algorithms that process data from the physical system and instantiate or update the proposed ontology. The structured integration of ontology and data analytics into an operational CDT framework enables all six cognitive capabilities (perception, attention, memory, reasoning, problem-solving, and learning) and properly places them within a domain-specific framework tailored to maintenance, especially in support of FDD decisions. The CDT output is augmented information flowing into the maintenance decision-making process, carried out by the maintenance staff, who, after completing the FDD activity, can act back on the physical asset with the required maintenance interventions. The CDT framework is finally tested in a laboratory setting, demonstrating its functional effectiveness in supporting maintainers in the FDD decision-making process by formalizing knowledge and guiding reasoning.
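The abstract pairs an ontology with data-driven algorithms that instantiate it. A minimal sketch of that pattern, with plain Python classes standing in for ontology concepts, is shown below; the concept names, symptoms, thresholds, and plausibility score are all hypothetical, not the authors' ontology.

```python
from dataclasses import dataclass, field

# Minimal stand-ins for ontology concepts (illustrative names only).
@dataclass
class Symptom:
    name: str
    observed: bool = False

@dataclass
class FailureMode:
    name: str
    symptoms: list = field(default_factory=list)   # evidence it requires

    def plausibility(self):
        """Fraction of this mode's symptoms currently observed."""
        hits = sum(s.observed for s in self.symptoms)
        return hits / len(self.symptoms) if self.symptoms else 0.0

# Knowledge layer: failure modes linked to the symptoms that indicate them.
vib = Symptom("abnormal vibration")
temp = Symptom("bearing over-temperature")
noise = Symptom("acoustic emission spike")
modes = [FailureMode("bearing wear", [vib, temp]),
         FailureMode("misalignment", [vib, noise])]

# Data-driven layer: a monitoring routine instantiates/updates symptoms.
def update_from_sensors(reading):
    vib.observed = reading["vib_rms"] > 4.5       # thresholds assumed
    temp.observed = reading["bearing_T"] > 85.0
    noise.observed = reading["ae_peak"] > 60.0

update_from_sensors({"vib_rms": 5.1, "bearing_T": 92.0, "ae_peak": 40.0})
for m in sorted(modes, key=lambda m: -m.plausibility()):
    print(f"{m.name}: plausibility {m.plausibility():.2f}")
```

The ranked output corresponds to the "augmented information" handed to maintainers, who keep the final decision.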
Manufacturing process scheduling method based on multi-level and cross-chain collaboration under industrial internet environment
Pub Date: 2025-10-11 | DOI: 10.1016/j.jii.2025.100972
Wenjun Xu, Jinshan Zhong, Jiayi Liu, Shuang Zheng, Xianglong Zou, Feng Liu
Under the industrial internet environment, effective manufacturing process scheduling requires collaboration across the workshop level, the production line level, the industrial chain, and the value chain. This collaboration enables the integration of manufacturing process information, aligning scheduling more closely with practical operations. However, existing scheduling approaches mainly focus on a single level or a single chain and cannot address multi-level and cross-chain collaborative optimization. To overcome this limitation, this paper proposes a scheduling method for manufacturing processes based on multi-level and cross-chain collaboration (MPMLCC) under the industrial internet environment. First, a mathematical model is established to represent the four key stages of the manufacturing process: parts procurement, transportation, sub-assembly, and final assembly. The optimization model aims to minimize both the makespan and the total cost, reflecting time and cost efficiency across all stages. Then, an improved multi-objective grey wolf optimizer (IMOGWO) is designed to solve the MPMLCC scheduling problem. The algorithm integrates opposition-based learning (OBL) and a multi-neighborhood local search strategy to balance global exploration and local exploitation. Case studies based on the small satellites Oresat0 and Oresat1B verify the effectiveness of the proposed method. Experimental results demonstrate that the proposed approach significantly improves solution quality and stability compared with other multi-objective optimization algorithms. Furthermore, the scheduling outcomes confirm the effectiveness of manufacturing process information integration and collaborative optimization across multiple levels and chains.
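The abstract names opposition-based learning as one IMOGWO ingredient. OBL evaluates, for each random candidate x in [lb, ub], its opposite lb + ub - x, and keeps the better of the two; in a multi-objective setting "better" becomes Pareto non-dominance. The sketch below shows only this seeding step; the bounds, pack size, and toy objectives are assumptions, not the paper's MPMLCC model.

```python
import numpy as np

rng = np.random.default_rng(2)
lb, ub = np.zeros(6), np.ones(6)          # decision-variable bounds (assumed)

def objectives(x):
    """Toy bi-objective stand-in for (makespan, total cost)."""
    return np.array([np.sum(x ** 2), np.sum((x - 1) ** 2)])

def dominates(f, g):
    """f Pareto-dominates g: no worse everywhere, strictly better somewhere."""
    return np.all(f <= g) and np.any(f < g)

# Opposition-based learning: for each random wolf x, also evaluate its
# opposite lb + ub - x, then keep non-dominated members of the union.
pack = rng.uniform(lb, ub, size=(20, 6))
union = np.vstack([pack, lb + ub - pack])
scores = np.array([objectives(x) for x in union])
keep = [i for i, f in enumerate(scores)
        if not any(dominates(scores[j], f) for j in range(len(union)) if j != i)]
pack = union[keep][:20]                    # non-dominated seeds for IMOGWO
print(f"{len(keep)} non-dominated seeds from {len(union)} candidates")
```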
Cross-domain zero-shot fault diagnosis method for high-voltage circuit breakers driven by multidomain spatial projection and dual embedded structure
Pub Date: 2025-10-11 | DOI: 10.1016/j.jii.2025.100976
Yuxiang Liao, Qiuyu Yang, Jiangjun Ruan, Jingyi Xie, Xue Xue, Yuyi Lin
High-voltage circuit breakers (HVCBs) pose challenges for cross-domain fault diagnosis under zero-shot scenarios due to their complex mechanisms, diverse failure modes, and scarce fault samples. To address this problem, this paper proposes a cross-domain zero-shot diagnosis method for HVCBs driven by multidomain spatial projection (MSP) and a dual embedded structure (DES), named MSP-DES. The proposed method effectively identifies unseen fault categories using only existing fault data and auxiliary knowledge. First, the MSP strategy extracts optimal features from projection subspaces, which are class-specific spaces derived from distinct fault categories; it incorporates a pseudo-labeling mechanism (an unsupervised learning approach) to mine both intra-class and inter-class information within the target domain. Second, fine-grained fault semantic descriptions are constructed from HVCB fault signal characteristics and mechanical structural variations. Third, the DES establishes bidirectional mappings between fault semantics and features in a high-dimensional embedding space. Finally, a loss function balancing intra-class compactness and inter-class separation optimizes the DES. Experimental results demonstrate that MSP-DES achieves both single- and cross-domain fault diagnosis using only historical training data, improving accuracy by 10.53% over the next-best models.
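MSP-DES itself is more elaborate, but the generic zero-shot backbone it builds on can be sketched: learn a mapping from fault features to per-class semantic attribute vectors on seen classes, then label a test sample by the nearest attribute vector, including those of unseen classes. Everything below (attribute vectors, toy data, the ridge mapping) is an illustrative assumption, not the paper's model.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)

# Semantic descriptions: one attribute vector per fault class
# (e.g. "slow closing", "coil current dip", ... -- attributes assumed).
attrs = {"seen_A": np.array([1., 0., 1.]),
         "seen_B": np.array([0., 1., 1.]),
         "unseen": np.array([1., 1., 0.])}

# Training features exist only for the seen classes.
X_train = np.vstack([rng.normal(attrs["seen_A"], 0.1, size=(50, 3)),
                     rng.normal(attrs["seen_B"], 0.1, size=(50, 3))])
S_train = np.vstack([np.tile(attrs["seen_A"], (50, 1)),
                     np.tile(attrs["seen_B"], (50, 1))])

# Embed features into the semantic space (feature -> semantics direction).
emb = Ridge(alpha=1.0).fit(X_train, S_train)

# A test signal from the unseen class is labelled by nearest attribute vector.
x_test = rng.normal(attrs["unseen"], 0.1, size=(1, 3))
s_hat = emb.predict(x_test)
pred = min(attrs, key=lambda c: np.linalg.norm(attrs[c] - s_hat))
print("predicted class:", pred)
```

The paper's DES replaces this one-way ridge map with bidirectional semantic-feature mappings trained under a compactness/separation loss.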
Towards human digital twin: Reviewing human modelling and simulation
Pub Date: 2025-10-11 | DOI: 10.1016/j.jii.2025.100975
Enshen Zhu, Sheng Yang
The human digital twin (HDT) is a detailed and personalized digital representation of an individual, encompassing physical, cognitive, psychological, and social characteristics. The HDT, an extension of the traditional digital twin concept from the industrial engineering sector, finds applications in diverse human-centric sectors such as smart manufacturing, medical healthcare, personal fitness, and autonomous driving. Although human modelling and simulation (HMS) are essential for advancing HDT technology, existing literature reviews primarily emphasize general aspects of the HDT, including its definition, hierarchical frameworks, and various applications, rather than providing a thorough overview of HMS methods and tools. To fill this gap, this review focuses specifically on the HMS aspect of HDT, discussing the evolution of digital human simulation, HDT information models, HDT metamodels, and related tools and software. This study also provides a checklist for building the HDT metamodel from collected human data.
Dynamic product risk management in product lifecycle management of medical products
Pub Date: 2025-10-11 | DOI: 10.1016/j.jii.2025.100977
Roberto Antonio Riascos Castaneda, Egon Ostrosi, Josip Stjepandic
Medical device manufacturers must be able to manage risks at several stages of the product development process and throughout the overall lifecycle. By implementing Product Lifecycle Management (PLM) in healthcare, medical device manufacturers can efficiently oversee the complete lifecycle of their medical products. However, traditional PLM approaches do not adequately address integration with risk management. This paper presents a new framework that integrates dynamic risk management within PLM systems specifically for medical devices. The framework proposes a model for continuous risk identification and assessment that treats risk as time-dependent: the time-dependent overall risk is defined in relation to time-dependent internal risk and time-dependent external risk. A modular product architecture integrating risk management is also proposed. By supporting real-time risk assessment, modular product design allows medical devices to be designed with risk assessment and risk management built into the design process. The multi-layered approach enables real-time assessment of risks related to elementary functions, product functions, individual components, modules, and the entire medical device throughout the design and development process. A data model for PLM-based risk management and its implementation in PLM systems allows a structured approach to handling safety-critical functions and aligning effectively with regulatory requirements. Our findings demonstrate that by embedding real-time risk identification, modular risk assessment, and overall risk tracking into PLM systems, medical device manufacturers can significantly improve safety, compliance, and decision-making. The integration of risk management with configuration management in PLM ensures traceability from design to final product, encouraging better collaboration and knowledge sharing across stakeholders.
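The abstract defines time-dependent overall risk in relation to internal and external risk but does not give the functional form. One common illustrative composition, assuming both components are expressed as probabilities of independent undesired events, is:

```latex
% Illustrative assumption only; the paper's exact definition is not stated in the abstract.
R_{\mathrm{overall}}(t) = 1 - \bigl(1 - R_{\mathrm{int}}(t)\bigr)\bigl(1 - R_{\mathrm{ext}}(t)\bigr),
\qquad 0 \le R_{\mathrm{int}}(t),\, R_{\mathrm{ext}}(t) \le 1 .
```

Under this form, overall risk increases whenever either component increases and never exceeds 1, which is the qualitative behaviour one expects from a combined, continuously tracked risk measure.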
Effectiveness of debris flow mitigation measures through T-spherical fuzzy Soft Dombi aggregation operators with EDAS-based multi-criteria decision making in mountainous regions
Himanshu Dhumras, Manish Kumar, Rakesh Kumar Bajaj
Pub Date: 2025-10-10 | DOI: 10.1016/j.jii.2025.100968
Debris flows are a significant hazard to human life, infrastructure, and the environment, particularly in mountainous regions where steep terrain and extreme weather intensify their impact. Mitigating these events requires robust decision-making frameworks that can effectively handle uncertainty and complex data. This study introduces score and accuracy functions for T-spherical fuzzy soft numbers (T-SFSNs) and develops advanced Dombi aggregation operators (including weighted, ordered weighted, hybrid, and geometric forms) along with their essential operational laws and properties. To enhance decision-making flexibility and incorporate parameterized uncertainty, the traditional Evaluation Based on Distance from Average Solution (EDAS) method is systematically refined using the proposed score/accuracy functions and aggregation techniques. The modified EDAS framework expands the decision space, allowing a more effective evaluation of mitigation measures under uncertain conditions. A detailed case study on debris flow mitigation in mountainous regions demonstrates the effectiveness of the proposed methodology in selecting optimal mitigation strategies. To validate its feasibility, reliability, and superiority, a comparative analysis is conducted against existing multi-criteria decision-making (MCDM) approaches, highlighting the advantages of the enhanced method in handling complex decision scenarios. The results underscore the robustness and adaptability of the proposed framework in mitigating debris flow hazards.
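The paper refines EDAS with T-spherical fuzzy soft Dombi aggregation; the crisp EDAS backbone underneath is standard and short. The sketch below shows that backbone (average solution, positive/negative distances from it, weighted and normalized scores) on a toy decision matrix with assumed weights; the paper's extension replaces these crisp values and means with T-SFSN aggregation.

```python
import numpy as np

# Decision matrix: rows = mitigation alternatives, cols = benefit criteria
# (toy numbers; the paper works with T-spherical fuzzy soft values instead).
D = np.array([[7., 5., 9.],
              [6., 8., 7.],
              [9., 6., 6.]])
w = np.array([0.5, 0.3, 0.2])            # criteria weights (assumed)

av = D.mean(axis=0)                       # average solution per criterion
pda = np.maximum(0, D - av) / av          # positive distance from average
nda = np.maximum(0, av - D) / av          # negative distance from average

sp = pda @ w                              # weighted positive score per alternative
sn = nda @ w                              # weighted negative score per alternative
nsp = sp / sp.max()                       # normalized positive score
nsn = 1 - sn / sn.max()                   # normalized negative score

appraisal = (nsp + nsn) / 2               # final EDAS appraisal score
print("ranking (best first):", np.argsort(-appraisal))
```

The alternative with the highest appraisal score is the recommended mitigation measure; the fuzzy refinement changes how the entries and distances are computed, not this overall flow.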
Information integration-based factor object approach for object classification judgment in the system fault evolution process
Pub Date: 2025-09-30 | DOI: 10.1016/j.jii.2025.100967
Shasha Li, Tiejun Cui
Industrial system fault information is a fusion of fault-related data in which objects represent core fault events and factors quantify their dynamic states. Factor variations directly reflect object characteristics, driving the System Fault Evolution Process (SFEP): a complex progression of system functionality from normal operation to failure, shaped by temporal changes in object states and factor values. To assess how factors influence object classification during the SFEP, this paper proposes the Object Classification Judgment Method based on Integrated Factor-Object information (OCJM-IFO), a novel approach rooted in the Neighborhood Preserving Embedding (NPE) algorithm. OCJM-IFO addresses critical limitations of existing methods: it handles sparse data, avoids the curse of dimensionality, and reduces reliance on prior rules by dynamically fusing weights from labelled (intra-class) and unlabelled (inter-class) data via an optimal weight ratio coefficient. This fusion enables a comprehensive evaluation of factor impacts. Experiments on electrical system and MOSFET faults (each involving 6 factors and 100 objects) validate the method: it identifies sets of favorable, uncertain, and unfavorable factors, with results aligning closely with physical fault characteristics. The algorithm requires a data structure composed of time-series objects and supports real-time dataset updates, making it particularly well suited to intelligent real-time monitoring systems in industrial environments and offering universal applicability and easy data access. The construction process of the OCJM-IFO dataset is also presented. This study strengthens fault information integration in industrial systems, providing a robust tool for fault diagnosis and preventive maintenance, with proven engineering applicability in enhancing system reliability.
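OCJM-IFO is rooted in NPE. For reference, the standard NPE computation (LLE-style neighborhood reconstruction weights followed by a generalized eigenproblem that yields a linear projection) can be sketched as below; the neighbor count and regularization are assumptions, and the paper's pseudo-labeling and weight-fusion steps are not reproduced.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import NearestNeighbors

def npe(X, n_neighbors=5, n_components=2, reg=1e-3):
    """Neighborhood Preserving Embedding: the linear variant of LLE."""
    n, d = X.shape
    nbrs = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(X)
    _, idx = nbrs.kneighbors(X)          # idx[:, 0] is the point itself
    W = np.zeros((n, n))
    for i in range(n):
        neigh = idx[i, 1:]
        Z = X[neigh] - X[i]              # centre neighbours on x_i
        G = Z @ Z.T
        G += (reg * np.trace(G) + 1e-12) * np.eye(n_neighbors)  # regularize Gram
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W[i, neigh] = w / w.sum()        # reconstruction weights sum to 1
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    A = X.T @ M @ X                      # preserve local reconstructions ...
    B = X.T @ X + reg * np.eye(d)        # ... under a linear projection
    _, vecs = eigh(A, B)                 # generalized symmetric eigenproblem
    P = vecs[:, :n_components]           # directions with smallest eigenvalues
    return X @ P                         # low-dimensional embedding

X = np.random.default_rng(4).normal(size=(100, 6))  # random stand-in:
Y = npe(X)                                          # 100 objects, 6 factors
print(Y.shape)                                      # (100, 2)
```

The 100-by-6 shape mirrors the paper's experimental setup (100 objects, 6 factors), though the data here is random rather than measured fault data.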