On the one hand, due to changes in the operating conditions or working environment of equipment, the degradation process often exhibits two-phase or even multi-phase characteristics. In contrast to single-phase degradation models, two-phase degradation modeling requires accounting for the variability of the change points and analyzing the characteristics of the degraded state at those points. On the other hand, as sensor technology advances, multi-sensor data collection systems have become increasingly widespread, and combining data from several sources can considerably improve the accuracy of remaining useful life (RUL) estimation. However, current research fails to incorporate both of these conditions simultaneously, so constructing a multivariate phased degradation model and estimating the RUL remain a significant challenge. With this in mind, this paper constructs a two-variable phased degradation model based on the Wiener process. The analytic expression of the RUL is derived by taking into account the diversity of individuals and the random nature of change points, and a novel approach is provided for precise detection of change points. The proposed model’s validity is ultimately confirmed using a simulation dataset as well as two real working datasets.
Title: Reliability analysis and remaining useful life estimation of a two-variable phased degradation system
Authors: Bincheng Wen, Xin Zhao, Haizhen Zhu, Jinjun Cheng, Changjun Li, Mingqing Xiao
Pub Date: 2025-09-16 DOI: 10.1016/j.compind.2025.104368
Computers in Industry, vol. 173, Article 104368
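As a rough illustration of the kind of process this abstract describes, the sketch below simulates a single degradation path whose drift switches at a change point. This is a generic two-phase Wiener process with a fixed change point, not the authors' model; the function name and parameter values are invented for illustration.

```python
import random

def simulate_two_phase_wiener(t_change, drift1, drift2, sigma,
                              dt=0.1, t_end=10.0, seed=0):
    """Simulate one degradation path X(t): drift `drift1` before the
    change point, `drift2` after it; `sigma` scales the Brownian noise.
    Returns (times, values)."""
    rng = random.Random(seed)
    times, values = [0.0], [0.0]
    t, x = 0.0, 0.0
    while t < t_end:
        mu = drift1 if t < t_change else drift2
        # Wiener increment: mu*dt + sigma*sqrt(dt)*N(0, 1)
        x += mu * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
        times.append(t)
        values.append(x)
    return times, values

times, path = simulate_two_phase_wiener(t_change=5.0, drift1=0.2,
                                        drift2=1.0, sigma=0.05)
```

In a full RUL model the change point would itself be random and estimated from data; here it is fixed only to keep the sketch short.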
Pub Date: 2025-09-12 DOI: 10.1016/j.compind.2025.104363
Wenjin Chen, Jia Sheng Yang, Chenbo Xia, Yaosong Li, Xu Xiao
The Road Damage Detection System (RDDS) is crucial in intelligent transportation networks, enhancing driving safety, comfort, and overall traffic efficiency. A key factor in the system's performance is the effectiveness of the underlying detection algorithm. Currently, the YOLOv8 algorithm is widely applied in defect detection, but it faces challenges due to the varying scales of road damage. Specifically, the convolutional downsampling module in the backbone network often has a limited receptive field, reducing its ability to capture global information, while the multi-scale feature fusion network may lose critical local defect details and deep location information. These limitations hinder YOLOv8’s performance in detecting pavement defects. To address these issues, we propose an enhanced algorithm, YOLOv8 with Context Capture and Slimneck Structure (YOLOv8-CCS), which targets multi-scale defect characteristics and the prevalence of small-sized targets in road damage detection. To overcome the limited receptive field and improve global context awareness, we have integrated an enhanced context-guided module downsampling component (E-ContextGuidedBlock_Down), which expands the receptive field and improves context capture. Additionally, we replace the existing multi-scale fusion network with Ghost Shuffle Convolution (GSConv)-Slimneck and introduce the Enhanced VoVNet-based Ghost Shuffle Cross Stage Partial (VoVGSCSP-E) module in specific layers. To further enhance feature extraction and minimize information loss during fusion, we incorporate the Content-Aware ReAssembly of Features (CARAFE) upsampling module and a weighted feature fusion method. Finally, the Multi-Level Context Attention Bottleneck (MLCABOT) module is added between the backbone network and the multi-scale feature fusion network, improving the connectivity and overall feature extraction capability. 
In validation, our proposed method outperformed YOLOv8 by 3 %, 4.7 % and 3.8 % on the RDD-2022, ROAD-MAS and Unmanned Aerial Vehicle Asphalt Pavement Distress Dataset (UAPD) datasets, respectively. It also achieved the highest F1 score among comparable detection models and ranked among the top three in inference speed. These results highlight the potential of YOLOv8-CCS for real-time road damage detection, providing a more accurate and comprehensive solution for urban pavement management. Such a system, equipped with an advanced detection algorithm, can significantly improve road maintenance efficiency and enhance driving safety.
Title: Road surface damage detection based on enhanced YOLOv8 (Article 104363)
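The F1 score cited in the validation above is the harmonic mean of precision and recall. A minimal computation from detection counts (the counts below are made up, not from the paper):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from raw counts of
    true positives, false positives, and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# e.g. 80 true positives, 20 false positives, 10 false negatives
score = f1_score(80, 20, 10)
```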
Pub Date: 2025-09-12 DOI: 10.1016/j.compind.2025.104365
Yonggang Li, Yaotong Su, Lei Xia, Yuanjin Zhang, Weinong Wu, Longjiang Li
When evaluating the reliability of a wind power system, differentiated sampling and accurate prediction over extensive datasets are essential. Existing studies frequently constrain raw data within narrowly defined parameter spaces to enhance statistical significance, but such an approach can yield overly optimistic reliability evaluations by neglecting rare yet crucial failure scenarios, thereby underestimating systemic risks and undermining robustness. To date, the tension between high data acquisition rates and the intrinsic characteristics of the collected data remains inadequately addressed, and precise data distribution models capable of comprehensively assessing wind power system reliability are still needed. In response, Long Short-Term Memory (LSTM) models are employed to bridge this gap, predicting wind power generation through analyses of data at varying granularities. Subsequently, an Improved Latin Hypercube Sampling (ILHS) methodology is implemented to partition sampling intervals, integrating with the Monte Carlo (MC) method for wind power data sampling. This reliability assessment model fully exploits the flexibility of the proposed sampling technique, enhancing the precision of sample probability distributions, interval segmentation, and data stratification. Empirical evidence demonstrates that the proposed algorithm exhibits superior predictive accuracy and statistical efficacy relative to conventional methodologies, offering a robust solution for assessing the reliability of wind power integration. This study evaluates the practical reliability of a local wind power integration system in Southwest China. Additionally, methods for discerning vulnerabilities are systematically applied to fortify critical power buses and augment overall system reliability.
Title: Reliability evaluation of wind power systems by integrating granularity-related latin hypercube sampling with LSTM-based prediction (Article 104365)
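Standard Latin Hypercube Sampling, which the ILHS method above builds on, divides each dimension into equal-probability strata and draws exactly one point per stratum. The sketch below is the textbook scheme, not the paper's improved variant:

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Basic LHS on the unit hypercube: split [0, 1) into n_samples
    equal strata per dimension, draw one point per stratum, and shuffle
    strata independently per dimension so rows pair strata at random."""
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i, s in enumerate(strata):
            # one uniform draw inside stratum s: [s/n, (s+1)/n)
            samples[i][d] = (s + rng.random()) / n_samples
    return samples

pts = latin_hypercube(10, 2)
```

Each marginal is guaranteed to cover all ten strata, which is what gives LHS better coverage than plain Monte Carlo at the same sample count.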
Pub Date: 2025-09-10 DOI: 10.1016/j.compind.2025.104364
Shihao Duan, Hengqian Wang, Chuang Peng, Lei Chen, Kuangrong Hao
Quality prediction holds significant importance in monitoring industrial processes, with soft sensors proving to be highly effective in this domain. However, industrial processes frequently exhibit multirate characteristics due to measurement and cost limitations. These characteristics lead to periodically missing values and varying dynamics across variables sampled at different rates, presenting substantial challenges to current soft sensor techniques. To tackle these obstacles, we propose a Multirate Dynamic Variational Compensation Network with Tracking (MR-TDVCN). Utilizing a generic preprocessor and dynamic variational inference, MR-TDVCN effectively captures and characterizes crucial and diverse temporal dynamics related to multiple sampling rates, enabling comprehensive dynamic modeling of inhomogeneous multirate data. Based on this, a feature prism dynamic compensation network is developed to process multirate sequences for local feature compensation and global temporal relationship correction, hierarchically and progressively. This mitigates the information loss due to multirate sampling, providing richer and more holistic feature representations for quality prediction. Finally, a feature tracking strategy is customized for multirate processes to alleviate the label sparsity problem. MR-TDVCN demonstrates superior performance on the common debutanizer column dataset, outperforming existing models. It is further applied to the polyester esterification process dataset to address real-world multirate challenges.
Title: A novel dynamic variational compensation network with tracking for quality prediction of multirate industrial processes (Article 104364)
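A basic nuisance in multirate data is aligning a slowly sampled variable with a fast one. A simple zero-order-hold alignment (a common baseline, not the paper's compensation network) can be sketched as:

```python
def align_multirate(fast_t, slow_t, slow_v):
    """Zero-order hold: for each fast-rate timestamp, carry the latest
    slow-rate sample forward; None until the first slow sample arrives.
    Both timestamp lists are assumed sorted ascending."""
    out, j = [], -1
    for t in fast_t:
        while j + 1 < len(slow_t) and slow_t[j + 1] <= t:
            j += 1
        out.append(slow_v[j] if j >= 0 else None)
    return out

aligned = align_multirate([0, 1, 2, 3, 4, 5], [0, 3], [10, 20])
```

The step pattern this produces is exactly the "periodically missing" structure that learned compensation methods try to improve on.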
Pub Date: 2025-09-10 DOI: 10.1016/j.compind.2025.104367
Shidong Wang, Renato Pajarola
We present a novel human-in-the-loop framework, CLOD-ReCo, for controllable residential community (ReCo) layout design in the form of multiple levels-of-detail (LODs) for a given construction plot boundary. Unlike existing end-to-end methods that can only predict a basic 2D raster ReCo plan (LOD0), our approach simulates the design process of architects: it can not only be automated to generate diverse, vector-based, and high-quality 3D ReCo plans (LOD1∼4), but can also interact with users during the entire generation process, from sketching (including the building numbers and locations) to LOD4 (including a realistic representation of a group of buildings and their surroundings), making humans and AI co-design the final layout plan. Intensive experiments are conducted to demonstrate the strengths of our approach. The quantitative evaluation, the qualitative comparison, and the subjective evaluation by architects show the ability of our method to generate high-quality and plausible results, which are better than those produced by prior methods and comparable to real-world ReCo plans designed by professional architects.
Title: A controllable generative design framework for residential communities with multi-scale architectural representations (Article 104367)
Pub Date: 2025-09-10 DOI: 10.1016/j.compind.2025.104358
Chengyi Shen, Changao Liu, Shijian Luo, Deyin Zhang, Yao Wang
As the automotive industry matures, automotive exterior design has become a key factor affecting market performance and user purchasing decisions. However, current assessment methods mainly rely on expert experience and lack systematic use of user perception knowledge. To remedy this issue, this study introduces a learning model for perceived visual complexity (PVC) assessment of automotive 3D shapes, grounded in user cognition. It aims to connect user perception with shape attributes. PVC offers key advantages, including quantifiability and relevance to both aesthetics and functionality. To develop and validate this model, we first conducted paired comparison experiments to measure the PVC of automotive 3D shapes, thereby establishing a dataset correlating user assessments with the shape attributes influencing such evaluations. These attributes were then translated into computable features informed by human visual perception, followed by correlation analysis for feature selection. Finally, a variety of regression models and feature combinations were employed to construct learning models for assessment, from which the best-performing representative model was identified. The evaluation results demonstrated the representative learning model's efficacy in predicting the PVC of automotive 3D shapes. Its average Spearman correlation with human subjective evaluations was 0.7991 based on K-fold cross-validation.
Title: A learning model for perceived visual complexity assessment of automotive 3D shapes based on visual perception elements (Article 104358)
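The Spearman correlation used for evaluation above is the Pearson correlation of the rank vectors of the two variables. A self-contained version, with average ranks for ties:

```python
def rank(values):
    """1-based ranks; tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of zero-based positions i..j, plus 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A value of 0.7991, as reported, indicates a strong monotonic agreement between model predictions and human judgments.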
Pub Date: 2025-09-09 DOI: 10.1016/j.compind.2025.104362
Yuxian Zhang, Xuhua Ren, Jixun Zhang
Conventional compliance checking for shield tunnel design models relies on two-dimensional drawings and the designer's subjective interpretation of specifications, which limits efficiency and introduces potential errors. This study developed a semi-automated framework for shield tunnel design compliance checking using ontology and natural language processing. The adopted methodology establishes a shield tunnel design ontology (STDO) model, which includes six classes of information and relationships that need to be considered in the design phase. A novel method for converting text into a computer-readable format was proposed for design specification content. The design specification text is converted into a word sequence format, including STDO semantics, through word segmentation and semantic alignment. The pattern-matching method converts semantically enriched specification text into a Prolog rule format by extracting grammatical structure elements and transforming logical checking elements. The established design compliance checking framework generates facts through interaction with the building information model and performs compliance reasoning tasks using Prolog rules derived from the specification text. To demonstrate the effectiveness of the proposed conversion method and the compliance checking framework, a shield tunnel project was selected for experimental verification. The results showed the following: (1) The proposed method of converting specification text into predicate logic achieved an F1 of 86.25 %, providing a convenient approach for transforming it into a computer-readable format. (2) The established semi-automated framework could provide a convenient solution to assist in conducting model compliance checking tasks according to both quantitative and non-quantitative clauses. The results of this study provide significant guidance for the intelligent design of shield tunnels.
Title: A semi-automated compliance checking framework for shield tunnel design integrating ontology and natural language processing (Article 104362)
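The paper encodes specification clauses as Prolog rules checked against facts from the building information model. The same idea expressed as Python predicates, with entirely hypothetical rule names and thresholds (not taken from any real tunnel specification), might look like:

```python
# Each rule pairs a human-readable clause with a predicate over a
# model-element dict. Thresholds below are invented for illustration.
RULES = [
    ("segment thickness >= 0.35 m",
     lambda e: e.get("thickness_m", 0.0) >= 0.35),
    ("concrete grade at least C50",
     lambda e: e.get("concrete_grade", 0) >= 50),
]

def check_compliance(element):
    """Return descriptions of every rule the element violates."""
    return [desc for desc, pred in RULES if not pred(element)]

violations = check_compliance({"thickness_m": 0.30, "concrete_grade": 55})
```

In the paper's pipeline these rules are not hand-written but generated from the specification text itself, which is what the NLP conversion step automates.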
Pub Date: 2025-09-09 DOI: 10.1016/j.compind.2025.104359
Yiming He, Weiming Shen
Domain offset is an inevitable phenomenon in industrial signals for fault diagnosis. This article discusses a neglected problem of the traditional evaluation benchmark for data-driven fault diagnosis, i.e., the presence of identical individual information in both the training and testing sets. An individual generalization framework is explored using independent test individuals towards a more reasonable fault diagnosis benchmark. Furthermore, an improved lightweight transformer is applied to enhance the dynamic global feature extraction and irrelevant information filtering. Comprehensive experiments are performed on the Paderborn University bearing dataset and a machine-level motor dataset collected from real production lines. The results show that the traditional benchmark cannot effectively evaluate the screening ability for fault-irrelevant features and the generalization ability for new individuals. The proposed lightweight transformer achieves the highest generalization performance with great application potential.
{"title":"An individual generalization framework based on independent samples towards a more reasonable fault diagnosis benchmark","authors":"Yiming He, Weiming Shen","doi":"10.1016/j.compind.2025.104359","DOIUrl":"10.1016/j.compind.2025.104359","url":null,"abstract":"<div><div>Domain offset is an inevitable phenomenon in industrial signals for fault diagnosis. This article discusses a neglected problem of the traditional evaluation benchmark for data-driven fault diagnosis, i.e., the presence of identical individual information in both the training and testing sets. An individual generalization framework is explored using independent test individuals towards a more reasonable fault diagnosis benchmark. Furthermore, an improved lightweight transformer is applied to enhance the dynamic global feature extraction and irrelevant information filtering. Comprehensive experiments are performed on the Paderborn University bearing dataset and a machine-level motor dataset collected from real production lines. The results show that the traditional benchmark cannot effectively evaluate the screening ability for fault-irrelevant features and the generalization ability for new individuals. The proposed lightweight transformer achieves the highest generalization performance with great application potential.</div></div>","PeriodicalId":55219,"journal":{"name":"Computers in Industry","volume":"173 ","pages":"Article 104359"},"PeriodicalIF":9.1,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145020158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
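The individual-level evaluation described above can be sketched in a few lines: the key is to partition the data by individual ID rather than by sample, so that no individual (e.g., a specific bearing or motor) contributes signals to both the training and testing sets. The function name, IDs, and split ratio below are illustrative, not taken from the paper.

```python
import random

def split_by_individual(individual_ids, test_fraction=0.25, seed=0):
    """Partition sample indices so that every individual appears in
    exactly one of the train/test sets, preventing individual-level
    information from leaking across the split.

    individual_ids: list where individual_ids[i] is the ID of the
    individual (machine, bearing, ...) that produced sample i.
    Returns (train_idx, test_idx) as lists of sample indices.
    """
    rng = random.Random(seed)
    individuals = sorted(set(individual_ids))
    rng.shuffle(individuals)
    n_test = max(1, round(len(individuals) * test_fraction))
    test_set = set(individuals[:n_test])
    train_idx = [i for i, g in enumerate(individual_ids) if g not in test_set]
    test_idx = [i for i, g in enumerate(individual_ids) if g in test_set]
    return train_idx, test_idx

# Usage: 8 signal segments recorded from 4 bearings. A conventional random
# split could place segments of the same bearing in both sets; this one cannot.
ids = ["b1", "b1", "b2", "b2", "b3", "b3", "b4", "b4"]
train_idx, test_idx = split_by_individual(ids, test_fraction=0.25)
assert not {ids[i] for i in train_idx} & {ids[i] for i in test_idx}
```

The same effect can be obtained with scikit-learn's `GroupShuffleSplit`, treating the individual ID as the group label.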
Pub Date : 2025-09-08DOI: 10.1016/j.compind.2025.104356
Dandi Yang , Peng Wang , Jingyi Lu , Chuang Guan , Hongli Dong
In recent years, intelligent pipeline leakage detection technology has played a crucial role in ensuring pipeline safety and energy security. However, most existing methods assume balanced datasets, overlooking the inherent imbalance between normal and abnormal data in real-world scenarios. This limitation hampers effective feature extraction for anomaly detection. To address this challenge, we propose a novel multi-channel and multi-branch one-dimensional convolutional neural network (MCB1DCNN). The model integrates a multi-channel convolution module and a multi-branch network structure to extract both global and local signal features. To mitigate the impact of data imbalance, we design an adaptive weighted cross-entropy loss function, which dynamically adjusts the loss weight of minority class samples based on the imbalance ratio. Furthermore, we construct a multi-channel acoustic signal dataset for oil and gas pipelines using the overlapping sample segmentation method. Variational mode decomposition (VMD) is applied to decompose acoustic signals into different frequency components, enabling comprehensive feature extraction. Ablation experiments analyze the impact of key model parameters. Experimental results show that MCB1DCNN outperforms several state-of-the-art methods in terms of accuracy, F1 score, false alarm rate, and missed alarm rate. These findings demonstrate its superior performance and practical applicability in real-world pipeline leakage detection.
{"title":"Leakage detection of oil and gas pipelines based on a multi-channel and multi-branch one-dimensional convolutional neural network with imbalanced samples","authors":"Dandi Yang , Peng Wang , Jingyi Lu , Chuang Guan , Hongli Dong","doi":"10.1016/j.compind.2025.104356","DOIUrl":"10.1016/j.compind.2025.104356","url":null,"abstract":"<div><div>In recent years, intelligent pipeline leakage detection technology has played a crucial role in ensuring pipeline safety and energy security. However, most existing methods assume balanced datasets, overlooking the inherent imbalance between normal and abnormal data in real-world scenarios. This limitation hampers effective feature extraction for anomaly detection. To address this challenge, we propose a novel multi-channel and multi-branch one-dimensional convolutional neural network (MCB1DCNN). The model integrates a multi-channel convolution module and a multi-branch network structure to extract both global and local signal features. To mitigate the impact of data imbalance, we propose an adaptive weighted cross-entropy loss function. This function dynamically adjusts the loss weight of minority class samples based on the imbalance ratio. Furthermore, we construct a multi-channel acoustic signal dataset for oil and gas pipelines using the overlapping sample segmentation method. Variational mode decomposition (VMD) is applied to decompose acoustic signals into different frequency components, enabling comprehensive feature extraction. Ablation experiments analyze the impact of key model parameters. Experimental results show that MCB1DCNN outperforms several state-of-the-art methods in terms of accuracy, F1 score, false alarm rate, and missing alarm rate. 
These findings demonstrate its superior performance and practical applicability in real-world pipeline leakage detection.</div></div>","PeriodicalId":55219,"journal":{"name":"Computers in Industry","volume":"173 ","pages":"Article 104356"},"PeriodicalIF":9.1,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145020142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
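The abstract above does not give the exact weighting formula, but the idea of scaling each class's loss weight by the batch imbalance ratio can be sketched with a common inverse-frequency formulation; the function name and the normalisation choice here are illustrative assumptions, not the paper's implementation.

```python
import math

def adaptive_weighted_cross_entropy(probs, labels):
    """Weighted cross-entropy where each class's loss weight is the inverse
    of its frequency in the batch, normalised to average 1 over samples, so
    minority-class samples (e.g., rare leakage events) contribute more.

    probs:  predicted probability of the true class for each sample
    labels: integer class labels (e.g., 0 = normal, 1 = leak)
    """
    n = len(labels)
    counts = {c: labels.count(c) for c in set(labels)}
    raw = {c: n / counts[c] for c in counts}          # inverse frequency
    norm = sum(raw[c] * counts[c] for c in raw) / n   # rescale to mean 1
    weights = {c: raw[c] / norm for c in raw}
    return -sum(w * math.log(p)
                for p, y in zip(probs, labels)
                for w in [weights[y]]) / n

# With 3 normal samples and 1 leak, the leak sample's weight (2.0) is three
# times the normal samples' weight (2/3), so misclassifying the leak
# dominates the loss.
```

For a balanced batch all weights collapse to 1 and the function reduces to the ordinary cross-entropy; in a framework such as PyTorch the same effect is usually obtained by passing per-class weights to the built-in cross-entropy loss.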
Pub Date : 2025-09-08DOI: 10.1016/j.compind.2025.104361
Xuejiao Li , Yang Cheng , Charles Møller , Jay Lee
In the era of Industry 4.0, artificial intelligence (AI) is expected to play an increasingly pivotal role within industrial systems. Despite the recent trend across various industries to adopt AI, actual adoption is not as mature as commonly perceived. A significant factor contributing to this lag is the data issues encountered in AI implementation. How to address these data issues is a significant concern confronting both industry and academia. Thus, this study conducts a comprehensive meta-review of data issues and corresponding methods in industrial AI. Eighty-two data issues are identified and categorized into seven stages of the data lifecycle. To supplement existing research, which focuses mainly on issues arising in historical data, this study subsequently discusses the management of real-time sensor data and expert domain knowledge. Meanwhile, it proposes a model-aware data preparation approach, which integrates data characteristics with specific AI model requirements to enhance data usability and algorithm alignment. This approach is further integrated into a conceptual framework that combines managerial and technical perspectives for systematically resolving data issues. The framework provides actionable insights and a systematic method for AI practitioners and industrial system developers to anticipate and address data-related challenges. Finally, the study highlights future research directions. This study advances the existing body of knowledge, supports a seamless transition from traditional model-centric AI to data-centric AI, and offers practical guidelines for professionals navigating the complexities of achieving data excellence in industrial AI applications.
{"title":"Data issues in industrial AI systems: A meta-review and research strategy","authors":"Xuejiao Li , Yang Cheng , Charles Møller , Jay Lee","doi":"10.1016/j.compind.2025.104361","DOIUrl":"10.1016/j.compind.2025.104361","url":null,"abstract":"<div><div>In the era of Industry 4.0, artificial intelligence (AI) is assumed to play an increasingly pivotal role within industrial systems. Despite the recent trend within various industries to adopt AI, the actual adoption of AI is not as developed as perceived. A significant factor contributing to this lag is the data issues in AI implementation. How to address these data issues stands as a significant concern confronting both industry and academia. Thus, this study conducts a comprehensive meta-review of data issues and corresponding methods in industrial AI. Eighty-two data issues are identified and categorized into seven stages of the data lifecycle. To supplement the existing research that focuses more on data issues arising in historical data, this study subsequently discusses the management of real-time sensor data and expert domain knowledge. Meanwhile, it proposes a model-aware data preparation approach, which integrates the data characteristics with specific AI model requirements to enhance data usability and algorithm alignment. This approach is further integrated into a conceptual framework that combines managerial and technical perspectives for systematically resolving data issues. The framework provides actionable insights and a systematic method for AI practitioners and industrial system developers to anticipate and address data-related challenges. Finally, the study highlights future research directions. 
This study advances the existing body of knowledge, supports a seamless transition from traditional model-centric AI to data-centric AI, and offers practical guidelines for professionals navigating the complexities of achieving data excellence in industrial AI applications.</div></div>","PeriodicalId":55219,"journal":{"name":"Computers in Industry","volume":"173 ","pages":"Article 104361"},"PeriodicalIF":9.1,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145020141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}