Pub Date: 2026-04-01 | Epub Date: 2025-12-15 | DOI: 10.1016/j.aei.2025.104223
Fan Ye , Naifu Deng , Chuping Wu , Jianjiang Peng , Wang Guo , Shuangxi Cao , Qinglong Zhang
Conventional methods for compaction quality control and acceptance depend on random sampling for density validation and risk producing unrepresentative results and systematic errors. Intelligent compaction (IC) enhances highway subgrade evaluation but is limited in accuracy and in the availability of labeled data. Although acoustic indexes are effective for characterizing coarse-grained soils, their application is constrained by complex sound fields with significant air attenuation and multi-source noise interference. To address these challenges, this study presents a novel intelligent compaction assessment method that integrates a contact-type acoustic compaction model, a newly defined contact-type sound compaction value (CSCV), and few-shot intelligent assessment with uncertainty quantification. The proposed approach not only overcomes the weakening and extraction difficulties of effective acoustic signals but also enables reliable model training with limited labeled samples. A case study on the Chenglong project in China shows that the intelligent assessment results correlate highly with actual values for weathered slate and gravelly soil, with maximum absolute errors of 1.2516 mm and 2.11 %, respectively. Integrating this method into the IC system enhances highway quality and promotes the adoption of IC technology.
Title: Intelligent compaction assessment of coarse-grained subgrade using contact-type acoustic wave detection with few-shot learning in complex sound fields. Advanced Engineering Informatics, Vol. 71, Article 104223.
Pub Date: 2026-04-01 | Epub Date: 2025-12-30 | DOI: 10.1016/j.aei.2025.104288
Rui Yuan , Hengyu Liu , Yong Lv , Yuejian Chen , Xingkai Yang , Hewenxuan Li , David Chelidze
With the advancement of predictive maintenance strategies, the accuracy of degradation tracking in mechanical systems has become a growing concern. This paper proposes a novel ball tree structure-informed phase space warping (BTPSW) algorithm, which couples high-dimensional nonlinear dynamics with efficient geometric search strategies to robustly track bearing degradation. To tackle the challenges of high-dimensional data and the uneven distribution of points in the reconstructed phase space (PS), a physics-informed dynamic model is constructed to simulate outer-race crack evolution under speed fluctuations. The resulting vibration signals are reconstructed into a high-dimensional PS, where trajectory curvature serves as a degradation indicator. The BTPSW algorithm reduces overlap in high-dimensional spaces, improving data-search efficiency. Furthermore, considering fluctuations in the optimal reconstruction parameters, the BTPSW algorithm demonstrates enhanced data adaptability, mitigating the accuracy loss such fluctuations cause. The simulated crack propagation in the outer race is then used to validate the BTPSW algorithm for tracking crack degradation. Both simulation and accelerated degradation experiments confirm that BTPSW achieves high tracking accuracy, strong parameter robustness, and superior adaptability under fluctuating operating conditions, making it a powerful tool for predictive maintenance and long-term reliability assessment.
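As a rough illustration of the phase-space machinery described above, the sketch below delay-embeds a vibration-like signal and measures how far a later trajectory drifts from a healthy reference. The brute-force nearest-neighbor search stands in for the ball tree, and the sinusoidal signals, embedding dimension, and delay are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def delay_embed(signal, dim, tau):
    """Takens delay embedding: map a 1-D signal into a dim-dimensional
    reconstructed phase space using delay tau (in samples)."""
    n = len(signal) - (dim - 1) * tau
    return np.column_stack([signal[i * tau : i * tau + n] for i in range(dim)])

def tracking_metric(reference, current):
    """Mean distance from each current-state point to its nearest point on
    the healthy reference trajectory: a simple PSW-style degradation
    statistic (brute-force search here; BTPSW accelerates it with a ball tree)."""
    d = np.linalg.norm(current[:, None, :] - reference[None, :, :], axis=2)
    return d.min(axis=1).mean()

t = np.linspace(0.0, 8.0 * np.pi, 400)
healthy = np.sin(t)                           # baseline vibration surrogate
degraded = np.sin(t) + 0.3 * np.sin(3.0 * t)  # extra harmonic mimicking a crack

ref = delay_embed(healthy, dim=3, tau=5)
drift_healthy = tracking_metric(ref, delay_embed(healthy, dim=3, tau=5))
drift_degraded = tracking_metric(ref, delay_embed(degraded, dim=3, tau=5))
print(drift_healthy, drift_degraded)  # degraded drift is clearly larger
```

In a real pipeline the reference trajectory holds many thousands of high-dimensional points, which is where replacing the brute-force search with a ball tree pays off.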
Title: Ball tree structure-informed phase space warping: a robust algorithm for dynamic degradation tracking under variable speed conditions. Advanced Engineering Informatics, Vol. 71, Article 104288.
This study aims to identify the most suitable technique for computing the effective wind speed (EWS) in wind farms. An Extended Kalman Filter (EKF), combined with measurements of the blade pitch angle, generator torque, and rotor speed, is employed to estimate the EWS of individual turbines accurately. Furthermore, information extracted from the turbine wake within the wind farm can be utilized to enhance estimation accuracy and improve robustness against potential faults and failures. For this purpose, the parametric Jensen wake model is adopted due to its capability for real-time analysis across different wake conditions. To integrate these sources of information, a deep neural network model (CNN-LSTM) is developed to fuse the EKF-based estimates with those derived from the wake model. The proposed method is validated using WFSim simulations of a wind farm comprising two turbines. Results show that the CNN-LSTM model outperforms the individual approaches, improving accuracy by about 40% while maintaining robustness under faulty data. In summary, simulations indicate that while the EKF alone provides the most accurate EWS estimates under fault-free conditions, a fusion with the parametric wake model ensures reliable and precise estimation in the presence of faults.
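For reference, the parametric Jensen (Park) wake model adopted above reduces to a single algebraic velocity-deficit formula. The sketch below evaluates it with illustrative turbine parameters; the rotor radius, induction factor, and decay coefficient are assumptions, not values from the paper.

```python
def jensen_wake_speed(u_inf, x, rotor_radius, a=1.0 / 3.0, k=0.075):
    """Wind speed inside the wake at downstream distance x (Jensen/Park model).

    u_inf        : free-stream wind speed [m/s]
    x            : downstream distance from the rotor [m]
    rotor_radius : rotor radius r0 [m]
    a            : axial induction factor (Betz-optimal 1/3 assumed here)
    k            : wake decay coefficient (~0.075 onshore, ~0.04 offshore)
    """
    deficit = 2.0 * a / (1.0 + k * x / rotor_radius) ** 2
    return u_inf * (1.0 - deficit)

# Example: speed felt by a turbine 500 m behind a rotor of radius 60 m
print(jensen_wake_speed(10.0, 500.0, 60.0))  # → ≈7.48 m/s
```

Because the deficit decays as the inverse square of the normalized downstream distance, the model is cheap enough for the real-time, per-turbine evaluation the fusion scheme requires.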
Title: Fault-tolerant effective wind speed estimation in wind farms via EKF and deep learning fusion. Authors: Seyyede Marzieh Mousavi, Sayyed Majid Esmailifar, Horst Schulte. DOI: 10.1016/j.aei.2025.104306. Advanced Engineering Informatics, Vol. 71, Article 104306.
Pub Date: 2026-04-01 | Epub Date: 2026-01-08 | DOI: 10.1016/j.aei.2026.104312
Yuntao Zou , Zihui Lin , Qianqi Zhang , Zhichun Liu , Zeling Xu
The battery energy consumption system of lunar exploration rovers, as mission-critical equipment, confronts severe challenges under extreme environmental constraints. However, existing modeling methods face fundamental dilemmas: dynamic uncertainty leads to highly ambiguous constraint boundaries, making it difficult for traditional mathematical languages to describe complex coupling relationships; and even when mathematical representations are constructed, the resulting high-dimensional nonlinear optimization problems become computationally intractable, with existing algorithms unable to overcome these complexity barriers and lacking interpretability. In response, this paper proposes a hierarchical Stackelberg game optimization framework based on semantic embedding. The framework transcends traditional optimization paradigms by deeply integrating the cognitive intelligence of large language models with the mathematical precision of game theory: large language models, recognizing that overall behavior cannot be predicted from simple combinations of parts, process fuzzy constraints and integrate cross-domain knowledge through semantic understanding, while the hierarchical structure of Stackelberg games naturally matches the hierarchical decision-making requirements of battery allocation, with multi-agent game frameworks effectively handling coordination and competition among batteries. Through semantic embedding, natural-language constraints are automatically transformed into mathematical objects comprehensible to game participants. Cognitive intelligence handles the “incomputable” complexity components while game theory ensures “provable” mathematical convergence, synergistically achieving the paradigm transition from “perfect rationality” to “bounded rationality” and providing a theoretically rigorous, practically viable unified solution for intelligent decision-making in mission-critical systems.
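The leader–follower structure of a Stackelberg game can be made concrete with backward induction: the leader chooses its move anticipating the follower's best response. The toy payoff functions and grids below are purely illustrative assumptions (a power manager allocating a normalized budget, a battery agent choosing a usage level), not the paper's model.

```python
import numpy as np

X = np.linspace(0.0, 1.0, 101)  # leader's allocation choices
Y = np.linspace(0.0, 1.0, 101)  # follower's usage choices

def follower_payoff(x, y):
    # Illustrative assumption: follower tries to track 80% of the allocation.
    return -(y - 0.8 * x) ** 2

def leader_payoff(x, y):
    # Illustrative assumption: leader balances budget deviation against drain.
    return -(x - 0.5) ** 2 - 0.1 * y

def best_response(x):
    """Follower's optimal reply to a fixed leader move x (second stage)."""
    return Y[np.argmax([follower_payoff(x, y) for y in Y])]

# Backward induction: the leader optimizes while anticipating the
# follower's best response -- this yields the Stackelberg equilibrium.
x_star = max(X, key=lambda x: leader_payoff(x, best_response(x)))
y_star = best_response(x_star)
print(x_star, y_star)
```

Grid search keeps the sketch transparent; the semantic-embedding step in the paper would correspond to constructing payoff functions like these from natural-language constraints rather than hand-coding them.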
Title: Large language models enable semantic-guided hierarchical games for intelligent battery coordination. Advanced Engineering Informatics, Vol. 71, Article 104312.
Pub Date: 2026-04-01 | Epub Date: 2026-01-06 | DOI: 10.1016/j.aei.2025.104294
Wenbin Gu , Yushang Cao , Yuxin Li , Nuandong Li , Lei Wang , Na Tang , Minghai Yuan , Fengque Pei
With the emergence of personalized and small-batch production modes, multi-agent manufacturing systems (MAMS) have become a research hotspot for intelligent workshops owing to their self-organizing capabilities. The hybrid flow shop scheduling problem with unrelated parallel machines (HFSP-UPM) presents significant decision-making challenges due to its heterogeneous resources and dynamic environment. Meanwhile, multi-agent deep reinforcement learning (MADRL) is a prevalent method for addressing complex decision-making problems. Therefore, this paper proposes a pre-trained large language model (LLM)-empowered MADRL method for HFSP-UPM that considers stage-wise coordination to minimize the makespan. Specifically, a novel MAMS is first developed in which each processing stage is modeled as an agent, enabling high autonomy and reducing decision dimensionality. Then, a multi-agent collaborative scheduling framework based on the centralized-training-with-decentralized-execution (CTDE) paradigm is proposed, together with a communication mechanism among agents to promote coordination and collaboration. Through structured prompt engineering, an LLM-empowered state space and action selection are designed to enhance semantic understanding and policy updates. Finally, LLM-empowered multi-agent proximal policy optimization (LLM-MAPPO) is employed to train the scheduling model. Experimental results on 330 instances show the superiority of the proposed method over scheduling rules, genetic programming (GP) rules, several advanced DRL-based methods, and the baseline MAPPO, achieving over 8% performance improvement in most instances. Furthermore, a generalization experiment demonstrates that the proposed method can self-adjust in response to production-scenario changes, and an example verification validates both the method and the experiment platform.
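To make the problem setting concrete, the sketch below computes the makespan of a tiny HFSP-UPM instance with a greedy earliest-completion rule. This is only a naive baseline of the kind a learned scheduling policy would compete against, and the instance data are invented for illustration.

```python
def greedy_makespan(proc_times):
    """Greedy stage-wise list scheduling for a hybrid flow shop with
    unrelated parallel machines.

    proc_times[s][j][m] = processing time of job j on machine m at stage s.
    At each stage, jobs are scheduled in order of readiness and each job is
    assigned to the machine that completes it earliest.
    """
    n_jobs = len(proc_times[0])
    job_ready = [0.0] * n_jobs  # completion time of each job's previous stage
    for stage in proc_times:
        machine_free = [0.0] * len(stage[0])
        for j in sorted(range(n_jobs), key=lambda j: job_ready[j]):
            # pick the machine with the earliest completion time for job j
            m = min(range(len(stage[j])),
                    key=lambda m: max(machine_free[m], job_ready[j]) + stage[j][m])
            finish = max(machine_free[m], job_ready[j]) + stage[j][m]
            machine_free[m] = finish
            job_ready[j] = finish
    return max(job_ready)

# Two stages, three jobs, two unrelated machines per stage (invented data)
stage1 = [[3, 5], [4, 2], [6, 6]]  # stage1[j][m]
stage2 = [[2, 4], [3, 3], [1, 2]]
print(greedy_makespan([stage1, stage2]))  # → 9.0
```

Because machines are unrelated, the same job can be fast on one machine and slow on another at the same stage, which is exactly the assignment decision the per-stage agents learn to make.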
Title: Large language model-empowered dynamic scheduling for intelligent hybrid flow shop using multi-agent deep reinforcement learning. Advanced Engineering Informatics, Vol. 71, Article 104294.
Pub Date: 2026-04-01 | Epub Date: 2026-02-09 | DOI: 10.1016/j.aei.2026.104427
Chenrui Bai , Wenzhong Shi , Min Zhang , Huilin Zhao
Escalating environmental pressures and traffic loads make the emergence of road defects inevitable, and road networks stretching over thousands of kilometers pose challenges to maintenance work. Vehicle-based fine-grained road-defect segmentation has become an effective support for large-scale, periodic, and high-precision road maintenance. However, the task faces practical challenges, such as the limited computational resources of mobile computing platforms, cluttered background interference, and extremely small regions of interest. To address these challenges and meet the requirements of large-scale, high-precision maintenance in complex urban road scenes, this study proposes a lightweight dual-stream Mamba, called CrackDualMamba. It consists of (i) a dual-stream encoder, named DualMamba, designed to enhance detail awareness while maintaining computational efficiency by combining the complementary strengths of the DeepCrack and Mamba architectures; and (ii) a skip-layer error ablation module (SEAM), introduced to improve cross-scale feature fusion between encoder and decoder outputs. In addition, a novel focal dice balanced (FDB) loss is proposed to address the sample imbalance and small-region-of-interest segmentation challenges inherent in this task. Evaluation on public road-segmentation benchmark datasets (i.e., EdmCrack600 and CrackTree260) confirms the superior performance of the proposed network compared with ten established state-of-the-art models. In conclusion, our research not only provides a new theoretical solution but also exhibits potential for wide application: the lightweight design enables efficient operation on mobile platforms, providing new technical support for road maintenance and management.
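Combining a focal term (to down-weight easy background pixels) with a dice term (to handle the tiny crack region) is the standard recipe behind losses of this family. The minimal numpy sketch below shows one such combination; the alpha/gamma values and the 50/50 blend are illustrative assumptions, not the paper's exact FDB formulation.

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss: (1 - pt)^gamma down-weights easy pixels."""
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)          # prob. assigned to the true class
    w = np.where(y == 1, alpha, 1.0 - alpha)   # class-balance weight
    return np.mean(-w * (1.0 - pt) ** gamma * np.log(pt))

def dice_loss(p, y, eps=1e-7):
    """Soft dice loss: overlap-based, robust to extreme class imbalance."""
    inter = np.sum(p * y)
    return 1.0 - (2.0 * inter + eps) / (np.sum(p) + np.sum(y) + eps)

def focal_dice_loss(p, y):
    # Illustrative 50/50 blend of the two terms.
    return 0.5 * focal_loss(p, y) + 0.5 * dice_loss(p, y)

y = np.array([0, 0, 0, 1])              # tiny region of interest: 1 crack pixel
good = np.array([0.1, 0.1, 0.1, 0.9])   # confident, correct prediction
bad = np.array([0.9, 0.9, 0.9, 0.1])    # confident, wrong prediction
print(focal_dice_loss(good, y) < focal_dice_loss(bad, y))  # → True
```

The dice term dominates when the positive region is vanishingly small, while the focal term keeps gradients flowing from the many background pixels without letting them swamp the loss.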
Title: CrackDualMamba: A lightweight dual-stream Mamba with novel focal dice balanced loss for vehicle-based road crack segmentation. Advanced Engineering Informatics, Vol. 71, Article 104427.
Pub Date: 2026-04-01 | Epub Date: 2026-02-09 | DOI: 10.1016/j.aei.2026.104413
Tianqing Hei, Zheng Tong, Zhiwei Xie, Tao Ma
Maintenance history data contain knowledge of the variation patterns of pavement performance indices, the uncertainty of these patterns, and the boundaries of normal index values in pavement management engineering. Such knowledge is commonly mined by pavement performance prediction models to obtain interpretable representations. At present, owing to task-specific model development, the models used for performance-index prediction and for maintenance-plan decision-making lack interoperability. Consequently, existing prediction models are almost unusable for detecting anomalies in indices, which further increases the workload of scheme development and of model transfer across road networks. In addition, current decision-making models rely on a fixed representation of data uncertainty, whereas real-world data uncertainty is not fixed, so these models remain limited. To address these issues, this study leverages the modeling capability of Bayesian neural networks to develop an upstream all-in-one model, the Bayesian Neural Network for Pavement Performance Prediction (BNN4Pav). A single architecture generates task-specific outputs for three distinct tasks, reducing model development effort from three categories of models to one. Extensible downstream models are constructed for each task, and the upstream–downstream framework is validated with 460 km of maintenance history data from Anhui, Zhejiang, and Jiangsu provinces in China. The results show that the multi-index prediction task with uncertainty quantification achieves a 66.7% reduction in time consumption, and that in the anomaly detection task, where anomalous data are completely detected, the manual workload for normal data can be reduced by approximately 70%–90%.
In maintenance decision-making tasks, the BNN4Pav-based method achieves a 6%–17% improvement in maintenance effectiveness over existing methods, without compromising decision-making requirements.
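A Bayesian network's posterior predictive can be approximated by sampling an ensemble of predictions: the mean gives the index forecast and the spread defines a boundary of "normal" values for anomaly detection. The toy linear decay model, its parameters, and the 3-sigma rule below are illustrative assumptions, not BNN4Pav itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def predictive(age_years, n_samples=500):
    """Monte-Carlo predictive distribution of a pavement index vs. age.

    Illustrative assumption: index ~ 95 - 2*age, with an uncertain decay
    rate (epistemic) plus observation noise (aleatoric)."""
    slope = rng.normal(-2.0, 0.2, n_samples)   # uncertain decay rate
    noise = rng.normal(0.0, 1.0, n_samples)
    samples = 95.0 + slope * age_years + noise
    return samples.mean(), samples.std()

def is_anomalous(observed, age_years, k=3.0):
    """Flag an observation outside the k-sigma predictive boundary."""
    mu, sigma = predictive(age_years)
    return abs(observed - mu) > k * sigma

print(is_anomalous(87.0, 4.0))  # near the expected ~87 → False
print(is_anomalous(60.0, 4.0))  # far below the normal boundary → True
```

The same predictive mean and spread can then feed a downstream decision rule, which is the interoperability the upstream–downstream framework is after.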
Title: An all-in-one performance prediction model for pavement management engineering based on Bayesian Neural Network. Advanced Engineering Informatics, Vol. 71, Article 104413.
Pub Date: 2026-04-01 | Epub Date: 2026-02-06 | DOI: 10.1016/j.aei.2026.104423
Xiaosheng Ni , Jingpu Duan , Xiong Li , Xin Zhang
To address the challenges in Vehicle-to-Infrastructure (V2I) network traffic prediction, this study proposes an innovative solution. We first establish a novel paradigm that integrates physical models to systematically convert publicly available vehicle trajectory data into V2I traffic data. On this basis, a gCNN–BiLSTM–MHA deep learning model is constructed, whose core advantage lies in its use of a lightweight GhostNet-based convolutional network (gCNN) to improve computational efficiency, while leveraging the synergy of a bidirectional long short-term memory network (BiLSTM) and a multi-head attention mechanism (MHA) to balance prediction efficiency and accuracy. The model's superiority is comprehensively validated: compared with baseline models such as LSTM, it demonstrates significant advantages across key evaluation metrics, including running time, MBD, MAE, MAPE, RMSE, and R², achieving an overall balanced performance.
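For readers checking results, the named regression metrics have compact plain-numpy implementations under their common definitions (MBD as mean bias deviation is an assumption here; confirm against the paper's definitions).

```python
import numpy as np

def metrics(y_true, y_pred):
    """Common regression error metrics. MBD is taken as the mean signed
    error (mean bias deviation); verify against the source's definitions."""
    e = y_pred - y_true
    return {
        "MBD": e.mean(),
        "MAE": np.abs(e).mean(),
        "MAPE": np.abs(e / y_true).mean() * 100.0,
        "RMSE": np.sqrt((e ** 2).mean()),
        "R2": 1.0 - (e ** 2).sum() / ((y_true - y_true.mean()) ** 2).sum(),
    }

y_true = np.array([10.0, 20.0, 30.0, 40.0])
y_pred = np.array([11.0, 19.0, 33.0, 38.0])
print(metrics(y_true, y_pred))
```

Note that MAPE is undefined when any true value is zero, which matters for sparse traffic intervals; implementations often add a small epsilon or mask zero-valued targets.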
{"title":"A novel hybrid neural network for high-accuracy vehicle-to-infrastructure network traffic prediction","authors":"Xiaosheng Ni , Jingpu Duan , Xiong Li , Xin Zhang","doi":"10.1016/j.aei.2026.104423","DOIUrl":"10.1016/j.aei.2026.104423","url":null,"abstract":"<div><div>To address the challenges in Vehicle-to-Infrastructure (V2I) network traffic prediction, this study proposes an innovative solution. We first establish a novel paradigm that integrates physical models to systematically convert publicly available vehicle trajectory data into V2I traffic data. On this basis, a gCNN–BiLSTM–MHA deep learning model is constructed, whose core advantage lies in its use of a lightweight GhostNet-based convolutional network (gCNN) to improve computational efficiency, while leveraging the synergistic effect of a bidirectional long short-term memory network (BiLSTM) and a multi-head attention mechanism (MHA) to effectively balance prediction efficiency and accuracy. The model’s superiority is comprehensively validated: compared to baseline models like LSTM, it demonstrates significant advantages across a series of key evaluation metrics — including running time, MBD, MAE, MAPE, RMSE, and <span><math><msup><mrow><mi>R</mi></mrow><mrow><mn>2</mn></mrow></msup></math></span> — achieving an overall balanced performance. 
Furthermore, the model exhibits excellent performance on multiple benchmark datasets, confirming its strong robustness and high applicability for complex V2I network traffic prediction tasks.</div></div>","PeriodicalId":50941,"journal":{"name":"Advanced Engineering Informatics","volume":"71 ","pages":"Article 104423"},"PeriodicalIF":9.9,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146188825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
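Of the three components in the gCNN–BiLSTM–MHA model, the multi-head attention block is the most self-contained. A minimal NumPy sketch of scaled dot-product multi-head self-attention follows; the dimensions and random weights are made up for illustration, and this is the generic mechanism, not the authors' implementation:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, w_q, w_k, w_v, w_o, n_heads):
    """Scaled dot-product multi-head self-attention over one sequence.

    x: (seq_len, d_model); each weight matrix: (d_model, d_model)."""
    seq_len, d_model = x.shape
    d_head = d_model // n_heads
    q, k, v = x @ w_q, x @ w_k, x @ w_v

    def split(t):  # -> (n_heads, seq_len, d_head)
        return t.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)

    q, k, v = split(q), split(k), split(v)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)  # (heads, seq, seq)
    out = softmax(scores) @ v                            # (heads, seq, d_head)
    out = out.transpose(1, 0, 2).reshape(seq_len, d_model)  # concat heads
    return out @ w_o

rng = np.random.default_rng(1)
d_model, seq_len, n_heads = 8, 5, 2
ws = [rng.normal(0, 0.1, (d_model, d_model)) for _ in range(4)]
x = rng.normal(size=(seq_len, d_model))
y = multi_head_attention(x, *ws, n_heads=n_heads)
print(y.shape)  # (5, 8)
```

In the hybrid model described above, such a block would attend over the BiLSTM's per-timestep outputs, letting the predictor weight the most informative time steps.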
Pub Date : 2026-04-01Epub Date: 2025-12-23DOI: 10.1016/j.aei.2025.104263
Xujie Long , Jing Teng , Zhiwei Zhu , Shaobo Zhao , Mengyang Pu , Ruifeng Shi , You Lv , Jonathan Li , Guoqing Jing
The complex geometries, environmental variability, and inconsistent imaging conditions in shield tunnel linings pose substantial challenges to water leakage detection. Existing models heavily rely on extensive annotated data from diverse environments to ensure reliable performance across varying scenarios, which incurs significant time and labor costs in data annotation. To alleviate the annotation burden, we propose Co-MixPL, a novel semi-supervised learning approach that integrates labeled data with pseudo-labels generated by the Mixed Pseudo Label (MixPL) strategy to iteratively update the teacher-student models. Specifically, Co-MixPL integrates an additional head into the MixPL framework to enhance the encoder’s discriminative capability and introduces a Soft Regression method to mitigate the inherent localization bias in pseudo-labeling, refining the regression loss of pseudo-labels through adaptive reliability scores. Remarkably, experiments on the public “water leakage” dataset, Mendeley Data V1, demonstrate that Co-MixPL approaches state-of-the-art (SOTA) performance using only one-seventh of the training data and outperforms the SOTA by 2.8 AP with merely one-third of the annotations. These findings highlight the effectiveness of Co-MixPL in delivering superior detection performance with significantly reduced annotations, thus better meeting the practical demands of engineering inspection and maintenance. Codes are available at https://github.com/LXJ010/Co-MixPL.
{"title":"Co-MixPL: An optimized semi-supervised learning method for tunnel water leakage detection","authors":"Xujie Long , Jing Teng , Zhiwei Zhu , Shaobo Zhao , Mengyang Pu , Ruifeng Shi , You Lv , Jonathan Li , Guoqing Jing","doi":"10.1016/j.aei.2025.104263","DOIUrl":"10.1016/j.aei.2025.104263","url":null,"abstract":"<div><div>The complex geometries, environmental variability, and inconsistent imaging conditions in shield tunnel linings pose substantial challenges to water leakage detection. Existing models heavily rely on extensive annotated data from diverse environments to ensure reliable performance across varying scenarios, which incurs significant time and labor costs in data annotation. To alleviate the annotation burden, we propose Co-MixPL, a novel semi-supervised learning approach that integrates labeled data with pseudo-labels generated by the Mixed Pseudo Label (MixPL) strategy to iteratively update the teacher-student models. Specifically, Co-MixPL integrates an additional head into the MixPL framework to enhance the encoder’s discriminative capability and introduces a Soft Regression method to mitigate the inherent localization bias in pseudo-labeling, refining the regression loss of pseudo-labels through adaptive reliability scores. Remarkably, experiments on the public “water leakage” dataset, Mendeley Data V1, demonstrate that Co-MixPL approaches state-of-the-art (SOTA) performance using only one-seventh of the training data and outperforms the SOTA by 2.8 AP with merely one-third of the annotations. These findings highlight the effectiveness of Co-MixPL in delivering superior detection performance with significantly reduced annotations, thus better meeting the practical demands of engineering inspection and maintenance. 
Codes are available at <span><span>https://github.com/LXJ010/Co-MixPL</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50941,"journal":{"name":"Advanced Engineering Informatics","volume":"71 ","pages":"Article 104263"},"PeriodicalIF":9.9,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145841974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
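The Soft Regression idea, down-weighting the regression loss of pseudo-labels by an adaptive reliability score, can be illustrated with a toy sketch. The `soft_regression_loss` helper, the box values, and the weight normalization are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def smooth_l1(diff, beta=1.0):
    """Huber-style smooth L1, as commonly used for box regression."""
    a = np.abs(diff)
    return np.where(a < beta, 0.5 * a**2 / beta, a - 0.5 * beta)

def soft_regression_loss(pred_boxes, pseudo_boxes, reliability):
    """Per-box regression loss reweighted by a reliability score in [0, 1].

    Low-reliability pseudo-boxes contribute less, mitigating the
    localization bias of noisy pseudo-labels."""
    per_box = smooth_l1(pred_boxes - pseudo_boxes).sum(axis=1)  # (N,)
    w = reliability / max(reliability.sum(), 1e-8)              # normalize
    return (w * per_box).sum()

pred = np.array([[10.0, 10.0, 50.0, 50.0],
                 [ 0.0,  0.0, 20.0, 20.0]])
pseudo = np.array([[12.0, 11.0, 49.0, 52.0],
                   [ 5.0,  3.0, 25.0, 28.0]])  # noisier pseudo-label
rel = np.array([0.9, 0.2])  # second pseudo-box is down-weighted
loss = soft_regression_loss(pred, pseudo, rel)
print(round(float(loss), 3))  # -> 6.727
```

With uniform weights the same boxes would yield a loss of 11.5, so the reliability weighting visibly suppresses the contribution of the poorly localized pseudo-label.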
Pub Date : 2026-04-01Epub Date: 2026-01-13DOI: 10.1016/j.aei.2026.104326
Sang Du , Lei Hou , Guomin (Kevin) Zhang , Yang Zou , Haosen Chen
Modular building design requires numerous context-dependent component variants that traditional constraint-based methods cannot exhaustively enumerate. Industry Foundation Classes (IFC) models encode rich spatial and semantic context from completed modular projects. This context could enable Artificial Intelligence (AI) models to generate component variants and complement constraint-based methods. However, IFC 3D geometry that carries spatial context is not directly usable by AI models. This stems from IFC’s complex data structure. To address this limitation, this paper proposes a readily deployable auto-decoder method that produces AI-compatible vectors from IFC geometry. First, an IFC export strategy that retains component spatial context is employed. Second, a sampling method that pairs 3D points with their distances to the nearest surface is applied. Third, an auto-decoder neural network that jointly optimises per-component vectors and the model weights is presented, yielding context-aware representation vectors for modular components. Finally, an octree-based decoder for accurate geometry recovery from vectors is employed. Experiments on real-world modular project data demonstrate that the resulting vectors preserve geometric fidelity and support component variant generation. Geometric fidelity is confirmed by the mean and maximum surface reconstruction errors of 14.57 mm and 51.94 mm, sufficient for modular building design analysis. Support for component variant generation is evidenced by geometric interpolation linearity exceeding 0.98 out of 1, showing excellent variant generation suitability. This method makes IFC spatial context accessible to AI-driven modular design methods, transforming Design for Manufacture and Assembly (DfMA) data into actionable knowledge. Code is available on GitHub.
{"title":"Enabling AI-driven modular building design: an auto-decoder approach for IFC 3D geometry representation","authors":"Sang Du , Lei Hou , Guomin (Kevin) Zhang , Yang Zou , Haosen Chen","doi":"10.1016/j.aei.2026.104326","DOIUrl":"10.1016/j.aei.2026.104326","url":null,"abstract":"<div><div>Modular building design requires numerous context-dependent component variants that traditional constraint-based methods cannot exhaustively enumerate. Industry Foundation Classes (IFC) models encode rich spatial and semantic context from completed modular projects. This context could enable Artificial Intelligence (AI) models to generate component variants and complement constraint-based methods. However, IFC 3D geometry that carries spatial context is not directly usable by AI models. This stems from IFC’s complex data structure. To address this limitation, this paper proposes a readily deployable auto-decoder method that produces AI-compatible vectors from IFC geometry. First, an IFC export strategy that retains component spatial context is employed. Second, a sampling method that pairs 3D points with their distances to the nearest surface is applied. Third, an auto-decoder neural network that jointly optimises per-component vectors and the model weights is presented, yielding context-aware representation vectors for modular components. Finally, an octree-based decoder for accurate geometry recovery from vectors is employed. Experiments on real-world modular project data demonstrate that the resulting vectors preserve geometric fidelity and support component variant generation. Geometric fidelity is confirmed by the mean and maximum surface reconstruction errors of 14.57 mm and 51.94 mm, sufficient for modular building design analysis. Support for component variant generation is evidenced by geometric interpolation linearity exceeding 0.98 out of 1, showing excellent variant generation suitability. 
This method makes IFC spatial context accessible to AI-driven modular design methods, transforming Design for Manufacture and Assembly (DfMA) data into actionable knowledge. Codes available on GitHub.</div></div>","PeriodicalId":50941,"journal":{"name":"Advanced Engineering Informatics","volume":"71 ","pages":"Article 104326"},"PeriodicalIF":9.9,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
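The paper's second step, pairing sampled 3D points with their distance to the nearest surface, is the standard signed-distance sampling used by auto-decoder shape methods. A minimal sketch on an analytic sphere follows; the sphere stands in for IFC component geometry, which is not reproduced here, and the sampling bounds are arbitrary:

```python
import numpy as np

def sphere_sdf(points, radius=1.0):
    """Signed distance from points (N, 3) to a sphere surface centred at
    the origin: negative inside, zero on the surface, positive outside."""
    return np.linalg.norm(points, axis=1) - radius

def sample_sdf_pairs(n, radius=1.0, seed=0):
    """Pair uniformly sampled 3D points with their signed distance to the
    nearest surface, the (point, distance) training pairs an auto-decoder
    consumes."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-1.5, 1.5, size=(n, 3))
    return pts, sphere_sdf(pts, radius)

pts, d = sample_sdf_pairs(1000)
inside = d < 0
print(inside.mean())  # fraction of samples that fall inside the sphere
```

An auto-decoder then jointly optimises one latent vector per shape and shared decoder weights so that the decoder maps (latent, point) to the sampled distance, which is what yields the per-component representation vectors described above.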