The role of transportation systems in ensuring equitable access to essential services, and in restoring that access promptly after a disaster, is critical to community resilience. This research introduces a framework for strengthening transportation systems against external shocks, with an emphasis on geographical equity. To evaluate equity and address multiple network design objectives, we develop a two-level consolidated resilience index that measures network performance and community equity, employing a data-driven analytic hierarchy process for objective metric weighting that surpasses traditional expert scoring methods. Furthermore, we implement an equity-weighted Shapley value method to prioritize candidate links prior to investment. Finally, we establish a multi-objective bi-level program that integrates traffic distribution and travel behavior analysis. Our findings reveal that integrating equity considerations into the candidate link selection phase significantly enhances fairness outcomes. The results also underscore the inseparable relationship between pursuing fairness and efficiency. This framework could extend to investment strategies for other transportation systems during the preparation phase, contributing to broader applications in resilience planning.
Tingting Zhang, Chence Niu, Divya Jayakumar Nair, Vinayak Dixit, S. Travis Waller. "Integrating geographical equity and travel behavior dynamics into resilience enhancement of transport networks." Computer-Aided Civil and Infrastructure Engineering 40(30), pp. 5927-5951, 2025-11-17. DOI: 10.1111/mice.70128. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/mice.70128
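The equity-weighted Shapley value method mentioned in the abstract can be illustrated with a minimal brute-force sketch; the coalition value function, link names, and equity weights below are hypothetical stand-ins, not the paper's actual network model:

```python
from itertools import combinations
from math import factorial

def shapley(links, value, weight=None):
    """Exact Shapley value of each candidate link under a coalition value
    function `value` (set of links -> network performance gain). `weight`
    is an optional per-link equity weight (hypothetical) applied to each
    link's Shapley contribution before ranking."""
    n = len(links)
    phi = {}
    for i in links:
        others = [l for l in links if l != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                # Classic Shapley coefficient |S|! (n-|S|-1)! / n!
                coeff = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += coeff * (value(S | {i}) - value(S))
        phi[i] = total if weight is None else weight[i] * total
    return phi
```

For an additive value function the Shapley value of each link reduces to its own benefit, which makes the implementation easy to sanity-check before plugging in a real (synergistic) network performance model.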
Accurate 3D segmentation of hydro power plant (HPP) components from point cloud data is essential for building high-fidelity digital twin systems that enable automation in construction, monitoring, and maintenance. However, existing point cloud segmentation methods suffer from high annotation costs. To address this challenge, a fully automated segmentation framework is proposed that assigns 3D semantic labels directly to unannotated point cloud data using only a textual prompt, without prior training on HPP-specific data. Experiments on six real-world HPP scenarios demonstrate superior performance compared to state-of-the-art zero-shot baselines, with an average positive ratio of 72.56% and negative ratio of 20.45%, while significantly reducing the human effort and time required for segmentation. This study advances automation in construction by providing a practical, annotation-free solution for large-scale, fine-grained 3D segmentation of complex HPP environments, laying the foundation for efficient, intelligent digital twin creation and automated decision support in hydropower engineering.
Yang Su, Weiwei Chen, Jiaxin Ling, Diran Yu. "Zero-shot point cloud segmentation for hydro power plant components." Computer-Aided Civil and Infrastructure Engineering 40(31), pp. 6261-6278, 2025-11-17. DOI: 10.1111/mice.70150. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/mice.70150
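The positive and negative ratios reported above are not defined in the abstract; one plausible reading treats them as the fraction of points labeled correctly versus labeled incorrectly, with some points left unlabeled (so the two need not sum to one). A sketch of that reading:

```python
import numpy as np

def label_agreement(pred, gt, unlabeled=-1):
    """Toy agreement metrics between predicted and reference point labels.
    `pred` may leave points unlabeled (marked `unlabeled`). NOTE: the exact
    metric definitions in the paper may differ; this is only an
    illustrative interpretation."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    labeled = pred != unlabeled
    pos = float(np.mean(labeled & (pred == gt)))   # labeled and correct
    neg = float(np.mean(labeled & (pred != gt)))   # labeled but wrong
    return pos, neg
```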
Chaemin Hwang, Seunghun Yang, Younseo Kim, Sangwoo Park, Hangseok Choi
Inspecting the backfill grout behind segment linings using ground-penetrating radar (GPR) is essential for the maintenance of shield tunnels. However, reinforcing rebars embedded in the segment linings generate strong clutter in GPR data, which obscures the detection of defect signals within the backfill grout. In addition, acquiring sufficient and consistent GPR data to train deep learning models is challenging due to restricted site access and variability in tunnel environments. To address these limitations, this study proposes a simulation-driven deep learning network for clutter elimination and defect detection in GPR images. A training database was constructed exclusively through finite-difference time-domain numerical simulations of segment linings containing backfill grout defects. This configuration provides a standardized and well-controlled dataset for training and evaluating the network. Several architectures within the encoder-decoder framework, including U-Net 3+, were employed to develop models for eliminating rebar clutter. The quality of the reconstructed GPR B-scans was assessed using image-quality and quantitative metrics, with U-Net 3+ demonstrating the highest accuracy. The findings confirm that realistic GPR signal characteristics can be learned and generalized from simulation-based data without relying on extensive field data. Finally, GPR B-scans collected from a full-scale tunnel lining segment were reconstructed using the proposed network to verify its practical applicability. This study demonstrates the feasibility of transferring learning from simulation-only data to real-world engineering applications, enabling more effective backfill grout inspection and supporting efficient maintenance.
Chaemin Hwang, Seunghun Yang, Younseo Kim, Sangwoo Park, Hangseok Choi. "Simulation-driven deep learning for rebar clutter elimination in ground-penetrating radar images to detect backfill grout defects in segment linings." Computer-Aided Civil and Infrastructure Engineering 40(31), pp. 6722-6740, 2025-11-16. DOI: 10.1111/mice.70142
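For context on the clutter problem, a classic non-learned baseline is mean-trace background subtraction, which removes horizontally coherent reflections (such as a regular rebar pattern averaged across traces) from a B-scan. This is a standard GPR preprocessing step offered only for orientation, not the paper's network:

```python
import numpy as np

def remove_background(bscan):
    """Mean-trace subtraction on a GPR B-scan of shape (samples, traces):
    subtracting each row's mean across traces suppresses horizontally
    coherent clutter while leaving localized (defect-like) anomalies
    mostly intact."""
    bscan = np.asarray(bscan, dtype=float)
    return bscan - bscan.mean(axis=1, keepdims=True)
```

Unlike the learned encoder-decoder models in the study, this baseline also attenuates any genuinely horizontal defect signature, which is one motivation for data-driven clutter removal.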
With the rapid development of transportation infrastructure, highway interchanges have become critical nodes in the network, where congestion frequently occurs. Existing research on congestion mitigation in such regions is limited, often focusing on individual bottlenecks and overlooking interactions between diverging and merging areas. Other studies adopt a macroscopic traffic flow perspective without microscopically resolving the right-of-way conflicts caused by vehicles' lane-changing demands. To address this gap, this study proposes a collaborative multi-lane scheduling strategy for connected and automated vehicles under a cloud control system. By jointly optimizing vehicle passing sequences in both diverging and merging zones, the proposed method improves overall traffic efficiency. Key contributions include a rolling traversal mechanism for global scheduling, a discretionary lane-changing strategy for enhanced lane utilization, and a double-checked trajectory planning approach that balances efficiency and comfort. This framework offers a scalable solution to alleviate congestion at complex highway interchanges under high traffic demand.
Pengfei Li, Yihe Chen, Yunhao Hu, Jia Shi, Keqiang Li, Yugong Luo. "Collaborative multi-lane scheduling strategy for connected and automated vehicles on highway interchange using rolling traversal scheduling." Computer-Aided Civil and Infrastructure Engineering 40(30), pp. 6127-6148, 2025-11-16. DOI: 10.1111/mice.70141
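The core idea of optimizing a passing sequence at a single conflict point can be illustrated with a toy brute-force search; this is only a conceptual stand-in for the paper's rolling traversal scheduling, with an assumed fixed minimum headway:

```python
from itertools import permutations

def best_sequence(arrivals, headway=2.0):
    """Choose the passing order at one merge point that minimizes total
    delay, given each vehicle's earliest arrival time and a minimum
    headway between successive vehicles. Brute force over all orders,
    so only suitable as an illustration for small vehicle batches."""
    best, best_delay = None, float("inf")
    for order in permutations(range(len(arrivals))):
        t, delay = -float("inf"), 0.0
        for v in order:
            t = max(arrivals[v], t + headway)  # earliest feasible pass time
            delay += t - arrivals[v]
        if delay < best_delay:
            best, best_delay = list(order), delay
    return best, best_delay
```

A rolling implementation would re-run such a search over a sliding batch of approaching vehicles, committing only the head of the sequence each step.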
Domen Šoberl, Jan Kalin, Andrej Anžlin, Maja Kreslin, Klen Čopič Pucihar, Matjaž Kljun, Doron Hekič, Aleš Žnidarič
Heavy goods vehicles (HGVs) have a significant impact on road and bridge infrastructure, with overloaded vehicles accelerating structural deterioration and increasing safety risks. Bridge weigh-in-motion (B-WIM) systems estimate gross vehicle weight (GVW) using strain measurements, but inaccuracies in axle configuration recognition can reduce reliability. This study presents a low-cost computer vision (CV) extension for existing B-WIM installations that verifies strain-inferred axle configurations using traffic camera images and flags GVW estimates as reliable or unreliable. Experiments on a data set of over 30,000 HGV records show that by combining convolutional neural networks with strain-based heuristics, GVW reliability can improve from 96.7% to 99.89%, effectively excluding nearly all erroneous measurements. The approach operates without interrupting ongoing B-WIM operations and can be applied retrospectively to historical data. Limitations include the inability to detect raised axles (RAs), which the method excludes as unreliable. This method provides a practical, high-precision enhancement for structural health monitoring of bridges.
Domen Šoberl, Jan Kalin, Andrej Anžlin, Maja Kreslin, Klen Čopič Pucihar, Matjaž Kljun, Doron Hekič, Aleš Žnidarič. "Enhanced precision in axle configuration inference for bridge weigh-in-motion systems using computer vision and deep learning." Computer-Aided Civil and Infrastructure Engineering 40(30), pp. 6201-6216, 2025-11-16. DOI: 10.1111/mice.70144. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/mice.70144
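The reliable/unreliable flagging step can be sketched as a simple agreement rule between the strain-inferred and vision-inferred axle configurations; the rule and argument names here are illustrative, not the published heuristic:

```python
def flag_gvw(strain_axles, cv_axles, raised_axle_suspected=False):
    """Hypothetical flagging rule: accept a GVW estimate only when the
    strain-based and camera-based axle counts agree and no raised axle is
    suspected (the paper excludes raised-axle cases as unreliable, since
    the method cannot detect them)."""
    if raised_axle_suspected:
        return "unreliable"
    return "reliable" if strain_axles == cv_axles else "unreliable"
```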
Chenyu Zhang, Charlotte Liu, Ke Li, Zhaozheng Yin, Ruwen Qin
Accurately classifying damage levels from structural inspection images is critical for automated infrastructure assessment. Although deep neural networks achieve impressive performance, their black-box nature limits explainability, and prior studies using Grad-CAM often yield coarse or inaccurate saliency maps. To overcome these limitations, this paper introduces XIDLE-Net, a multitask model that simultaneously performs damage classification and saliency map prediction to enhance explainability in structural damage assessment. Combining a Swin Transformer encoder with a convolutional neural network decoder, XIDLE-Net is trained with dual supervision using damage labels and inspector gaze-derived attention maps, enhancing both classification accuracy and model explainability. Experimental results show that XIDLE-Net outperforms state-of-the-art methods in both classification and saliency explainability, achieving 78.1% accuracy, 94.3% area under the curve (AUC), and a 39.7% improvement in saliency prediction over ResNet-50 with Grad-CAM. To our knowledge, this is one of the first investigations to employ large-scale inspector gaze data for supervision and to quantitatively evaluate Grad-CAM in structural image classification. The results highlight the promise of human gaze data for advancing explainable vision-based structural health monitoring.
Chenyu Zhang, Charlotte Liu, Ke Li, Zhaozheng Yin, Ruwen Qin. "Inspector gaze-guided multitask learning for explainable structural damage assessment." Computer-Aided Civil and Infrastructure Engineering 40(30), pp. 5824-5841, 2025-11-14. DOI: 10.1111/mice.70131
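The dual-supervision training signal can be sketched as a weighted sum of a classification loss on the damage label and a regression loss between the predicted saliency map and the gaze-derived attention map; the specific losses and weighting in the paper may differ from this minimal version:

```python
import numpy as np

def multitask_loss(logits, label, sal_pred, gaze_map, lam=1.0):
    """Cross-entropy on the damage class plus lambda-weighted MSE between
    the predicted saliency map and the inspector gaze-derived map
    (illustrative form of dual supervision, not the paper's exact loss)."""
    z = logits - logits.max()                       # numerical stability
    log_probs = z - np.log(np.exp(z).sum())         # log-softmax
    ce = -log_probs[label]                          # classification term
    mse = np.mean((sal_pred - gaze_map) ** 2)       # saliency term
    return float(ce + lam * mse)
```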
Tao Zhong, Yujie Lu, Zongjun Xia, Zhifei Chen, Shuo Wang, Yifei Wang
Three-dimensional (3D) site models form the digital foundation for modern construction management. However, creating these models from multi-source imagery presents two key challenges: accurately georeferencing camera poses during wide-view acquisition and precisely aligning multiple point clouds that possess non-uniform accuracy. This paper proposes a two-stage framework to address these challenges. The first stage performs local-to-world registration by integrating ground control points, detected via an enhanced HA-YOLOv8, as early-stage constraints in the 3D reconstruction process. The second stage, inter-model alignment, introduces a novel edge-aware method that utilizes refined structural edge features to merge local models. The framework was validated using images from crane cameras on a high-rise project, achieving a final modeling accuracy of 0.121 m for the main structures, resulting from precise registrations with low translation (0.102 m) and rotation (0.051°) errors. This approach provides a robust solution for generating high-fidelity 3D site models, supporting advanced digital construction applications.
Tao Zhong, Yujie Lu, Zongjun Xia, Zhifei Chen, Shuo Wang, Yifei Wang. "A depth–spatial alignment method for multi-source point clouds on large-scale construction sites." Computer-Aided Civil and Infrastructure Engineering 40(29), pp. 5719-5746, 2025-11-14. DOI: 10.1111/mice.70120. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/mice.70120
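A standard building block for this kind of registration is least-squares rigid alignment (the Kabsch/Procrustes solution), shown below as a minimal sketch; the paper's edge-aware method adds structural edge features and ground control point constraints on top of this basic idea:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid alignment of corresponding point sets:
    returns rotation R and translation t such that Q ~= P @ R.T + t."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```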
While machine learning (ML) has advanced image-based damage detection, a critical gap remains: the automated translation of detected damage into standardized condition ratings used in structural assessments. Most existing approaches stop at semantic segmentation, overlooking the damage rating step essential for practical inspections. This paper presents a semiautomated system that bridges this gap by linking multi-label damage segmentation with condition rating prediction. Our contributions are: (1) a data-driven label taxonomy for damage segmentation, derived from statistical and semantic analysis of 2.2 million inspection records, and designed to support downstream condition rating; (2) a pipeline for converting textual inspection records into structured training data for automated condition rating, and a set of custom bidirectional long short-term memory (LSTM) models achieving up to