Biomechanical assessment of a passive back exoskeleton using vision-based motion capture and virtual modeling
Pub Date: 2025-02-10 | DOI: 10.1016/j.autcon.2025.106035 | Automation in Construction, Volume 172, Article 106035
Yuan Zhou, JoonOh Seo, Yue Gong, Kelvin HoLam Heung, Masood Khan, Ting Lei
This paper proposes a video-driven biomechanical analysis method for measuring muscular loads influenced by wearing an exoskeleton suit, combining vision-based motion capture and virtual modeling approaches. Motion data obtained from site videos are integrated with a newly developed human-exoskeleton model in biomechanical software to simulate muscular loads on the human body and evaluate exoskeleton suits. The method has been validated through experimental tests in which simulated and directly measured muscle activations were compared for four types of lifting tasks. The results indicate that the method successfully estimates neuromuscular activations of the low back muscles with and without an exoskeleton suit, though the effect of the exoskeleton tends to be overestimated in simulations. Despite this limitation, the proposed method is expected to support efficient evaluation of exoskeleton use in practice, thereby facilitating the wider adoption of passive exoskeletons in construction.
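Validation of this kind of pipeline hinges on comparing simulated activation traces against EMG measurements. The sketch below shows one plausible way to score that agreement; the array names, toy signals, and metrics (RMSE, Pearson r) are illustrative assumptions, not material from the paper.

```python
# Minimal sketch: comparing simulated and EMG-measured muscle activations for
# lifting trials with and without an exoskeleton. Names, signals, and metrics
# are illustrative assumptions, not the paper's code or data.
import numpy as np

def compare_activation(simulated: np.ndarray, measured: np.ndarray):
    """Both inputs are 1-D activation traces resampled to the same length, scaled 0-1."""
    rmse = float(np.sqrt(np.mean((simulated - measured) ** 2)))
    r = float(np.corrcoef(simulated, measured)[0, 1])
    return rmse, r

# Toy example: one lifting trial without and one with the exoskeleton
t = np.linspace(0, 1, 200)
measured_no_exo = 0.60 * np.sin(np.pi * t) ** 2
simulated_no_exo = 0.55 * np.sin(np.pi * t) ** 2 + 0.02
measured_exo = 0.45 * np.sin(np.pi * t) ** 2    # exoskeleton lowers the measured peak load
simulated_exo = 0.30 * np.sin(np.pi * t) ** 2   # simulation overestimates the reduction

for label, sim, emg in [("no exoskeleton", simulated_no_exo, measured_no_exo),
                        ("exoskeleton", simulated_exo, measured_exo)]:
    rmse, r = compare_activation(sim, emg)
    print(f"{label}: RMSE={rmse:.3f}, r={r:.3f}")
```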
{"title":"Biomechanical assessment of a passive back exoskeleton using vision-based motion capture and virtual modeling","authors":"Yuan Zhou, JoonOh Seo, Yue Gong, Kelvin HoLam Heung, Masood Khan, Ting Lei","doi":"10.1016/j.autcon.2025.106035","DOIUrl":"10.1016/j.autcon.2025.106035","url":null,"abstract":"<div><div>This paper proposes a video-driven biomechanical analysis method for measuring muscular loads influenced by wearing an exoskeleton suit, combining vision-based motion capture and virtual modeling approaches. Motion data obtained from site videos is integrated with a newly developed human-exoskeleton model in biomechanical software, to simulate muscular loads on the human body and evaluate exoskeleton suits. This method has been validated through experimental tests, where simulated and directly measured muscle activations were compared for four types of lifting tasks. The results indicate that this method successfully estimates neuromuscular activations of the low back muscles with and without wearing an exoskeleton suit, though the effect of the exoskeleton suit tends to be overestimated in simulations. Despite this limitation, the proposed method is expected to assist in efficiently evaluating exoskeleton use in practice, thereby facilitating the more widespread adoption of passive exoskeletons in construction.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"172 ","pages":"Article 106035"},"PeriodicalIF":9.6,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143378295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Knowledge-based cross-modal fusion for long-term forecasting of grouting construction parameters using large language model
Pub Date: 2025-02-10 | DOI: 10.1016/j.autcon.2025.106036 | Automation in Construction, Volume 172, Article 106036
Tianhong Zhang , Hongling Yu , Xiaoling Wang , Jiajun Wang , Binyu Ren
Accurate long-term forecasting of grouting construction parameters is essential for foundation safety and the advancement of grouting automation. Existing methods generalize poorly because of diverse equipment and complex geological conditions. This paper addresses these challenges by proposing knowledge-based cross-modal fusion for long-term forecasting of grouting parameters using a large language model (KG-LLM). The method captures the variations and relationships among grouting parameters by integrating domain-specific knowledge through construction knowledge and cross-prompts. A cross-modal fusion method fuses knowledge-driven prompts and multi-scale time embeddings into a frozen LLM, ensuring high prediction accuracy and generalization. Case studies on three projects validate the predictive performance and cross-project generalization of KG-LLM, with notable improvements in parameter prediction. KG-LLM adapts quickly to other projects without further training and is not constrained by equipment type. Moreover, the method is compatible with any LLM, offering a scalable solution for advancing intelligent grouting construction.
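To make the cross-modal fusion idea concrete, the following sketch fuses a knowledge-prompt embedding with multi-scale patch embeddings of a parameter time series in front of a frozen backbone. A frozen TransformerEncoder stands in for the LLM, and all dimensions, layer choices, and names are assumptions for illustration rather than the KG-LLM implementation.

```python
# Illustrative sketch (not the paper's KG-LLM): knowledge-prompt tokens are
# concatenated with multi-scale time-series embeddings and passed through a
# frozen backbone; only the embeddings and the forecast head would be trained.
import torch
import torch.nn as nn

class CrossModalFusionSketch(nn.Module):
    def __init__(self, d_model=64, scales=(1, 4, 16), horizon=96):
        super().__init__()
        # one patch embedding per temporal scale (multi-scale time embedding)
        self.patch_embeds = nn.ModuleList(
            [nn.Conv1d(1, d_model, kernel_size=s, stride=s) for s in scales])
        self.prompt_proj = nn.Linear(d_model, d_model)        # knowledge prompt -> token space
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        for p in self.backbone.parameters():                   # "frozen LLM": no gradient updates
            p.requires_grad = False
        self.head = nn.Linear(d_model, horizon)                # long-term forecast head

    def forward(self, series, prompt_emb):
        # series: (batch, 1, length); prompt_emb: (batch, n_prompt_tokens, d_model)
        tokens = [pe(series).transpose(1, 2) for pe in self.patch_embeds]
        tokens = torch.cat([self.prompt_proj(prompt_emb)] + tokens, dim=1)
        hidden = self.backbone(tokens)
        return self.head(hidden.mean(dim=1))                   # (batch, horizon)

model = CrossModalFusionSketch()
forecast = model(torch.randn(2, 1, 256), torch.randn(2, 8, 64))
print(forecast.shape)  # torch.Size([2, 96])
```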
{"title":"Knowledge-based cross-modal fusion for long-term forecasting of grouting construction parameters using large language model","authors":"Tianhong Zhang , Hongling Yu , Xiaoling Wang , Jiajun Wang , Binyu Ren","doi":"10.1016/j.autcon.2025.106036","DOIUrl":"10.1016/j.autcon.2025.106036","url":null,"abstract":"<div><div>Accurate long-term forecasting of grouting construction parameters is essential for foundation safety and the advancement of grouting automation. Existing methods have limited generalization due to diverse equipment and complex geological conditions. This paper addressed these challenges by proposing the Knowledge-based cross-modal fusion for long-term forecasting of Grouting parameters using Large Language Model (KG-LLM). This method captured the variations and relationships among grouting parameters by integrating domain-specific knowledge through construction knowledge and cross-prompt. A cross-modal fusion method combined knowledge-driven prompts with multi-scale time embedding into the frozen LLM, ensuring high prediction accuracy and generalization. Case studies on three projects validated the predictive performance and cross-engineering generalization of KG-LLM, with notable improvements in the prediction of parameters. KG-LLM quickly adapted to other projects without further training and was not constrained by equipment type. Moreover, this method was compatible with any LLM, offering a scalable solution for advancing the intelligent of grouting construction.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"172 ","pages":"Article 106036"},"PeriodicalIF":9.6,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143378294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural radiance fields for construction site scene representation and progress evaluation with BIM
Pub Date: 2025-02-08 | DOI: 10.1016/j.autcon.2025.106013 | Automation in Construction, Volume 172, Article 106013
Yuntae Jeon , Dai Quoc Tran , Khoa Tran Dang Vo , Jaehyun Jeon , Minsoo Park , Seunghee Park
Efficient progress monitoring is crucial for construction project management to ensure adherence to project timelines and cost control. Traditional methods, which rely on either 3D point cloud data or 2D image transformations, face challenges such as data sparsity in point clouds and the need for extensive human labeling. Recent NeRF-based methods offer high-quality image rendering for accurate evaluation, but challenges remain in comparing as-built scenes with as-planned designs and in measuring actual dimensions. To address these limitations, this paper proposes a NeRF-based scene understanding approach synchronized with BIM. Additionally, a formalized progress evaluation method and the automatic generation of ground-truth masks for comparison, using BIM on NVIDIA Omniverse, are introduced. This approach enables precise progress evaluation using smartphone-captured video, enhancing its applicability and generalizability. Experiments conducted on three different scenes from the concrete pouring process demonstrate that the method achieves a measurement error range of 1% to 2.2% and 8.7 mAE for element-wise segmentation performance in completed scenes. Furthermore, it achieves 5.7 mAE for progress tracking performance in ongoing process scenes. Overall, these findings are significant for improving vision-based progress monitoring and efficiency on construction sites.
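One way such a BIM-synchronized evaluation can be expressed is as a per-element comparison between a segmentation mask rendered from the as-built scene and a BIM-generated ground-truth mask for the same camera pose. The snippet below is a minimal sketch of that comparison; the masks and the progress metric are illustrative assumptions, not the paper's pipeline.

```python
# Minimal sketch: element-wise progress as the fraction of the as-planned
# (BIM-rendered) element area that appears in the as-built segmentation mask.
import numpy as np

def element_progress(as_built_mask: np.ndarray, as_planned_mask: np.ndarray) -> float:
    """Both masks are boolean arrays of the same shape for one camera pose."""
    planned = as_planned_mask.sum()
    if planned == 0:
        return 0.0
    return float(np.logical_and(as_built_mask, as_planned_mask).sum() / planned)

# Toy example: a wall element that is roughly 60 % poured
planned = np.zeros((100, 100), dtype=bool)
planned[20:80, 10:90] = True
built = np.zeros_like(planned)
built[20:56, 10:90] = True
print(f"estimated progress: {element_progress(built, planned):.1%}")
```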
{"title":"Neural radiance fields for construction site scene representation and progress evaluation with BIM","authors":"Yuntae Jeon , Dai Quoc Tran , Khoa Tran Dang Vo , Jaehyun Jeon , Minsoo Park , Seunghee Park","doi":"10.1016/j.autcon.2025.106013","DOIUrl":"10.1016/j.autcon.2025.106013","url":null,"abstract":"<div><div>Efficient progress monitoring is crucial for construction project management to ensure adherence to project timelines and cost control. Traditional methods, which rely on either 3D point cloud data or 2D image transformations, face challenges such as data sparsity in point cloud and the need for extensive human labeling. Recent NeRF-based methods offer high-quality image rendering for accurate evaluation, but challenges remain in comparing as-built scenes with as-planned designs or measuring actual dimensions. To address these limitations, this paper proposes a NeRF-based scene understanding approach synchronized with BIM. Additionally, a formalized progress evaluation method and the automatic generation of ground truth masks for comparison using BIM on NVIDIA Omniverse are introduced. This approach enables precise progress evaluation using smartphone-captured video, enhancing its applicability and generalizability. Experiments conducted on three different scenes from the concrete pouring process demonstrate that our method achieves a measurement error range of 1% to 2.2% and 8.7 mAE for element-wise segmentation performance in completed scenes. Furthermore, it achieves 5.7 mAE for progress tracking performance in ongoing process scenes. Overall, these findings are significant for improving vision-based progress monitoring and efficiency on construction sites.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"172 ","pages":"Article 106013"},"PeriodicalIF":9.6,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143350316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spectral Jump Anomaly Detection: Temperature-compensated algorithm for structural damage detection using vibration data
Pub Date: 2025-02-08 | DOI: 10.1016/j.autcon.2025.106031 | Automation in Construction, Volume 172, Article 106031
Giulio Mariniello, Tommaso Pastore, Domenico Asprone
Assessing the integrity of structural systems throughout their aging process is of paramount importance in infrastructure management. Monitoring these infrastructures presents challenges in distinguishing early damage from slight variations in structural behavior caused by environmental or operational variability.
This paper introduces the Spectral Jump Anomaly Detection (SJ-AD) algorithm, a data-driven method designed to identify minor structural damage using acceleration data collected under considerable environmental variability. SJ-AD focuses on anomalies in the distribution of a distance measure, the minimum jump cost, calculated between power spectra. The method effectively identifies issues in the KW-51 bridge, even with minimal structural defects and varying temperatures. Additionally, numerical experiments show that SJ-AD can detect low damping variations in noisy conditions, demonstrating robustness against minor frequency changes. Its flexible approach and sensitivity to small damage make SJ-AD a promising solution for proactive maintenance and risk management in various structural systems.
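The sketch below illustrates the general idea of spectral-distance anomaly detection on acceleration windows. For brevity it substitutes a plain Euclidean distance between log power spectra for the paper's minimum jump cost, and the signals, sampling rate, and threshold rule are illustrative assumptions.

```python
# Illustrative sketch of the SJ-AD idea: compare power spectra of new acceleration
# windows against a healthy baseline and flag windows whose spectral distance is
# anomalous. The paper's minimum-jump-cost measure is replaced by a Euclidean
# distance between log power spectra, purely for illustration.
import numpy as np
from scipy.signal import welch

def log_spectrum(signal, fs=100.0):
    f, pxx = welch(signal, fs=fs, nperseg=256)
    return np.log10(pxx + 1e-12)

rng = np.random.default_rng(0)
fs = 100.0
t = np.arange(0, 60, 1 / fs)
baseline = [np.sin(2 * np.pi * 3.0 * t) + 0.5 * rng.standard_normal(t.size) for _ in range(20)]
damaged = np.sin(2 * np.pi * 2.5 * t) + 0.5 * rng.standard_normal(t.size)  # shifted dominant frequency

ref = np.mean([log_spectrum(b, fs) for b in baseline], axis=0)
healthy_dists = np.array([np.linalg.norm(log_spectrum(b, fs) - ref) for b in baseline])
threshold = healthy_dists.mean() + 3 * healthy_dists.std()   # simple 3-sigma rule on healthy distances

d_new = np.linalg.norm(log_spectrum(damaged, fs) - ref)
print(f"distance={d_new:.2f}, threshold={threshold:.2f}, anomaly={d_new > threshold}")
```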
{"title":"Spectral Jump Anomaly Detection: Temperature-compensated algorithm for structural damage detection using vibration data","authors":"Giulio Mariniello, Tommaso Pastore, Domenico Asprone","doi":"10.1016/j.autcon.2025.106031","DOIUrl":"10.1016/j.autcon.2025.106031","url":null,"abstract":"<div><div>Assessing the integrity of structural systems throughout their aging process has capital importance in infrastructure management. Monitoring these infrastructures presents challenges in distinguishing early damage from slight variations in the structural behavior caused by environmental or operational variability.</div><div>This paper introduces the Spectral Jump Anomaly Detection (<span>SJ-AD</span>) algorithm, a data-driven method designed to identify minor structural damage using acceleration collected under considerable environmental variability. <span>SJ-AD</span> focuses on anomalies in the distribution of a distance measure, the minimum jump cost, calculated between power spectra. The method effectively identifies issues in the KW-51 bridge, even with minimal structural defects and varying temperatures. Additionally, numerical experiments show that <span>SJ-AD</span> can detect low damping variations in noisy conditions, demonstrating robustness against minor frequency changes. Its flexible approach and sensitivity to small damages make <span>SJ-AD</span> a promising solution for proactive maintenance and risk management in various structural systems.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"172 ","pages":"Article 106031"},"PeriodicalIF":9.6,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143350317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Indoor visual positioning using stationary semantic distribution registration and building information modeling
Pub Date: 2025-02-08 | DOI: 10.1016/j.autcon.2025.106033 | Automation in Construction, Volume 172, Article 106033
Xiaoping Zhou, Yukang Wang, Jichao Zhao, Maozu Guo
Indoor Visual Positioning (IVP) is a prerequisite for applications such as indoor location-based services in smart buildings. Building Information Modeling (BIM), which represents the physical and functional characteristics of buildings, is widely used in IVP. Existing BIM-based IVP methods register visual features from sensed images to BIM but suffer inaccuracies caused by dramatic disturbances from unstable objects such as chairs. Stationary objects such as walls could address this issue and provide a more reliable IVP scheme, yet this possibility remains to be explored. This paper proposes an IVP scheme leveraging stationary object registration from sequential images to BIM, termed Stationary Semantic Distribution-driven Visual Positioning (S2VP). In the offline phase, S2VP generates “stationary semantic distribution-positions” datasets from BIM. During positioning, the stationary semantic distribution of sensed images is first estimated, and the indoor position is computed via a particle filter model. Experiments show that S2VP achieves an average positioning error of 0.37 m, outperforming existing methods.
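As an illustration of the online stage, the sketch below performs one particle-filter update in which candidate positions are weighted by how well a BIM-predicted stationary semantic distribution matches the distribution estimated from the current image. The lookup function, class set, and similarity measure (Bhattacharyya coefficient) are assumptions for illustration, not the S2VP implementation.

```python
# Minimal sketch (not the paper's S2VP code): one particle-filter update that
# weights candidate positions by the match between a BIM-predicted stationary
# semantic distribution and the distribution observed in the current image.
import numpy as np

rng = np.random.default_rng(1)
classes = ["wall", "floor", "ceiling", "door"]

def bim_semantic_distribution(xy):
    """Placeholder for the offline BIM lookup: class distribution expected at position xy."""
    x, y = xy
    raw = np.array([1.0 + 0.1 * x, 0.8, 0.6, 0.2 + 0.1 * y])
    return raw / raw.sum()

def particle_filter_step(particles, weights, observed_dist, motion_noise=0.05):
    particles = particles + rng.normal(0.0, motion_noise, particles.shape)   # motion model
    pred = np.array([bim_semantic_distribution(p) for p in particles])
    likelihood = np.sum(np.sqrt(pred * observed_dist), axis=1)               # Bhattacharyya coefficient
    weights = weights * likelihood
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)         # multinomial resampling
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.uniform(0, 10, size=(500, 2))
weights = np.full(500, 1.0 / 500)
observed = bim_semantic_distribution((6.0, 3.0)) + rng.normal(0, 0.01, 4)
observed = np.clip(observed, 1e-6, None)
observed /= observed.sum()
particles, weights = particle_filter_step(particles, weights, observed)
print("position estimate:", particles.mean(axis=0))
```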
{"title":"Indoor visual positioning using stationary semantic distribution registration and building information modeling","authors":"Xiaoping Zhou , Yukang Wang , Jichao Zhao , Maozu Guo","doi":"10.1016/j.autcon.2025.106033","DOIUrl":"10.1016/j.autcon.2025.106033","url":null,"abstract":"<div><div>Indoor Visual Positioning (IVP) is a prerequisite for applications like indoor location-based services in smart buildings. Building Information Modeling (BIM), representing physical and functional characteristics of buildings, is widely used in IVP. Existing BIM-based IVP methods register visual features from sensed images to BIM but suffer inaccuracies caused by dramatic disturbances from unstable objects like chairs. Stationary objects like walls may address this issue and provide a more reliable IVP scheme, yet it remains to be explored. This paper proposes an IVP scheme leveraging stationary object registration from sequential images to BIM, termed Stationary Semantic Distribution-driven Visual Positioning (S2VP). In the offline phase, S2VP generates “stationary semantic distribution-positions” datasets from BIM. During positioning, the stationary semantic distribution of sensed images is first estimated, and the indoor position is computed via a particle filter model. Experiments show that S2VP achieves an average positioning error of 0.37 m, outperforming existing methods.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"172 ","pages":"Article 106033"},"PeriodicalIF":9.6,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143369764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compaction test of rolled rockfill material using multimodal Rayleigh wave dispersion inversion
Pub Date: 2025-02-07 | DOI: 10.1016/j.autcon.2025.106043 | Automation in Construction, Volume 172, Article 106043
Yao Wang , Hai Liu , Xu Meng , Guiquan Yuan , Huiguo Wang , Ruige Shi , Mengxiong Tang , Billie F. Spencer
This paper investigated the potential of Rayleigh wave multimodal dispersion inversion to advance automatic construction through real-time, in-situ measurement of rockfill compaction. An acquisition system and inversion method were developed to automate the process of obtaining compaction depth profiles and implemented during a dynamic rolling test. A rockfill layer under 2 m thick was tested, with Rayleigh wave data collected after different compaction passes. Multimodal dispersion inversion was used to analyze the material's velocity structure. The results show that multimodal dispersion curves accurately reflect changes in compaction. As compaction increased, the velocity structure transitioned from a complex layered to a uniform single-layered form, with a corresponding rise in the elastic modulus. Furthermore, the calculated Young's modulus exhibited a strong positive correlation with dry density measured by excavation tests. These findings offer an approach for intelligent compaction techniques, contributing to the automation of in-situ compaction monitoring in rockfill construction.
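The last step of such an analysis, converting an inverted shear-wave velocity profile into an elastic modulus, follows standard elasticity relations (E = 2·ρ·Vs²·(1+ν)). The sketch below shows that conversion; the density, Poisson's ratio, and velocity values are illustrative assumptions, not the test data.

```python
# Worked sketch of the conversion step only: shear-wave velocity -> Young's modulus
# via E = 2*rho*Vs^2*(1+nu). Density, Poisson's ratio, and velocities are assumed values.
import numpy as np

def youngs_modulus(vs, rho=2100.0, nu=0.3):
    """vs in m/s, rho in kg/m^3 -> E in MPa."""
    g = rho * vs ** 2                    # shear modulus, Pa
    return 2.0 * g * (1.0 + nu) / 1e6

vs_profile = np.array([180.0, 230.0, 280.0])   # e.g. velocities after successive compaction passes
for passes, vs in zip((2, 4, 8), vs_profile):
    print(f"{passes} passes: Vs={vs:.0f} m/s -> E = {youngs_modulus(vs):.0f} MPa")
```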
{"title":"Compaction test of rolled rockfill material using multimodal Rayleigh wave dispersion inversion","authors":"Yao Wang , Hai Liu , Xu Meng , Guiquan Yuan , Huiguo Wang , Ruige Shi , Mengxiong Tang , Billie F. Spencer","doi":"10.1016/j.autcon.2025.106043","DOIUrl":"10.1016/j.autcon.2025.106043","url":null,"abstract":"<div><div>This paper investigated the potential of Rayleigh wave multimodal dispersion inversion to advance automatic construction through real-time, in-situ measurement of rockfill compaction. An acquisition system and inversion method were developed to automate the process of obtaining compaction depth profiles and implemented during a dynamic rolling test. A rockfill layer under 2 m was tested, with Rayleigh wave data collected after different compaction passes. Multi-mode dispersion inversion was used to analyze the material's velocity structure. The results show that multimodal dispersion curves accurately reflect changes in compaction. As compaction increased, the velocity structure transitioned from a complex layered to a uniform single-layered form, with a corresponding rise in the elastic modulus. Furthermore, the calculated Young's modulus exhibited a strong positive correlation with dry density measured by excavation tests. These findings offer an approach for intelligent compaction techniques, contributing to the automation of in-situ compaction monitoring in rockfill construction.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"172 ","pages":"Article 106043"},"PeriodicalIF":9.6,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143292210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Triple-stage crack detection in stone masonry using YOLO-ensemble, MobileNetV2U-net, and spectral clustering
Pub Date: 2025-02-07 | DOI: 10.1016/j.autcon.2025.106045 | Automation in Construction, Volume 172, Article 106045
Ali Mahmoud Mayya , Nizar Faisal Alkayem
Condition assessment of stone structures is crucial to maintaining their durability. To improve the identification of stone cracks, a triple-stage framework for crack detection, segmentation, and decision-support clustering is proposed. The framework starts with an ensemble of state-of-the-art YOLO models to improve crack detection. The detected crack regions are then fed to an enhanced MobileNetV2U-Net for better crack localization. Thereafter, features are extracted from the detected and segmented stone crack regions, and K-means and spectral clustering are used to categorize crack patterns. Extensive experiments and detailed comparisons are performed to test the proposed approach. Finally, a user-friendly GUI is designed to simplify the use of the proposed framework. Results show that the YOLO ensemble detector and the MobileNetV2U-Net model achieve the best performance on the statistical metrics. Moreover, spectral clustering with five clusters, applied to the detected and segmented crack patterns, proves to be the best-performing configuration.
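The final clustering stage can be reproduced in outline with scikit-learn, as sketched below for five clusters. The crack-region descriptors are synthetic placeholders; only the choice of spectral clustering with five clusters mirrors the configuration reported as best in the abstract.

```python
# Illustrative sketch of the last stage only: grouping extracted crack-region
# descriptors into five pattern categories with spectral clustering.
# The feature values are synthetic placeholders, not data from the paper.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# toy descriptors per detected crack region: [length, mean width, orientation, branchiness]
features = np.vstack([rng.normal(loc=c, scale=0.3, size=(30, 4))
                      for c in ([1, 0.2, 0, 0], [3, 0.5, 1, 0.5], [2, 1.0, 0.5, 1],
                                [0.5, 0.1, 1.5, 0], [4, 0.8, 0.2, 2])])
X = StandardScaler().fit_transform(features)
labels = SpectralClustering(n_clusters=5, affinity="nearest_neighbors",
                            n_neighbors=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels))
```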
{"title":"Triple-stage crack detection in stone masonry using YOLO-ensemble, MobileNetV2U-net, and spectral clustering","authors":"Ali Mahmoud Mayya , Nizar Faisal Alkayem","doi":"10.1016/j.autcon.2025.106045","DOIUrl":"10.1016/j.autcon.2025.106045","url":null,"abstract":"<div><div>Condition assessment of stone structures is crucial to maintain their durability. To improve the identification of stone cracks, a triple-stage framework for crack detection, segmentation, and decision-support clustering is proposed. The framework starts with an ensemble of state-of-the-art YOLO models to improve crack detection. The detected crack regions are then fed to an enhanced MobileNetV2U-Net for better crack localization. Thereafter, features are extracted from the detected and segmented stone crack regions, and the K-means and Spectral clustering are utilized to categorize crack patterns. Intensive experiments and detailed comparisons are performed to test the proposed approach. Finally, a user-friendly GUI is designed to simplify the complexity of the proposed framework. Results prove that the YOLO ensemble detector and MobileNetV2U-Net model exhibit the best performances based on statistical metrics. Moreover, it is proven that spectral clustering using five clusters applied to the detected-segmented crack patterns is the best-employed scenario.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"172 ","pages":"Article 106045"},"PeriodicalIF":9.6,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143350085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated recognition of construction worker activities using multimodal decision-level fusion
Pub Date: 2025-02-07 | DOI: 10.1016/j.autcon.2025.106032 | Automation in Construction, Volume 172, Article 106032
Yue Gong , JoonOh Seo , Kyung-Su Kang , Mengnan Shi
This paper proposes an automated approach for construction worker activity recognition by integrating video and acceleration data, employing a decision-level fusion method that combines classification results from each data modality using the Dempster-Shafer Theory (DS). To address uneven sensor reliability, the Category-wise Weighted Dempster-Shafer (CWDS) approach is further proposed, estimating category-wise weights during training and embedding them into the fusion process. An experimental study with ten participants performing eight construction activities showed that models trained using DS and CWDS outperformed single-modal approaches, achieving accuracies of 91.8% and 95.6%, about 7% and 10% higher than those of vision-based and acceleration-based models, respectively. Category-wise improvements were also observed, indicating that the proposed multimodal fusion approaches result in a more robust and balanced model. These results highlight the effectiveness of integrating vision and accelerometer data through decision-level fusion to reduce uncertainty in multimodal data and leverage the strengths of single sensor-based approaches.
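The core of the fusion step is Dempster's rule of combination applied to the two modalities' classification outputs. The sketch below shows the rule for simple (singleton) hypotheses; the activity labels and mass values are illustrative, and the category-wise weighting of CWDS is omitted.

```python
# Minimal sketch of decision-level fusion with Dempster's rule of combination,
# assuming each modality outputs a mass function over the same activity set.
# Class names and mass values are illustrative, not from the paper.
import numpy as np

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions over singleton hypotheses (no compound sets, for brevity)."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            if a == b:
                combined[a] = combined.get(a, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

vision_mass = {"lifting": 0.55, "carrying": 0.30, "idle": 0.15}   # normalized scores from video
accel_mass  = {"lifting": 0.70, "carrying": 0.10, "idle": 0.20}   # normalized scores from the accelerometer
fused = dempster_combine(vision_mass, accel_mass)
print(max(fused, key=fused.get), fused)
```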
{"title":"Automated recognition of construction worker activities using multimodal decision-level fusion","authors":"Yue Gong , JoonOh Seo , Kyung-Su Kang , Mengnan Shi","doi":"10.1016/j.autcon.2025.106032","DOIUrl":"10.1016/j.autcon.2025.106032","url":null,"abstract":"<div><div>This paper proposes an automated approach for construction worker activity recognition by integrating video and acceleration data, employing a decision-level fusion method that combines classification results from each data modality using the Dempster-Shafer Theory (DS). To address uneven sensor reliability, the Category-wise Weighted Dempster-Shafer (CWDS) approach is further proposed, estimating category-wise weights during training and embedding them into the fusion process. An experimental study with ten participants performing eight construction activities showed that models trained using DS and CWDS outperformed single-modal approaches, achieving accuracies of 91.8% and 95.6%, about 7% and 10% higher than those of vision-based and acceleration-based models, respectively. Category-wise improvements were also observed, indicating that the proposed multimodal fusion approaches result in a more robust and balanced model. These results highlight the effectiveness of integrating vision and accelerometer data through decision-level fusion to reduce uncertainty in multimodal data and leverage the strengths of single sensor-based approaches.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"172 ","pages":"Article 106032"},"PeriodicalIF":9.6,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143349418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Microcrack investigations of 3D printing concrete using multiple transformer networks
Pub Date: 2025-02-07 | DOI: 10.1016/j.autcon.2025.106017 | Automation in Construction, Volume 172, Article 106017
Hongyu Zhao , Xiangyu Wang , Zhaohui Chen , Xianda Liu , Yufei Wang , Jun Wang , Junbo Sun
The extrusion-filament process and formwork-free construction significantly influence microcracks in 3D printing concrete (3DPC). A detailed analysis of these microcracks is essential to improve the overall performance of the material. However, fast, automated methods for capturing and measuring representative microcrack information in 3DPC are currently lacking. This paper presents a transformer-based method for the automatic quantification of microscopic information in 3DPC, enabling a comprehensive analysis of microcracks. Additionally, a transformer network for rapidly and cost-effectively obtaining high-quality microscopic images is introduced. The proposed quantification method adds a range of enhancements to an existing baseline model and detects inner microcracks of 3DPC more accurately than current advanced algorithms. The method surpasses existing microscopic imaging technologies in terms of information content, computational speed, and cost-efficiency. It therefore holds promise for analyzing other micro-details in concrete when supplemented with a diverse and extensive training dataset.
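Once microcracks are segmented, quantification can reduce to simple statistics over the binary mask. The sketch below illustrates that idea with a synthetic mask; it is not the paper's transformer pipeline, and the chosen metrics (crack count, area ratio, per-crack pixel size) are assumptions for illustration.

```python
# Illustrative sketch (not the paper's transformer pipeline): once microcracks are
# segmented into a binary mask, per-image statistics such as crack count and area
# ratio can be quantified with connected-component analysis.
import numpy as np
from scipy import ndimage

mask = np.zeros((200, 200), dtype=bool)          # synthetic segmentation result
mask[50, 20:180] = True                          # a long horizontal microcrack
mask[100:160, 120] = True                        # a shorter vertical microcrack

labels, n_cracks = ndimage.label(mask)
area_ratio = mask.sum() / mask.size
sizes = ndimage.sum(mask, labels, index=range(1, n_cracks + 1))  # pixel count per crack
print(f"cracks: {n_cracks}, area ratio: {area_ratio:.4%}, sizes (px): {sizes.astype(int)}")
```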
{"title":"Microcrack investigations of 3D printing concrete using multiple transformer networks","authors":"Hongyu Zhao , Xiangyu Wang , Zhaohui Chen , Xianda Liu , Yufei Wang , Jun Wang , Junbo Sun","doi":"10.1016/j.autcon.2025.106017","DOIUrl":"10.1016/j.autcon.2025.106017","url":null,"abstract":"<div><div>Extrusion-filament and no-framework craft significantly influence microcracks in 3D printing concrete (3DPC). A detailed analysis of these microcracks is essential to improve overall performance of material. However, fast and automated methods for capturing and measuring representative microcrack information in 3DPC are currently lacking. This paper presents a transformer based method for automatic quantization of microcosmic information in 3DPC, enabling a comprehensive analysis of microcracks. Additionally, a transformer network to rapidly and cost-effectively obtain high-quality microscopic images is introduced. The proposed quantization method involves a range of enhancement tactics over an existing baseline model, demonstrating higher accuracy in detecting inner microcracks of 3DPC compared to current advanced algorithms. This method surpasses existing microscopic imaging technologies in terms of information content, computational speed, and cost-efficiency. Therefore, this method will have promising applications for analyzing other micro-details in concrete when it is supplemented with a diverse and extensive training dataset.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"172 ","pages":"Article 106017"},"PeriodicalIF":9.6,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143349417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cost-effective LiDAR for pothole detection and quantification using a low-point-density approach
Pub Date: 2025-02-07 | DOI: 10.1016/j.autcon.2025.106006 | Automation in Construction, Volume 172, Article 106006
Ali Faisal, Suliman Gargoum
Pothole-induced vehicle damage and accidents have increased significantly in recent years, motivating an urgent need for effective detection and maintenance strategies. This paper introduces an algorithm optimized for low-cost LiDAR sensors that improves the detection and quantification of potholes on road surfaces. The algorithm uses curvature-based analysis to detect potholes in spatially thinned, structured LiDAR datasets and assesses their size through boundary delineation and voxelization. Testing on high-resolution LiDAR scans in Edmonton, Alberta demonstrated consistent detection of varying pothole sizes and shapes, with measurements matching manual LiDAR analysis. Statistical sensitivity analysis revealed that significantly reducing point density, to 205 points/m² (ppsm), had no measurable impact on detection and geometric assessment accuracy, with measurement errors consistently within 3%–10%. The algorithm proved highly efficient, with processing times of 88 s/km and 23 s/km for test segments with reduced point density, suggesting potential integration with city fleet vehicles for continuous and automated road maintenance monitoring.
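A simplified version of the quantification idea, fitting the local road plane, keeping points that fall clearly below it, and accumulating voxel-column depths into a volume, is sketched below. The synthetic point cloud, thresholds, and cell size are assumptions for illustration and do not reproduce the paper's curvature-based algorithm.

```python
# Minimal sketch under stated assumptions: fit the local road plane by least squares,
# take points lying clearly below it as the pothole, and approximate its volume by
# summing per-cell maximum depths over a coarse grid. Not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(3)
pts = rng.uniform(-1, 1, size=(4000, 2))                      # x, y over a 2 m x 2 m road patch
z = 0.01 * pts[:, 0] + 0.002 * rng.standard_normal(4000)      # gently sloped road surface
inside = np.linalg.norm(pts - [0.2, -0.1], axis=1) < 0.25     # synthetic 0.5 m-wide pothole
z[inside] -= 0.04                                             # 4 cm deep
cloud = np.column_stack([pts, z])

A = np.column_stack([cloud[:, 0], cloud[:, 1], np.ones(len(cloud))])
coef, *_ = np.linalg.lstsq(A, cloud[:, 2], rcond=None)        # plane z ~ a*x + b*y + c
residual = cloud[:, 2] - A @ coef
below = residual < -0.015                                     # points clearly below the plane
pothole_pts = cloud[below]

cell = 0.05                                                   # 5 cm grid cells
ij = np.floor(pothole_pts[:, :2] / cell).astype(int)
depth_per_cell = {}
for key, d in zip(map(tuple, ij), -residual[below]):
    depth_per_cell[key] = max(depth_per_cell.get(key, 0.0), d)
volume = sum(depth_per_cell.values()) * cell ** 2
print(f"estimated pothole volume: {volume * 1000:.1f} liters")
```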
{"title":"Cost-effective LiDAR for pothole detection and quantification using a low-point-density approach","authors":"Ali Faisal, Suliman Gargoum","doi":"10.1016/j.autcon.2025.106006","DOIUrl":"10.1016/j.autcon.2025.106006","url":null,"abstract":"<div><div>Pothole-induced vehicle damage and accidents have significantly increased recently, motivating urgent needs for effective detection and maintenance strategies. This paper introduces an algorithm optimized for low-cost LiDAR sensors that improves the detection and quantification of potholes on road surfaces. The algorithm uses curvature-based analysis to detect potholes in spatially thinned, structured LiDAR datasets and assesses their size through boundary delineation and voxelization. Testing on high-resolution LiDAR scans in Edmonton, Alberta demonstrated consistent detection of varying pothole sizes and shapes, with measurements matching manual LiDAR analysis. Statistical sensitivity analysis revealed that reducing point density significantly to 205 points/m<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span> (ppsm) had no measurable impact on detection and geometric assessment accuracy, maintaining measurement errors consistently within 3%–10%. The algorithm proved highly efficient with processing times of 88”/km and 23”/km for test segments with reduced point density, suggesting potential integration with city fleet vehicles for continuous and automated road maintenance monitoring.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"172 ","pages":"Article 106006"},"PeriodicalIF":9.6,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143349419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}