Digital tool integrations for architectural reuse of salvaged building materials
Pub Date: 2024-12-24 | DOI: 10.1016/j.autcon.2024.105947
Malgorzata A. Zboinska, Frederik Göbel
Building material reuse can reduce the environmental impact of construction, yet its advanced digital support is still limited. Which digital tools could effectively support the repair of highly irregular, salvaged materials? To probe this question, a framework featuring six advanced digital tools is proposed and verified through six design and prototyping experiments. The experiments demonstrate that a digital toolkit integrating photogrammetry, robot vision, machine learning, computer vision, computational design, and robotic 3D printing effectively supports the repair and recovery of irregular reclaimed materials, enabling their robust digitization, damage detection, and feature-informed computational redesign and refabrication. These findings contribute to the advancement of digitally aided reuse practices in the construction sector, providing valuable insights into accommodating highly heterogeneous reclaimed materials by leveraging advanced automation and digitization. They provide the crucial and currently missing technological and methodological foundation needed to inform future research on industrial digital solutions for reuse.
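As a rough illustration of how such a toolkit could be chained, the sketch below wires the stages named in the abstract (digitization, damage detection, computational redesign, robotic refabrication) into a simple pipeline. Every function and name here is a hypothetical placeholder standing in for an external tool, not part of the published framework.

```python
# Hypothetical sketch of chaining the six-stage reuse workflow; each stage stands in
# for an external tool (photogrammetry software, a trained damage-detection model,
# a CAD redesign routine, a robot controller).
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class ReusePipeline:
    stages: list[tuple[str, Callable[[Any], Any]]] = field(default_factory=list)

    def add_stage(self, name: str, fn: Callable[[Any], Any]) -> "ReusePipeline":
        self.stages.append((name, fn))
        return self

    def run(self, salvaged_part: Any) -> Any:
        artifact = salvaged_part
        for name, fn in self.stages:
            artifact = fn(artifact)          # each stage consumes the previous output
            print(f"finished stage: {name}")
        return artifact

# Wiring mirrors the workflow described above: digitize -> detect damage ->
# redesign around detected features -> generate a robotic 3D-printing repair.
pipeline = (ReusePipeline()
            .add_stage("photogrammetry_scan",    lambda part: {"mesh": part})
            .add_stage("damage_detection",       lambda scan: {**scan, "defects": []})
            .add_stage("computational_redesign", lambda scan: {**scan, "repair_toolpath": []})
            .add_stage("robotic_3d_printing",    lambda plan: plan))
result = pipeline.run("salvaged_beam_01")
```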
{"title":"Digital tool integrations for architectural reuse of salvaged building materials","authors":"Malgorzata A. Zboinska, Frederik Göbel","doi":"10.1016/j.autcon.2024.105947","DOIUrl":"https://doi.org/10.1016/j.autcon.2024.105947","url":null,"abstract":"Building material reuse can reduce the environmental impact of construction yet its advanced digital support is still limited. Which digital tools could effectively support repair of highly irregular, salvaged materials? To probe this question, a framework featuring six advanced digital tools is proposed and verified through six design and prototyping experiments. The experiments demonstrate that a digital toolkit integrating photogrammetry, robot vision, machine learning, computer vision, computational design, and robotic 3D printing effectively supports repair and recovery of irregular reclaimed materials, enabling their robust digitization, damage detection, and feature-informed computational redesign and refabrication. These findings contribute to the advancement of digitally aided reuse practices in the construction sector, providing valuable insights into accommodating highly heterogeneous reclaimed materials by leveraging advanced automation and digitization. They provide the crucial and currently missing technological and methodological foundation needed to inform future research on industrial digital solutions for reuse.","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"64 1","pages":""},"PeriodicalIF":10.3,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142888280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic hazard analysis on construction sites using knowledge graphs integrated with real-time information
Pub Date: 2024-12-24 | DOI: 10.1016/j.autcon.2024.105938
Juntong Zhang, Xin Ruan, Han Si, Xiangyu Wang
Construction, as a significant production activity, is inherently prone to accidents. These accidents often result from a chain of multiple hazards. However, existing methods of hazard analysis are limited to single-dimensional network modeling and static analysis, which makes them inadequate for addressing the complexity and variability of construction sites. This paper presents a dynamic construction hazard analysis method that integrates real-time information into knowledge graphs. In this approach, label entities are added to general knowledge graphs, linking hazard entities to their labels. Labels identified through vision-based methods are then incorporated into the graphs, allowing for the effective extraction and updating of subgraphs in response to spatiotemporal changes in the scenario. Additionally, graph analysis metrics have been proposed to evaluate the system from multiple levels. Finally, the method was applied to a bridge foundation construction case, demonstrating its practicality and significance in preventing accidents by enabling dynamic hazard analysis.
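The sketch below illustrates, under stated assumptions, the label-entity idea: hazard nodes in a general knowledge graph are linked to label nodes, and labels reported by a vision detector activate the matching subgraph for the current scene. The use of networkx, the example entities, and the "all indicators detected" activation rule are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch (not the paper's implementation) of linking hazard entities to
# label entities and extracting the subgraph activated by vision-detected labels.
import networkx as nx

kg = nx.DiGraph()
# General knowledge: hazards, their consequences, and the visual labels that indicate them.
kg.add_edge("unshored_excavation", "soil_collapse", relation="can_cause")
kg.add_edge("worker_near_edge", "fall_from_height", relation="can_cause")
kg.add_edge("unshored_excavation", "label:excavation_pit", relation="indicated_by")
kg.add_edge("worker_near_edge", "label:person", relation="indicated_by")
kg.add_edge("worker_near_edge", "label:guardrail_missing", relation="indicated_by")

def active_subgraph(graph: nx.DiGraph, detected_labels: set[str]) -> nx.DiGraph:
    """Keep hazards whose indicating labels were all detected in the current scene."""
    label_nodes = {f"label:{name}" for name in detected_labels}
    active_hazards = set()
    for node in graph.nodes:
        indicators = {v for _, v, d in graph.out_edges(node, data=True)
                      if d.get("relation") == "indicated_by"}
        if indicators and indicators <= label_nodes:
            active_hazards.add(node)
    keep = active_hazards | (label_nodes & set(graph.nodes))
    for hazard in active_hazards:                       # keep downstream consequences too
        keep |= set(nx.descendants(graph, hazard))
    return graph.subgraph(keep).copy()

# Vision-based detection at time t reports these labels for the scene.
scene = active_subgraph(kg, {"person", "guardrail_missing"})
print(scene.edges(data=True))  # only the 'worker_near_edge' hazard chain remains active
```

As the scene changes over time, re-running the extraction with the newest detections is what keeps the analysis dynamic, which is the spatiotemporal updating behaviour the abstract describes.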
{"title":"Dynamic hazard analysis on construction sites using knowledge graphs integrated with real-time information","authors":"Juntong Zhang, Xin Ruan, Han Si, Xiangyu Wang","doi":"10.1016/j.autcon.2024.105938","DOIUrl":"https://doi.org/10.1016/j.autcon.2024.105938","url":null,"abstract":"Construction, as a significant production activity, is inherently prone to accidents. These accidents often result from a chain of multiple hazards. However, existing methods of hazard analysis are limited to single-dimensional network modeling and static analysis, which makes them inadequate for addressing the complexity and variability of construction sites. This paper presents a dynamic construction hazard analysis method that integrates real-time information into knowledge graphs. In this approach, label entities are added to general knowledge graphs, linking hazard entities to their labels. Labels identified through vision-based methods are then incorporated into the graphs, allowing for the effective extraction and updating of subgraphs in response to spatiotemporal changes in the scenario. Additionally, graph analysis metrics have been proposed to evaluate the system from multiple levels. Finally, the method was applied to a bridge foundation construction case, demonstrating its practicality and significance in preventing accidents by enabling dynamic hazard analysis.","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"66 1","pages":""},"PeriodicalIF":10.3,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142888984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementation of hardware technologies in offsite construction (2014–2023)
Pub Date: 2024-12-24 | DOI: 10.1016/j.autcon.2024.105948
Erfan Hedayati, Ali Zabihi Kolaei, Mostafa Khanzadi, Gholamreza Ghodrati Amiri
Attention to offsite construction (OSC) is increasing as it can reduce construction problems. At the same time, researchers are exploring various technologies to maximize the benefits of OSC and minimize its challenges. In contrast to other review papers that have studied the implementation of technologies in OSC with a focus on a specific application, a broad range of technologies, or a single group of technologies, this paper presents a systematic review of hardware technologies that can be used physically in OSC. After analyzing 130 articles published between 2014 and 2023, the technologies were categorized into three groups. These technologies and their integrations are examined under mono- and multi-technology approaches to determine their applications, their implementation maturity in the literature, and their advantages and disadvantages. Ultimately, this paper outlines the implications for practitioners and identifies future needs, clarifying the path forward for practitioners and researchers.
{"title":"Implementation of hardware technologies in offsite construction (2014–2023)","authors":"Erfan Hedayati, Ali Zabihi Kolaei, Mostafa Khanzadi, Gholamreza Ghodrati Amiri","doi":"10.1016/j.autcon.2024.105948","DOIUrl":"https://doi.org/10.1016/j.autcon.2024.105948","url":null,"abstract":"Attention to offsite construction (OSC) is increasing as it can reduce construction problems. At the same time, researchers are exploring various technologies to maximize the benefits of OSC and minimize its challenges. In contrast to other review papers that have studied the implementation of technologies in OSC with a particular focus on a specific application, a wide range of or a group of technologies, this paper presents a systematic review of hardware technologies that can be used physically in OSC. After analyzing 130 articles published in the last decade from 2014, the technologies were categorized into three groups. These technologies are examined along with their integrations under mono- and multi-technology approaches to determine the applications of technologies, their implementation maturity in studies, and their advantages and disadvantages. Ultimately, this paper outlines its impacts on practitioners and identifies future needs, clarifying the path for practitioners and researchers.","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"3 1","pages":""},"PeriodicalIF":10.3,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142888279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated system of scaffold point cloud data acquisition using a robot dog
Pub Date: 2024-12-24 | DOI: 10.1016/j.autcon.2024.105944
Duho Chung, Juhyeon Kim, Sunwoong Paik, Seunghun Im, Hyoungkwan Kim
This paper introduces Automated system of Scaffold Point cloud data Acquisition using a Robot dog (ASPAR), a method for automating scaffold point cloud data acquisition using a quadruped robot. The method consists of three stages: (1) Initial Exploration, where the robot autonomously explores the site and detects scaffolds in real-time; (2) Scan Plan Generation, which uses 3D SLAM data and scaffold detection results to determine optimal scan positions and generate paths between them; and (3) Scan Plan Execution, where the robot follows these paths and performs scans at the designated positions. ASPAR demonstrated its effectiveness in scanning scaffold structures on construction sites without prior information. Experimental results showed that, compared to manual scans by skilled workers, it secured an average of 0.7 additional scan positions, achieving a coverage rate of 106.1 %. In a large-scale outdoor construction site experiment, it recorded a coverage rate of 96.8 %, validating its real-world applicability.
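A minimal sketch of the scan-plan-generation idea is given below, assuming a greedy coverage heuristic over candidate standpoints around the detected scaffolds. The distance-based visibility proxy, the stopping rule, and all parameters are placeholders for illustration; ASPAR's actual planner operates on 3D SLAM data and scaffold detection results.

```python
# Hedged sketch: greedily pick scan positions until additional scans add little coverage.
import numpy as np

def greedy_scan_plan(candidates: np.ndarray, targets: np.ndarray,
                     scan_range: float = 10.0, min_gain: int = 5) -> list[int]:
    """candidates: (M, 2) standpoint coordinates; targets: (N, 2) scaffold sample points."""
    covered = np.zeros(len(targets), dtype=bool)
    chosen: list[int] = []
    # Visibility proxy: a target counts as covered if it lies within scan_range of the standpoint.
    dists = np.linalg.norm(candidates[:, None, :] - targets[None, :, :], axis=2)
    visible = dists <= scan_range                      # (M, N) boolean visibility matrix
    while True:
        gains = (visible & ~covered[None, :]).sum(axis=1)
        best = int(np.argmax(gains))
        if gains[best] < min_gain:                     # stop when extra scans add little
            break
        chosen.append(best)
        covered |= visible[best]
    return chosen

rng = np.random.default_rng(0)
positions = greedy_scan_plan(rng.uniform(0, 50, (40, 2)), rng.uniform(0, 50, (300, 2)))
print(len(positions), "scan positions selected")
```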
{"title":"Automated system of scaffold point cloud data acquisition using a robot dog","authors":"Duho Chung, Juhyeon Kim, Sunwoong Paik, Seunghun Im, Hyoungkwan Kim","doi":"10.1016/j.autcon.2024.105944","DOIUrl":"https://doi.org/10.1016/j.autcon.2024.105944","url":null,"abstract":"This paper introduces Automated system of Scaffold Point cloud data Acquisition using a Robot dog (ASPAR), a method for automating scaffold point cloud data acquisition using a quadruped robot. The method consists of three stages: (1) Initial Exploration, where the robot autonomously explores the site and detects scaffolds in real-time; (2) Scan Plan Generation, which uses 3D SLAM data and scaffold detection results to determine optimal scan positions and generate paths between them; and (3) Scan Plan Execution, where the robot follows these paths and performs scans at the designated positions. ASPAR demonstrated its effectiveness in scanning scaffold structures on construction sites without prior information. Experimental results showed that, compared to manual scans by skilled workers, it secured an average of 0.7 additional scan positions, achieving a coverage rate of 106.1 %. In a large-scale outdoor construction site experiment, it recorded a coverage rate of 96.8 %, validating its real-world applicability.","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"14 1","pages":""},"PeriodicalIF":10.3,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142888281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic crack defect detection via multiscale feature aggregation and adaptive fusion
Pub Date: 2024-12-21 | DOI: 10.1016/j.autcon.2024.105934
Hanyun Huang, Mingyang Ma, Suli Bai, Lei Yang, Yanhong Liu
In this paper, a multi-scale feature aggregation and adaptive fusion network is proposed for automatic and accurate pavement crack defect segmentation. Specifically, to address the linear characteristics of pavement crack defects, a multiple-dimension attention (MDA) module is proposed to effectively capture long-range correlations along three dimensions (space, width, and height) and to help identify pavement crack defect boundaries. On this basis, a multi-scale skip connection (MSK) module is proposed, which effectively utilizes feature information from multiple receptive fields to support accurate feature reconstruction in the decoding stage. Furthermore, a multi-scale attention fusion (MSAF) module is proposed to realize effective multi-scale feature representation and aggregation. Finally, an adaptive weight fusion (AWL) module is proposed to dynamically fuse the output features across different network layers for accurate multi-scale crack defect segmentation. Experiments indicate that the proposed network is superior to other mainstream segmentation networks on the pixel-wise crack defect detection task.
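The sketch below gives one plausible reading of the adaptive weight fusion step in PyTorch: outputs of several decoder layers are resized to a common resolution and combined with learnable, softmax-normalised weights before a pixel-wise prediction head. The module name, tensor shapes, and the 1×1 head are assumptions for illustration, not the paper's exact architecture.

```python
# Minimal sketch of an adaptive weight fusion module (assumed design, not the paper's AWL).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveWeightFusion(nn.Module):
    def __init__(self, num_inputs: int, in_channels: int, num_classes: int = 1):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_inputs))   # one learnable logit per layer output
        self.head = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, features: list[torch.Tensor]) -> torch.Tensor:
        target_size = features[0].shape[-2:]
        resized = [F.interpolate(f, size=target_size, mode="bilinear", align_corners=False)
                   for f in features]
        alpha = torch.softmax(self.weights, dim=0)              # adaptive per-layer weights
        fused = sum(a * f for a, f in zip(alpha, resized))
        return self.head(fused)                                 # pixel-wise crack logits

# Three decoder outputs at decreasing resolution, all with 32 channels.
feats = [torch.randn(2, 32, 128, 128), torch.randn(2, 32, 64, 64), torch.randn(2, 32, 32, 32)]
print(AdaptiveWeightFusion(num_inputs=3, in_channels=32)(feats).shape)  # torch.Size([2, 1, 128, 128])
```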
{"title":"Automatic crack defect detection via multiscale feature aggregation and adaptive fusion","authors":"Hanyun Huang, Mingyang Ma, Suli Bai, Lei Yang, Yanhong Liu","doi":"10.1016/j.autcon.2024.105934","DOIUrl":"https://doi.org/10.1016/j.autcon.2024.105934","url":null,"abstract":"In this paper, a multi-scale feature aggregation and adaptive fusion network, is proposed for automatic and accurate pavement crack defect segmentation. Specifically, faced with the linear characteristic of pavement crack defects, a multiple-dimension attention (MDA) module is proposed to effectively capture long-range correlation from three directions, including space, width and height, and help identify the pavement crack defect boundaries. On this basis, a multi-scale skip connection (MSK) module is proposed, which can effectively utilize the feature information from multiple receptive fields to support accurate feature reconstruction in the decoding stage. Furthermore, a multi-scale attention fusion (MSAF) module is proposed to realize effective multi-scale feature representation and aggregation. Finally, an adaptive weight fusion (AWL) module is proposed to dynamically fuse the output features across different network layers for accurate multi-scale crack defect segmentation. Experiments indicate that proposed network is superior to other mainstream segmentation networks on pixelwise crack defect detection task.","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"155 1","pages":""},"PeriodicalIF":10.3,"publicationDate":"2024-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142887890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graph neural networks for classification and error detection in 2D architectural detail drawings
Pub Date: 2024-12-20 | DOI: 10.1016/j.autcon.2024.105936
Jaechang Ko, Donghyuk Lee
The assessment and classification of architectural sectional drawings is critical in the architecture, engineering, and construction (AEC) field, where the accurate representation of complex structures and the extraction of meaningful patterns are key challenges. This paper establishes a framework for standardizing different forms of architectural drawings into a consistent graph format and evaluates different Graph Neural Network (GNN) architectures, pooling methods, node features, and masking techniques. The paper demonstrates that GNNs can be practically applied in the design and review process, particularly for categorizing details and detecting errors in architectural drawings. The potential for visual explanations of model decisions using Explainable AI (XAI) is also explored to enhance the reliability and user understanding of AI models in architecture. Finally, the paper highlights the potential of GNNs in architectural data analysis and outlines the challenges and future directions for broader application in the AEC field.
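For readers unfamiliar with graph-level classification, the sketch below applies a generic two-layer GCN with mean pooling to a toy "drawing graph" in which nodes stand for geometric primitives. It is illustrative only; the architectures, pooling methods, node features, and masking techniques actually evaluated in the paper are not reproduced.

```python
# Schematic graph classifier over a dense adjacency matrix (illustrative, not the paper's model).
import torch
import torch.nn as nn

class DrawingGraphClassifier(nn.Module):
    def __init__(self, in_dim: int, hidden: int, num_classes: int):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden)
        self.lin2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, num_classes)

    @staticmethod
    def normalise(adj: torch.Tensor) -> torch.Tensor:
        # Symmetric normalisation with self-loops: D^-1/2 (A + I) D^-1/2
        a = adj + torch.eye(adj.size(0))
        d = a.sum(dim=1).pow(-0.5)
        return d.unsqueeze(1) * a * d.unsqueeze(0)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        a = self.normalise(adj)
        h = torch.relu(a @ self.lin1(node_feats))   # message passing, layer 1
        h = torch.relu(a @ self.lin2(h))            # message passing, layer 2
        return self.out(h.mean(dim=0))              # mean pooling -> drawing-level logits

# A toy drawing graph: 5 geometric primitives (nodes) with 8-dimensional features.
x = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.T) > 0).float()                   # undirected connectivity
logits = DrawingGraphClassifier(in_dim=8, hidden=16, num_classes=4)(x, adj)
print(logits.shape)  # torch.Size([4])
```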
{"title":"Graph neural networks for classification and error detection in 2D architectural detail drawings","authors":"Jaechang Ko, Donghyuk Lee","doi":"10.1016/j.autcon.2024.105936","DOIUrl":"https://doi.org/10.1016/j.autcon.2024.105936","url":null,"abstract":"The assessment and classification of architectural sectional drawings is critical in the architecture, engineering, and construction (AEC) field, where the accurate representation of complex structures and the extraction of meaningful patterns are key challenges. This paper established a framework for standardizing different forms of architectural drawings into a consistent graph format, and evaluated different Graph Neural Networks (GNNs) architectures, pooling methods, node features, and masking techniques. This paper demonstrates that GNNs can be practically applied in the design and review process, particularly for categorizing details and detecting errors in architectural drawings. The potential for visual explanations of model decisions using Explainable AI (XAI) is also explored to enhance the reliability and user understanding of AI models in architecture. This paper highlights the potential of GNNs in architectural data analysis and outlines the challenges and future directions for broader application in the AEC field.","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"12 1","pages":""},"PeriodicalIF":10.3,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142887891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated reality capture for indoor inspection using BIM and a multi-sensor quadruped robot
Pub Date: 2024-12-19 | DOI: 10.1016/j.autcon.2024.105930
Zhengyi Chen, Changhao Song, Boyu Wang, Xingyu Tao, Xiao Zhang, Fangzhou Lin, Jack C.P. Cheng
This paper presents a real-time, cost-effective navigation and localization framework tailored for quadruped robot-based indoor inspections. A 4D Building Information Model is utilized to generate a navigation map, supporting robotic pose initialization and path planning. The framework integrates a cost-effective, multi-sensor SLAM system that combines inertial-corrected 2D laser scans with fused laser and visual-inertial SLAM. Additionally, a deep-learning-based object recognition model is trained for multi-dimensional reality capture, enhancing comprehensive indoor element inspection. Validated on a quadruped robot equipped with an RGB-D camera, IMU, and 2D LiDAR in an academic setting, the framework achieved collision-free navigation, reduced localization drift by 71.77 % compared to traditional SLAM methods, and provided accurate large-scale point cloud reconstruction with 0.119-m precision. Furthermore, the object detection model attained mean average precision scores of 73.7 % for 2D detection and 62.9 % for 3D detection.
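As a simplified illustration of the path-planning step, the sketch below runs A* over a toy 2D occupancy grid standing in for the BIM-derived navigation map. The grid, heuristic, and 4-connected motion model are generic placeholders; the framework's actual planner, SLAM fusion, and pose initialization are not reproduced.

```python
# A* over an occupancy grid (1 = occupied cell such as a wall or column) - generic sketch.
import heapq

def astar(grid: list[list[int]], start: tuple[int, int], goal: tuple[int, int]):
    """Return a collision-free list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])     # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue
        came_from[cur] = parent
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                if g + 1 < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = g + 1
                    heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None

floor = [[0, 0, 0, 1, 0],
         [1, 1, 0, 1, 0],
         [0, 0, 0, 0, 0]]
print(astar(floor, (0, 0), (2, 4)))
```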
{"title":"Automated reality capture for indoor inspection using BIM and a multi-sensor quadruped robot","authors":"Zhengyi Chen, Changhao Song, Boyu Wang, Xingyu Tao, Xiao Zhang, Fangzhou Lin, Jack C.P. Cheng","doi":"10.1016/j.autcon.2024.105930","DOIUrl":"https://doi.org/10.1016/j.autcon.2024.105930","url":null,"abstract":"This paper presents a real-time, cost-effective navigation and localization framework tailored for quadruped robot-based indoor inspections. A 4D Building Information Model is utilized to generate a navigation map, supporting robotic pose initialization and path planning. The framework integrates a cost-effective, multi-sensor SLAM system that combines inertial-corrected 2D laser scans with fused laser and visual-inertial SLAM. Additionally, a deep-learning-based object recognition model is trained for multi-dimensional reality capture, enhancing comprehensive indoor element inspection. Validated on a quadruped robot equipped with an RGB-D camera, IMU, and 2D LiDAR in an academic setting, the framework achieved collision-free navigation, reduced localization drift by 71.77 % compared to traditional SLAM methods, and provided accurate large-scale point cloud reconstruction with 0.119-m precision. Furthermore, the object detection model attained mean average precision scores of 73.7 % for 2D detection and 62.9 % for 3D detection.","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"14 1","pages":""},"PeriodicalIF":10.3,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142867664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Delamination detection in concrete decks using numerical simulation and UAV-based infrared thermography with deep learning
Pub Date: 2024-12-19 | DOI: 10.1016/j.autcon.2024.105940
Dyala Aljagoub, Ri Na, Chongsheng Cheng
The potential of concrete bridge delamination detection using infrared thermography (IRT) has grown with technological advancements. However, most current studies require an external input (a subjective threshold), reducing the detection's objectivity and accuracy. Deep learning enables automation and streamlines data processing, potentially enhancing accuracy. Yet, data scarcity poses a challenge to deep learning applications, hindering their performance. This paper develops a deep learning approach using supervised object detection models trained on an extended dataset of real and simulated images. The supplementation with numerical-simulation images seeks to eliminate the limited-data barrier by creating a comprehensive dataset, potentially improving model performance and robustness. Mask R-CNN and YOLOv5 were tested across various training data and model parameter combinations to develop an optimal detection model. When tested, the model detected delaminations of varying properties more accurately than currently employed IRT techniques.
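The sketch below shows how one of the tested detector families (Mask R-CNN, via torchvision) could be configured for a background-plus-delamination label set; the mixed real/simulated thermal training data, training loop, and hyperparameters are omitted, and the specific settings here are assumptions rather than the paper's configuration (torchvision ≥ 0.13 assumed).

```python
# Hedged setup sketch: swap the COCO-pretrained heads for a 2-class (background + delamination) task.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 2  # background + delamination
model = maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box and mask heads so the pretrained backbone predicts our classes.
in_feats = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes)
in_feats_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feats_mask, 256, num_classes)

# Inference on a single placeholder thermal image rendered as a 3-channel tensor.
model.eval()
with torch.no_grad():
    prediction = model([torch.rand(3, 480, 640)])[0]
print(prediction["boxes"].shape, prediction["scores"].shape)
```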
{"title":"Delamination detection in concrete decks using numerical simulation and UAV-based infrared thermography with deep learning","authors":"Dyala Aljagoub, Ri Na, Chongsheng Cheng","doi":"10.1016/j.autcon.2024.105940","DOIUrl":"https://doi.org/10.1016/j.autcon.2024.105940","url":null,"abstract":"The potential of concrete bridge delamination detection using infrared thermography (IRT) has grown with technological advancements. However, most current studies require an external input (subjective threshold), reducing the detection's objectivity and accuracy. Deep learning enables automation and streamlines data processing, potentially enhancing accuracy. Yet, data scarcity poses a challenge to deep learning applications, hindering their performance. This paper aims to develop a deep learning approach using supervised learning object detection models with extended data from real and simulated images. The numerical simulation image supplementation seeks to eliminate the limited data barrier by creating a comprehensive dataset, potentially improving model performance and robustness. Mask R-CNN and YOLOv5 were tested across various training data and model parameter combinations to develop an optimal detection model. Lastly, when tested, the model showed a remarkable ability to detect delamination of varying properties accurately compared to currently employed IRT techniques.","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"31 1","pages":""},"PeriodicalIF":10.3,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142867662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Egocentric-video-based construction quality supervision (EgoConQS): Application of automatic key activity queries
Pub Date: 2024-12-18 | DOI: 10.1016/j.autcon.2024.105933
Jingjing Guo, Lu Deng, Pengkun Liu, Tao Sun
Construction quality supervision is essential for project success and safety. Traditional methods relying on manual inspections and paper records are time-consuming, error-prone, and difficult to verify. In-process construction quality supervision offers a more direct and effective approach. Recent advancements in computer vision and egocentric video analysis present opportunities to enhance these processes. This paper introduces the use of key activity queries on egocentric video data for construction quality supervision. A framework, Egocentric Video-Based Construction Quality Supervision (EgoConQS), is developed using a video self-stitching graph network to identify key activities in egocentric videos. EgoConQS facilitates efficient monitoring and quick review of key activity frames. Empirical evaluation with real-world data demonstrates an average recall of 35.85 % and a mAP score of 6.07 %, highlighting the potential of key activity queries for reliable and convenient quality supervision.
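A toy sketch of the querying step only: given per-frame confidence scores for a key activity (for example, a hypothetical "rebar tying" class), return the time segments a supervisor should review. The scores would come from a model such as the paper's video self-stitching graph network, which is not reproduced here; the threshold, frame rate, and minimum duration are illustrative assumptions.

```python
# Group frames whose key-activity score exceeds a threshold into reviewable segments.
import numpy as np

def query_key_activity(frame_scores: np.ndarray, fps: float = 30.0,
                       threshold: float = 0.5, min_frames: int = 15):
    """Return (start_s, end_s) segments where the activity score exceeds the threshold."""
    active = frame_scores >= threshold
    segments, start = [], None
    for i, flag in enumerate(active):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_frames:                 # drop spurious short detections
                segments.append((start / fps, i / fps))
            start = None
    if start is not None and len(active) - start >= min_frames:
        segments.append((start / fps, len(active) / fps))
    return segments

scores = np.clip(np.sin(np.linspace(0, 12, 900)) + 0.2, 0, 1)   # fake 30 s of frame scores
print(query_key_activity(scores))
```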
{"title":"Egocentric-video-based construction quality supervision (EgoConQS): Application of automatic key activity queries","authors":"Jingjing Guo, Lu Deng, Pengkun Liu, Tao Sun","doi":"10.1016/j.autcon.2024.105933","DOIUrl":"https://doi.org/10.1016/j.autcon.2024.105933","url":null,"abstract":"Construction quality supervision is essential for project success and safety. Traditional methods relying on manual inspections and paper records are time-consuming, error-prone, and difficult to verify. In-process construction quality supervision offers a more direct and effective approach. Recent advancements in computer vision and egocentric video analysis present opportunities to enhance these processes. This paper introduces the use of key activity queries on egocentric video data for construction quality supervision. A framework, Egocentric Video-Based Construction Quality Supervision (EgoConQS), is developed using a video self-stitching graph network to identify key activities in egocentric videos. EgoConQS facilitates efficient monitoring and quick review of key activity frames. Empirical evaluation with real-world data demonstrates an average recall of 35.85 % and a mAP score of 6.07 %, highlighting the potential of key activity queries for reliable and convenient quality supervision.","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"88 1","pages":""},"PeriodicalIF":10.3,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142867668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Experimental study on in-situ mesh fabrication for reinforcing 3D-printed concrete
Pub Date: 2024-12-18 | DOI: 10.1016/j.autcon.2024.105923
Xiangpeng Cao, Shuoli Wu, Hongzhi Cui
The lack of reinforcement persists as a significant issue in 3D-printed concrete, particularly concerning continuous vertical reinforcement along the direction of mortar stacking. This paper introduces an in-situ mesh fabrication technique that involves injecting a high-flowability material to connect reinforcement segments, resulting in a reinforcing mesh within the stacked mortar. Parallel and interwoven reinforcing steel fibers were inserted and epoxy-coated in situ within the cast and 3D-printed beams for flexural experiments and interfacial characterizations. The in-situ fabricated mesh exhibited more significant enhancement than the parallel independent reinforcements, both in the horizontal and vertical directions, achieving a maximum flexural enhancement of 123.6 % with an epoxy-coated steel fiber mesh. The high-flowability epoxy healed the gaps inside the concrete caused by the mesh fabrication. This paper provides experimental validation of the feasibility of reinforcement integration in all directions within the final 3D-printed concrete structure, thereby supporting the practical application of 3D printing technology.
{"title":"Experimental study on in-situ mesh fabrication for reinforcing 3D-printed concrete","authors":"Xiangpeng Cao, Shuoli Wu, Hongzhi Cui","doi":"10.1016/j.autcon.2024.105923","DOIUrl":"https://doi.org/10.1016/j.autcon.2024.105923","url":null,"abstract":"The lack of reinforcements persisted as a significant issue in 3D-printed concrete, particularly concerning the continuous vertical reinforcement along the direction of mortar stacking. This paper introduced an in-situ mesh fabrication technique that involved injecting high-flowability material to connect reinforcement segments, resulting in a reinforcing mesh within the stacked mortar. Parallel and interwoven reinforcing steel fibers were inserted and epoxy-coated in-situ within the cast and 3D-printed beams for flexural experiments and interfacial characterizations. The in-situ fabricated mesh exhibited more significant enhancement than the parallel independent reinforcements, both in the horizontal and vertical directions, achieving a maximum flexural enhancement of 123.6 % by an epoxy-coated steel fiber mesh. The high-flowability epoxy healed the gaps inside the concrete caused by the mesh fabrication. This paper provides experimental validation for the feasibility of reinforcement integration in all directions within the final 3D-printed concrete structure, thereby supporting the practical application of 3D printing technology.","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"274 1","pages":""},"PeriodicalIF":10.3,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142867672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}