Automatic identification of integrated construction elements using open-set object detection based on image and text modality fusion
Ruying Cai, Zhigang Guo, Xiangsheng Chen, Jingru Li, Yi Tan, Jingyuan Tang
Advanced Engineering Informatics, Volume 64, Article 103075
DOI: 10.1016/j.aei.2024.103075
Published: 2025-01-06
URL: https://www.sciencedirect.com/science/article/pii/S1474034624007262
Citations: 0
Abstract
The application of object detection technology in the field of construction safety contributes significantly to on-site safety management and has already shown considerable progress. However, current research primarily focuses on detecting pre-defined classes annotated within single datasets, whereas in-depth research in construction safety requires detecting all influencing factors related to construction safety. The emergence of large language models offers new possibilities, and multimodal models that combine them with computer vision technology could break through the existing limitations. Therefore, this paper applies the Grounding DINO multimodal model to the automatic detection of integrated construction elements, enhancing construction safety. First, this study reviews the literature to collect relevant datasets, summarizes their characteristics, and processes the data, including the processing of annotation files and the integration of classes. Subsequently, the Grounding DINO model is constructed, encompassing image and text feature extraction and enhancement, and a cross-modal decoder that fuses image and text features. Multiple dataset experimental strategies are designed to validate Grounding DINO's capability for continuous learning, and a unified class system, built from the integrated classes, serves as the text prompt input for detection. Finally, experiments involving zero-shot and fine-tuning evaluations, continuous learning validation, and effectiveness testing are conducted. The experimental results demonstrate the generalization capability and the potential for continuous learning of the multimodal model.
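To make the paper's core mechanism concrete: Grounding DINO takes a natural-language text prompt listing the target classes (here, the unified class system of construction elements) and performs open-set detection without those classes needing to appear in its training annotations. The following is a minimal, hedged sketch using the Hugging Face Transformers port of Grounding DINO; the checkpoint ID, class names, image path, and thresholds are illustrative assumptions, not the authors' exact configuration.

```python
def build_text_prompt(classes):
    """Grounding DINO expects a prompt of lowercase class phrases,
    each terminated with a period, e.g. 'hardhat. excavator.'"""
    return " ".join(c.strip().lower().rstrip(".") + "." for c in classes)


def detect_elements(image_path, classes,
                    model_id="IDEA-Research/grounding-dino-tiny"):
    """Zero-shot detection of construction elements from a text prompt.

    Heavy dependencies are imported lazily so the prompt builder above
    can be used without torch/transformers installed. The checkpoint
    and thresholds are assumptions for illustration.
    """
    import torch
    from PIL import Image
    from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForZeroShotObjectDetection.from_pretrained(model_id)

    image = Image.open(image_path)
    prompt = build_text_prompt(classes)

    # Fuse image and text features; the model grounds each detected box
    # in a span of the text prompt (open-set, no fixed class head).
    inputs = processor(images=image, text=prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # Convert raw logits/boxes to (label, score, box) detections.
    results = processor.post_process_grounded_object_detection(
        outputs,
        inputs.input_ids,
        threshold=0.35,        # assumed box-confidence threshold
        text_threshold=0.25,   # assumed text-grounding threshold
        target_sizes=[image.size[::-1]],
    )[0]
    return results


# Example: detect_elements("site_photo.jpg", ["hardhat", "excavator", "safety vest"])
```

This mirrors the workflow the abstract describes: the integrated, unified class list becomes a single text prompt, so adding a new construction-safety class is a prompt change rather than a retraining step, which is what enables the zero-shot and continuous-learning evaluations.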
About the journal
Advanced Engineering Informatics is an international journal that solicits research papers with an emphasis on 'knowledge' and 'engineering applications'. The journal seeks original papers that report progress in applying methods of engineering informatics. These papers should have engineering relevance and help provide a scientific basis for more reliable, spontaneous, and creative engineering decision-making. Additionally, papers should demonstrate the science of supporting knowledge-intensive engineering tasks and validate the generality, power, and scalability of new methods through rigorous evaluation, preferably both qualitative and quantitative. Abstracting and indexing for Advanced Engineering Informatics include Science Citation Index Expanded, Scopus, and INSPEC.