The application of object detection technology to construction safety contributes significantly to on-site safety management and has already made considerable progress. However, current research focuses primarily on detecting pre-defined classes annotated within single datasets, whereas in-depth construction safety research requires detecting all safety-related influencing factors on site. The emergence of large language models offers new possibilities, and multimodal models that combine them with computer vision technology could break through these limitations. This paper therefore applies the Grounding DINO multimodal model to the automatic detection of integrated construction elements, enhancing construction safety. First, this study reviews the literature to collect relevant datasets, summarizes their characteristics, and processes the data, including reformatting annotation files and integrating classes across datasets. Next, the Grounding DINO model is constructed, encompassing image and text feature extraction and enhancement and a cross-modality decoder that fuses image and text features. Multi-dataset experimental strategies are then designed to validate Grounding DINO’s capacity for continuous learning, and a unified class system is built from the integrated classes to serve as the model’s text-prompt input for detection. Finally, experiments covering zero-shot and fine-tuned evaluation, continuous-learning validation, and effectiveness testing are conducted. The experimental results demonstrate the multimodal model’s generalization capability and its potential for continuous learning.
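The class-integration and text-prompt steps described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the label names and the unification mapping are hypothetical assumptions, while the prompt format (lowercase category names separated by " . ") follows the Grounding DINO convention for text-prompt input.

```python
# Sketch: map dataset-specific labels onto a unified class system, then
# build a Grounding DINO detection text prompt from the unified classes.
# The mapping below is illustrative only (assumed labels, not the paper's).
UNIFIED_LABELS = {
    "hardhat": "helmet",
    "helmet": "helmet",
    "vest": "safety vest",
    "safety_vest": "safety vest",
    "person": "worker",
    "worker": "worker",
}

def unify_classes(dataset_labels):
    """Map raw per-dataset labels to unified classes, preserving
    first-seen order and dropping duplicates."""
    unified_classes = []
    for label in dataset_labels:
        key = label.lower().strip()
        unified = UNIFIED_LABELS.get(key, key)  # fall back to the raw label
        if unified not in unified_classes:
            unified_classes.append(unified)
    return unified_classes

def build_text_prompt(classes):
    """Join class names into a Grounding DINO text prompt:
    lowercase categories separated by ' . ', ending with ' .'."""
    return " . ".join(c.lower() for c in classes) + " ."

if __name__ == "__main__":
    # Labels as they might appear in two different source datasets.
    raw = ["Hardhat", "vest", "person", "helmet"]
    classes = unify_classes(raw)
    print(build_text_prompt(classes))  # helmet . safety vest . worker .
```

In a multi-dataset setting, the same unification step also lets annotation files from different sources be merged under one label space before fine-tuning.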