Deep learning–based defect segmentation enables automated detection, classification, localisation, and quantification of structural defects. Although supervised segmentation methods yield strong performance, they require extensive pixel-level annotations and struggle with data scarcity, class imbalance, thin-crack segmentation, and generalisation across diverse conditions. This paper proposes a semi-supervised two-stage segmentation framework that integrates a high-accuracy object detection model (Stage I: localisation using YOLO11/Oriented YOLO11) with zero-shot unsupervised segmentation (Stage II: pixel-level mapping using the Segment Anything Model with box prompts). The method requires only object-detection labels, eliminating the need for dedicated segmentation annotations. Experiments on benchmark datasets involving steel, masonry, concrete cracks, spalling, and corrosion demonstrate that the hybrid Oriented YOLO11 and SAM model achieves state-of-the-art performance, with an average Dice score of 0.7, comparable to that of fully supervised models. The proposed framework offers real-time performance, strong generalisation, and high scalability, making it a promising solution for robust structural defect segmentation.
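To illustrate how Stage I output feeds Stage II and how the reported Dice score is computed, the sketch below shows the two glue steps in plain Python: filtering detector boxes into SAM-style box prompts and scoring a predicted mask against ground truth with the Dice coefficient. All function and variable names here are hypothetical for illustration; the actual YOLO11 and SAM APIs supply and consume these quantities as tensors.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels

def boxes_to_sam_prompts(detections: List[Tuple[Box, float]],
                         conf_thresh: float = 0.5) -> List[Box]:
    """Keep confident Stage-I detections as box prompts for Stage II.

    `detections` is a hypothetical list of (xyxy box, confidence) pairs;
    in practice these come from the YOLO11 detector's output.
    """
    return [box for box, conf in detections if conf >= conf_thresh]

def dice(pred: List[List[int]], gt: List[List[int]]) -> float:
    """Dice coefficient 2|A∩B| / (|A| + |B|) for two binary masks."""
    inter = sum(p & g for pr, gr in zip(pred, gt) for p, g in zip(pr, gr))
    total = sum(map(sum, pred)) + sum(map(sum, gt))
    return 2.0 * inter / total if total else 1.0

# Toy 4x4 masks: predicted defect region vs. ground truth.
pred = [[0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
gt = [[0, 1, 1, 0],
      [0, 1, 0, 0],
      [0, 0, 0, 0],
      [0, 0, 0, 0]]
print(round(dice(pred, gt), 3))  # → 0.857
```

Because Stage II is zero-shot, the only trainable component is the Stage-I detector, which is why the framework needs box labels but no segmentation masks.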