{"title":"Automatic identification of rock fractures based on deep learning","authors":"Yaopeng Ji, Shengyuan Song, Wen Zhang, Yuchao Li, Jingyu Xue, Jianping Chen","doi":"10.1016/j.enggeo.2024.107874","DOIUrl":null,"url":null,"abstract":"<div><div>Rock fractures are one of the main factors leading to rock failure. Accurately extracting fracture characteristics is crucial for understanding the rock failure mechanism. Inspired by the latest developments in computer vision, we introduce a state-of-the-art deep learning model YOLACT++ for the automated interpretation of rock fractures. YOLACT++ inherits the basic architecture of YOLACT (You Only Look At CoefficienTs) and optimizes the backbone network, which improves segmentation accuracy while ensuring real-time performance. Based on Unmanned Aerial Vehicle multi-angled proximity photography, the dataset is collected from various rocky slopes for model training and validation. We propose performance evaluation metrics for the model, including intersection over union, precision, and recall, as well as quantitative parameters for describing fractures, including orientation, trace length, roughness, aperture, spacing, and fracture intensity. The segmentation results of YOLACT++ are compared with two other classic instance segmentation models, the Mask Region-based Convolutional Neural Network (Mask R-CNN) and the You Only Look Once (YOLO) V8. The results show that YOLACT++ has a stronger generalization ability, with more accurate segmentation results at image boundaries. With the ResNet-101 backbone network, YOLACT++ achieves 93.8 %, 87.1 % and 92.2 % for precision, intersection over union and recall, respectively. This represents improvements of 5.4 %, 3.6 %, and 8.3 % compared to Mask R-CNN, and 3.3 %, 7.8 %, and 4.2 % compared to YOLO V8. Overall, the deep learning-based YOLACT++ model proposed in this study provides an efficient and reliable approach for the automated interpretation of rock fractures. It can also be applied to crack recognition in other materials.</div></div>","PeriodicalId":11567,"journal":{"name":"Engineering Geology","volume":"345 ","pages":"Article 107874"},"PeriodicalIF":6.9000,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Geology","FirstCategoryId":"89","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0013795224004745","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, GEOLOGICAL","Score":null,"Total":0}
Citations: 0
Abstract
Rock fractures are one of the main factors leading to rock failure, and accurately extracting fracture characteristics is crucial for understanding the rock failure mechanism. Inspired by recent developments in computer vision, we introduce a state-of-the-art deep learning model, YOLACT++, for the automated interpretation of rock fractures. YOLACT++ inherits the basic architecture of YOLACT (You Only Look At CoefficienTs) and optimizes the backbone network, improving segmentation accuracy while preserving real-time performance. The dataset for model training and validation is collected from various rocky slopes using multi-angle Unmanned Aerial Vehicle proximity photography. We propose performance evaluation metrics for the model, including intersection over union, precision, and recall, as well as quantitative parameters for describing fractures, including orientation, trace length, roughness, aperture, spacing, and fracture intensity. The segmentation results of YOLACT++ are compared with those of two other classic instance segmentation models, the Mask Region-based Convolutional Neural Network (Mask R-CNN) and You Only Look Once (YOLO) V8. The results show that YOLACT++ has stronger generalization ability and produces more accurate segmentation at image boundaries. With the ResNet-101 backbone network, YOLACT++ achieves 93.8%, 87.1%, and 92.2% for precision, intersection over union, and recall, respectively, representing improvements of 5.4%, 3.6%, and 8.3% over Mask R-CNN and of 3.3%, 7.8%, and 4.2% over YOLO V8. Overall, the deep learning-based YOLACT++ model proposed in this study provides an efficient and reliable approach for the automated interpretation of rock fractures and can also be applied to crack recognition in other materials.
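The abstract names precision, recall, and intersection over union (IoU) as the segmentation metrics. The following is a minimal sketch, not the authors' code, of how these pixel-wise metrics are conventionally computed between a predicted and a ground-truth binary fracture mask; the mask arrays and shapes below are illustrative assumptions.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray):
    """Pixel-wise precision, recall, and IoU for two binary masks of equal shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # fracture pixels correctly predicted
    fp = np.logical_and(pred, ~truth).sum()   # background pixels labelled as fracture
    fn = np.logical_and(~pred, truth).sum()   # fracture pixels that were missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    return precision, recall, iou

# Toy 3x3 masks (invented example)
pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
truth = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 0]])
print(segmentation_metrics(pred, truth))  # (1.0, 0.75, 0.75)
```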
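Among the fracture descriptors listed (orientation, trace length, roughness, aperture, spacing, and fracture intensity), spacing and linear intensity have simple standard definitions along a scanline. The sketch below illustrates those textbook definitions only; the positions, scanline length, and function name are hypothetical and do not reproduce the paper's extraction pipeline.

```python
import numpy as np

def spacing_and_intensity(intersections_m: np.ndarray, scanline_length_m: float):
    """Mean spacing between consecutive fracture intersections and linear intensity (P10)."""
    positions = np.sort(intersections_m)
    mean_spacing = np.diff(positions).mean() if positions.size > 1 else float("nan")
    p10 = positions.size / scanline_length_m  # fractures per metre of scanline
    return mean_spacing, p10

# Example: five fracture traces crossing a 10 m scanline (invented values)
print(spacing_and_intensity(np.array([0.8, 2.5, 4.1, 6.9, 9.3]), 10.0))  # (2.125, 0.5)
```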
About the journal:
Engineering Geology, an international interdisciplinary journal, serves as a bridge between earth sciences and engineering, focusing on geological and geotechnical engineering. It welcomes studies with relevance to engineering, environmental concerns, and safety, catering to engineering geologists with backgrounds in geology or civil/mining engineering. Topics include applied geomorphology, structural geology, geophysics, geochemistry, environmental geology, hydrogeology, land use planning, natural hazards, remote sensing, soil and rock mechanics, and applied geotechnical engineering. The journal provides a platform for research at the intersection of geology and engineering disciplines.