In landslide susceptibility assessment (LSA), inventory incompleteness affects the accuracy of different models to varying degrees, yet this issue remains under-researched. This study investigated six LSA models spanning heuristic, statistical, machine learning, and ensemble learning approaches (analytical hierarchy process (AHP), frequency ratio (FR), logistic regression (LR), Keras-based deep learning (KBDL), XGBoost, and LightGBM) across six sample sizes (100%, 90%, 75%, 50%, 25%, and 10%). Results revealed that XGBoost and LightGBM consistently outperformed the other models across all sample sizes. The LR and KBDL models followed, while the FR model was the most affected by sample-size variations. AHP, a knowledge-driven heuristic model, was unaffected by sample size. SHapley Additive exPlanations (SHAP) analysis identified elevation, NDVI, slope, land use, and distance to roads and rivers as pivotal indicators of landslide occurrence in the study area, suggesting that human activities significantly influence these events. Five time-varying indicators of human activity and climate supported this inference, providing a new way to identify landslide-triggering factors, especially in areas of intense human activity. Based on these findings, a comprehensive LSA framework is proposed to help landslide managers make informed decisions. Future research should focus on expanding model diversity to address the effects of sample size, enhancing the adaptability of the LSA framework, deepening the analysis of human-activity impacts on landslides using explainable machine learning techniques, addressing temporal inventory incompleteness in LSA, and critically evaluating model sensitivity to sample-size variations across multiple disciplines.
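To illustrate the sample-size sensitivity discussed above, the FR model (the most affected of the six) can be sketched on a toy synthetic raster. All values below (cell counts, slope classes, per-class landslide probabilities) are hypothetical and not taken from the study; shrinking the inventory makes the per-class ratios noisier:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy raster: each cell has a slope class (0-3) and a landslide flag.
# Class probabilities are hypothetical, rising with slope steepness.
n_cells = 20_000
slope_class = rng.integers(0, 4, size=n_cells)
p = np.array([0.01, 0.03, 0.08, 0.15])[slope_class]
landslide = rng.random(n_cells) < p

def frequency_ratio(classes, events, n_classes):
    """FR = (% of landslides in class) / (% of study area in class)."""
    fr = np.empty(n_classes)
    for c in range(n_classes):
        in_c = classes == c
        pct_events = events[in_c].sum() / max(events.sum(), 1)
        pct_area = in_c.sum() / classes.size
        fr[c] = pct_events / pct_area
    return fr

fr_full = frequency_ratio(slope_class, landslide, 4)

# Subsample the landslide inventory to mimic incompleteness.
for frac in (1.0, 0.5, 0.1):
    keep = rng.random(n_cells) < frac
    sub = landslide & keep   # drop a share of the known landslides
    fr = frequency_ratio(slope_class, sub, 4)
    print(frac, np.round(fr, 2))
```

With the full inventory, FR increases monotonically with slope class; at 10% of the inventory the estimates drift, mirroring the instability reported for the FR model.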
Traveltime calculations play an important role in exploration seismology, including traveltime tomography and seismic imaging. Seismic anisotropy poses a challenge for traveltime calculation because anisotropic eikonal solvers are more complex than their isotropic counterparts. To solve the eikonal equations in 2D tilted transversely isotropic (TTI) media, we developed a fast algorithm, built on the fast sweeping method, to compute the first-arrival traveltimes of quasi-P (qP)-, quasi-SV (qSV)-, and quasi-SH (qSH)-waves. For the qP- and qSV-waves, we analyzed the quartic coupled slowness-surface equation derived from the Christoffel equation and then constructed a local solver relating traveltime and slowness. We found that, in the local solver, one component of the slowness vector is known and the corresponding slowness equation is monotonic. This provides a strong basis for the proposed fast iterative algorithm, in which the Newton method is used to solve the qP- and qSV-wave slowness equation and determine the related traveltimes. For the qSH-wave, the slowness equation is quadratic and simple to solve. Numerical experiments demonstrate that the proposed method obtains accurate traveltimes for both simple and complicated 2D TTI models.
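The core of such a local solver, Newton's method applied to a monotonic slowness equation with one slowness component known, can be sketched as follows. The actual TTI quartic is not reproduced in the abstract, so an elliptical dispersion relation stands in for it; the velocities and horizontal slowness are hypothetical:

```python
import math

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method for a monotonic scalar equation f(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Stand-in dispersion relation (elliptical anisotropy):
# f(q) = vx^2 p^2 + vz^2 q^2 - 1, solved for vertical slowness q > 0
# with the horizontal slowness p known.
vx, vz, p = 3.0, 2.0, 0.2   # km/s, km/s, s/km (hypothetical values)
f  = lambda q: vx**2 * p**2 + vz**2 * q**2 - 1.0
df = lambda q: 2.0 * vz**2 * q

q = newton(f, df, x0=0.5)
q_exact = math.sqrt((1.0 - vx**2 * p**2) / vz**2)
print(q, q_exact)
```

Because f is monotonic in q on the relevant branch, the iteration converges rapidly from a simple starting guess, which is what makes a Newton-based local solver attractive inside a fast sweeping loop.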
Fractures and faults in the subsurface exert a significant impact on fluid flow and on engineering activities in that environment. Fracture modeling is a crucial technique that provides essential insights into the mechanisms underlying these impacts. The Discrete Fracture Network (DFN) method is often used to simulate fracture networks and to integrate fracture statistics into 3D numerical models. However, current DFN modeling technology suffers from low operational efficiency, particularly when handling a substantial number of fractures in 3D models. This paper proposes two methods to improve the efficiency and accuracy of fracture modeling: a matrix-based random sampling method (for faster generation of fracture locations) and a quaternion method (for a more accurate description of fracture orientations). These approaches simplify the management of large numbers of fractures within 3D models. The paper provides a comprehensive description of the proposed methods, accompanied by pseudo-code for the algorithms. The effectiveness of the approach is validated through a practical case study, demonstrating superior computational efficiency and enhanced applicability for large-scale fracture modeling.
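A minimal sketch of the two ingredients named above, vectorized (matrix-based) sampling of fracture centres and a quaternion rotation for fracture orientation, is given below. The box dimensions, fracture count, and dip angle are hypothetical; the paper's actual algorithms are given there as pseudo-code:

```python
import numpy as np

rng = np.random.default_rng(42)

# Matrix-based sampling: draw all fracture centres in one vectorised call
# instead of one-at-a-time in a loop (box extents are hypothetical).
n = 10_000
centres = rng.uniform(low=[0, 0, -500], high=[1000, 1000, 0], size=(n, 3))

def quat_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation about `axis` by `angle`."""
    axis = np.asarray(axis, float)
    axis /= np.linalg.norm(axis)
    return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, u = q[0], q[1:]
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

# Orient a fracture: tilt the vertical normal by the dip angle about the
# strike direction (here strike = y-axis, dip = 30 degrees).
dip = np.radians(30.0)
q = quat_from_axis_angle([0, 1, 0], dip)
normal = quat_rotate(q, np.array([0.0, 0.0, 1.0]))
print(normal)
```

Quaternions avoid the gimbal-lock and composition problems of Euler angles when many fracture orientations must be stored and combined, which is one plausible motivation for the quaternion representation.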
During visualization, format exchange, and spatial analysis, a 3D geological model tends to emphasize its geometric features, which diminishes its geological significance to some extent. However, extracting geological elements directly from a model based solely on the geometric features of geologic bodies is difficult, and few studies have addressed this problem. This research aims to extract geological elements from existing geological models under the constraints of geological knowledge, enhancing the reusability of existing models and the efficacy of their application in subsequent research. First, each stratum is assigned its geological significance under the constraints of geological knowledge. The study then introduces extraction methods for the topographic interface, the eroded interface, the stratigraphic top and bottom interfaces, and various constraint boundaries. Furthermore, the potential importance of these methods and their application scenarios are analyzed and explored. Finally, the feasibility and effectiveness of the geological-element extraction method are validated through a case study. This method holds significant scientific value for efficiently updating geological models and conducting fine-grained application analyses. Additionally, this research provides insights that improve the efficiency of model updating, property-model construction, and the splicing of block models across extensive areas.
CO2 injection is a highly effective technique for enhancing oil recovery, applied through continuous or alternating injection. However, the intricate interactions between phases within porous media pose significant challenges for predicting the performance of CO2 injection. Addressing this requires compositional simulation, which accounts for multiphase, multicomponent transport. Nonetheless, conventional multiphase flash calculations can be computationally inefficient for large-scale reservoir simulations. It is therefore necessary to accelerate Equation-of-State (EoS)-based compositional simulation, given the widespread use of CO2 enhanced oil recovery (CO2-EOR) in recent years. The phase-state identification bypass method has proven superior to other methods in terms of efficiency; however, it struggles in regions near phase boundaries, where its computational efficiency is reduced.
In this study, an enhanced phase-state identification bypass approach is developed to address this limitation. The first step discretizes the pressure-temperature space using rectangular grids. In addition, the tie-simplexes, which represent regions defined by the maximum number of phases formed by the fluid under consideration, are discretized in the phase-fraction space at the pressure and temperature of each discretization node. Subsequently, the discretization grid associated with a given point (overall composition, pressure, and temperature) is located, and the phase states of the grid nodes are determined using the conventional multiphase flash method. If all nodes exhibit the same phase state, that phase state is assigned to the given point. If multiple phase states are obtained, a novel procedure is proposed to determine the phase state of the given point. To validate this improvement to the phase-state identification bypass method, phase-diagram calculations and simulation cases are conducted; the results demonstrate the robustness of the proposed method and its superior computational efficiency compared with the previous method.
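The grid-lookup logic at the heart of a bypass scheme can be sketched as follows. A toy linear "bubble line" stands in for a full EoS flash, and all pressures, temperatures, and grid resolutions are hypothetical; the point is only to show when the flash is skipped and when it must still run:

```python
import numpy as np

# Toy "flash": single-phase liquid above a hypothetical bubble line,
# two-phase below (stand-in for a full EoS multiphase flash).
def flash_phase_state(p, t):
    return 1 if p > 0.05 * t + 2.0 else 2   # 1 = liquid, 2 = two-phase

# Step 1: discretise the pressure-temperature space with a rectangular
# grid and pre-compute the phase state at every node.
p_nodes = np.linspace(1.0, 20.0, 40)
t_nodes = np.linspace(250.0, 400.0, 40)
node_state = np.array([[flash_phase_state(p, t) for t in t_nodes]
                       for p in p_nodes])

def bypass_phase_state(p, t):
    """Return (state, bypassed): skip the flash when the four surrounding
    grid nodes agree; otherwise fall back to the full flash."""
    i = np.searchsorted(p_nodes, p) - 1
    j = np.searchsorted(t_nodes, t) - 1
    corners = node_state[i:i + 2, j:j + 2]
    if corners.min() == corners.max():
        return int(corners[0, 0]), True      # all nodes agree: bypass
    return flash_phase_state(p, t), False    # near a phase boundary

state, bypassed = bypass_phase_state(10.0, 300.0)
print(state, bypassed)
```

Cells straddling the bubble line have disagreeing corners, so points there still trigger a full flash; this is exactly the near-boundary region where the enhanced method of the study aims to do better than a plain fallback.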
Deep-learning-based surrogate models provide an efficient complement to numerical simulations for subsurface flow problems such as CO2 geological storage. Accurately capturing the impact of faults on CO2 plume migration remains a challenge for many existing deep learning surrogate models based on Convolutional Neural Networks (CNNs) or Neural Operators. We address this challenge with a graph-based neural model leveraging recent developments in the field of Graph Neural Networks (GNNs). Our model combines a graph convolutional Long Short-Term Memory network (GConvLSTM) with a one-step GNN model, MeshGraphNet (MGN), to operate on complex unstructured meshes and limit temporal error accumulation. We demonstrate that our approach accurately predicts the temporal evolution of gas saturation and pore pressure in a synthetic reservoir with impermeable faults. Our results exhibit better accuracy and reduced temporal error accumulation compared with the standard MGN model. We also show the excellent generalizability of our algorithm to mesh configurations, boundary conditions, and heterogeneous permeability fields not included in the training set. This work highlights the potential of GNN-based methods to accurately and rapidly model subsurface flow with complex faults and fractures.
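The reason GNNs handle unstructured meshes naturally is that they operate on explicit edge lists rather than regular pixel grids. A heavily simplified, NumPy-only sketch of one MeshGraphNet-style message-passing step is shown below; the tiny graph, feature width, and random weights are all hypothetical stand-ins for a trained model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny unstructured mesh as a graph: 5 cells, bidirectional edges
# (each row is a directed sender -> receiver pair).
edges = np.array([[0, 1], [1, 0], [1, 2], [2, 1], [2, 3],
                  [3, 2], [3, 4], [4, 3], [0, 4], [4, 0]])
node_feat = rng.normal(size=(5, 8))      # e.g. pressure, saturation, perm
W_msg = rng.normal(size=(8, 8)) * 0.1    # hypothetical "learned" weights
W_upd = rng.normal(size=(16, 8)) * 0.1

def message_passing_step(h, edges, W_msg, W_upd):
    """One simplified MGN-style update: messages flow along mesh edges,
    are summed at each receiver, then combined with the node state."""
    msg = np.tanh(h[edges[:, 0]] @ W_msg)    # one message per edge
    agg = np.zeros_like(h)
    np.add.at(agg, edges[:, 1], msg)         # sum messages at receivers
    return np.tanh(np.concatenate([h, agg], axis=1) @ W_upd)

h1 = message_passing_step(node_feat, edges, W_msg, W_upd)
print(h1.shape)
```

Because the update is defined per edge and per node, the same weights apply to any mesh connectivity, which is the property behind the generalization to unseen mesh configurations claimed above.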
In geological research, precise segmentation of sandstone thin sections is crucial for detailed subsurface material analysis. Traditional methods often fall short in accurately capturing the complexity of these samples: thresholding and level-set techniques excel at handling small objects and intricate boundaries but require significant manual intervention and cannot achieve full automation, while recent deep learning methods, particularly semantic segmentation, offer end-to-end automation but struggle with small targets and complex boundaries. This study presents a segmentation approach that integrates an adaptive Global and Local Fuzzy Image Fitting (GLFIF) algorithm with Otsu's thresholding, significantly enhancing segmentation accuracy and efficiency. Our method combines deep learning with traditional image processing: the adaptive GLFIF algorithm, driven by deep learning, automates parameter tuning, reducing manual intervention and improving precision. Unlike conventional methods that learn fixed parameters, our model dynamically adjusts the segmentation process to achieve accurate results. The dual-phase segmentation strategy effectively isolates small features and handles intricate boundaries, ensuring high-quality outcomes. Experimental results show that our approach improves segmentation accuracy by 11.2 percentage points (from 82.6% to 93.8%), the Jaccard index by 15.4 points (from 76.8% to 92.2%), and the Dice coefficient by 9.0 points (from 86.9% to 95.9%) compared with traditional methods. This technique bridges the gap between conventional image analysis and deep learning, combining precise segmentation with the automation and computational power of advanced algorithms, and represents a significant advancement in automated petrographic thin-section analysis.
Our approach effectively combines the strengths of both methodologies, providing a comprehensive and efficient solution for geological image analysis that ensures both high accuracy and full automation.
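The classical Otsu component of the pipeline above is simple enough to sketch in full. The implementation below picks the grey level that maximizes between-class variance; the synthetic bimodal "thin section" intensities (dark pores vs. bright grains) are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic bimodal "thin section": dark pores vs. bright grains
# (means, spreads, and pixel counts are hypothetical).
pores  = rng.normal(60, 10, size=5_000)
grains = rng.normal(180, 15, size=15_000)
img = np.clip(np.concatenate([pores, grains]), 0, 255).astype(np.uint8)

def otsu_threshold(img):
    """Return the grey level maximising between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # class-0 probability
    mu = np.cumsum(prob * np.arange(256))    # cumulative mean
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)         # zero out degenerate splits
    return int(np.argmax(sigma_b))

t = otsu_threshold(img)
print(t)   # a grey level between the two intensity modes
```

In the hybrid scheme described above, such a global threshold would provide a fast first pass, with the deep-learning-tuned GLFIF stage refining small features and intricate boundaries that a single global threshold cannot resolve.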