Pub Date : 2024-06-27 DOI: 10.1016/j.cageo.2024.105667
Zhenghai Xue , Xiaoyu Yi , Wenkai Feng , Linghao Kong , Mingtang Wu
Accurate measurements of soil thickness are crucial for assessing landslide susceptibility, slope stability, and soil conservation. However, research on the spatial distribution of soil thickness in areas with complex terrain, such as alpine canyon regions, is relatively scarce. To address this gap, this study develops a reliable method for predicting soil thickness in these regions. The Baihetan Reservoir area (China), a typical alpine canyon region, was selected as the research site. The slope index (SI) and slope (S) factors, together with other environmental factors, were used to predict soil thickness. A random forest (RF) model and an RF model optimized with the whale optimization algorithm (WOA-RF) were then used to model soil thickness. Compared with the other models, the WOA-RF model that includes the slope index factor performed best across 100 tests, achieving the highest coefficient of determination (R2 = 0.93) and the lowest root mean square error (RMSE = 5.6 m). Furthermore, the soil thickness predicted by the WOA-RF (SI) model agreed most closely with the soil thickness obtained from environmental noise measurements. Predicting soil thickness in alpine canyon regions by comprehensively considering environmental variables and using the WOA-RF model is therefore feasible, and the resulting soil thickness maps can serve as key fundamental inputs for further analysis.
Title: Prediction and mapping of soil thickness in alpine canyon regions based on whale optimization algorithm optimized random forest: A case study of Baihetan Reservoir area in China (Computers & Geosciences, vol. 191, Article 105667)
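The abstract does not spell out how the WOA couples to the RF model; in typical WOA-RF studies the whales' positions encode hyperparameters and the fitness is a cross-validation error. A minimal sketch of the WOA search itself, minimizing a toy objective that stands in for that fitness (all names and settings are illustrative, not from the paper):

```python
import numpy as np

def woa_minimize(f, dim, bounds, n_whales=20, n_iter=200, seed=0):
    """Minimal Whale Optimization Algorithm: encircling-prey, bubble-net
    spiral, and random-search moves, with the coefficient a shrinking
    linearly from 2 to 0 (mean |A| used as the explore/exploit switch)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_whales, dim))
    fit = np.apply_along_axis(f, 1, X)
    best, best_fit = X[fit.argmin()].copy(), fit.min()
    for t in range(n_iter):
        a = 2 - 2 * t / n_iter                    # 2 -> 0 over iterations
        for i in range(n_whales):
            A = 2 * a * rng.random(dim) - a
            C = 2 * rng.random(dim)
            if rng.random() < 0.5:
                if np.abs(A).mean() < 1:          # exploit: encircle best
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                             # explore: random whale
                    rand = X[rng.integers(n_whales)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                                 # bubble-net spiral
                l = rng.uniform(-1, 1)
                D = np.abs(best - X[i])
                X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
            fi = f(X[i])
            if fi < best_fit:
                best, best_fit = X[i].copy(), fi
    return best, best_fit

# Toy objective standing in for RF cross-validation error.
sphere = lambda x: float(np.sum(x ** 2))
best, best_fit = woa_minimize(sphere, dim=3, bounds=(-5, 5))
```

In an actual WOA-RF setup the three coordinates of `best` would be decoded into RF hyperparameters (e.g. number of trees, maximum depth) before each fitness evaluation.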
Pub Date : 2024-06-27 DOI: 10.1016/j.cageo.2024.105660
Bingluo Gu , Shanshan Zhang , Xingnong Liu , Jianguang Han
The fractional viscoacoustic/viscoelastic wave equation, which accurately quantifies frequency-independent anelastic effects, has been a focus of the seismic industry in recent years. The pseudo-spectral (PS) method is one of the most widely used numerical methods for solving the fractional wave equation. However, the PS method often suffers from low accuracy and efficiency, particularly when modeling wave propagation in heterogeneous media. To address these issues, we propose a novel and efficient fractional finite-difference (FD) method for solving the wave equation with fractional Laplacian operators. This method constructs an arbitrarily high-order FD operator via the generating function of our fractional FD (F-FD) scheme, enhancing accuracy with L2-optimal FD coefficients. Like classic FD methods, our F-FD method is characterized by straightforward programming and excellent 3D extensibility. It surpasses the PS method by eliminating the need for Fast Fourier Transform (FFT) and inverse-FFT (IFFT) operations at each time step, a significant benefit for 3D applications. Consequently, the F-FD method is well suited for wave-equation-based seismic data processing tasks such as imaging and inversion. Compared with existing F-FD methods, our approach uniquely approximates the entire fractional Laplacian operator and is a local numerical algorithm, with an F-FD operator order that can be adjusted to the model parameters for enhanced practicality. Accuracy analyses confirm that our method matches the precision of the PS method when the F-FD operator order is chosen appropriately. Numerical examples show that the proposed method applies well to complex models. Finally, we carried out reverse time migration on the Marmousi-2 model, and the imaging profiles indicate that the proposed method can be effectively applied to seismic imaging, demonstrating good practicability.
Title: Efficient modeling of fractional Laplacian viscoacoustic wave equation with fractional finite-difference method (Computers & Geosciences, vol. 191, Article 105660)
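For context, the pseudo-spectral step that the F-FD method is designed to avoid is a per-time-step FFT/IFFT pair wrapped around a |k|^alpha multiplication. A minimal 1-D periodic sketch of that PS baseline (not the authors' code), checked against the classical case alpha = 2:

```python
import numpy as np

def fractional_laplacian_1d(u, dx, alpha):
    """Pseudo-spectral evaluation of (-Laplacian)^(alpha/2) on a periodic
    grid: in the wavenumber domain the operator is a plain multiplication
    by |k|^alpha, which is why PS schemes need an FFT/IFFT per time step."""
    k = 2 * np.pi * np.fft.fftfreq(u.size, d=dx)   # angular wavenumbers
    return np.real(np.fft.ifft((np.abs(k) ** alpha) * np.fft.fft(u)))

# Sanity check: for alpha = 2, (-Laplacian) sin(x) = sin(x).
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
dx = x[1] - x[0]
err = np.max(np.abs(fractional_laplacian_1d(np.sin(x), dx, 2.0) - np.sin(x)))
```

The F-FD idea is to replace this global spectral multiplication with a local stencil whose coefficients approximate the same |k|^alpha response in an L2-optimal sense.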
Pub Date : 2024-06-25 DOI: 10.1016/j.cageo.2024.105659
Nasrin Tavakolizadeh , Hamzeh Mohammadigheymasi , Francesco Visini , Nuno Pombo
In regions experiencing ongoing aseismic deformation, fault Activity Rate (AR) calculations often overestimate hazard potential. This study proposes a novel methodology that integrates the Seismic Coupling Coefficient (SCC) into the fault Seismic Activity Rate (SAR) calculation to discriminate seismic moment rates. We introduce FaultQuake, an open-source Python tool equipped with a Graphical User Interface (GUI), designed to implement this methodology and accurately estimate SAR for faults. These activity rates can be included in Probabilistic Seismic Hazard Assessment (PSHA) frameworks and help differentiate seismic from aseismic deformation. FaultQuake also embeds an innovative workflow, the Optimal Value Computation Workflow (OVCW), based on Conflation of Probabilities (CoP), for calculating the maximum magnitude (Mmax) from the empirical relationships and the observed magnitudes (Mobs) assigned to a single fault. This enhancement improves the estimation of seismic moment rates and the SAR calculation. FaultQuake outputs are provided in the format of OpenQuake engine input files to facilitate the PSHA process. We present a case study of PSHA in a region of southern Iran characterized by substantial aseismic deformation to illustrate the practical application of FaultQuake in seismic hazard analysis. Peak Ground Acceleration (PGA) maps for 10% and 2% Probabilities of Exceedance (PoE) are plotted to compare PGAs with and without the FaultQuake algorithm. The results provide an enhanced view of the area's hazard that mitigates the overestimation, yielding more representative hazard maps. The source code of FaultQuake is available in the FaultQuake GitHub repository, contributing to the computer and geoscience community.
Title: FaultQuake: An open-source Python tool for estimating Seismic Activity Rates in faults (Computers & Geosciences, vol. 191, Article 105659)
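Conflation of Probabilities has a simple closed form: the normalized pointwise product of the input densities, which concentrates mass where all estimates agree. A numerical sketch with two invented magnitude estimates for one fault (the actual OVCW weighting inside FaultQuake may differ):

```python
import numpy as np

def conflate(pdfs, x):
    """Conflation of densities sampled on a uniform grid x: normalized
    pointwise product (Hill's conflation)."""
    dx = x[1] - x[0]
    prod = np.prod(pdfs, axis=0)
    return prod / (prod.sum() * dx)

x = np.linspace(0.0, 14.0, 4001)
gauss = lambda mu, s: np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Invented example: one Mmax estimate from an empirical scaling
# relationship, one from observed seismicity.
c = conflate(np.array([gauss(6.8, 0.3), gauss(7.2, 0.4)]), x)
mean = (x * c).sum() * (x[1] - x[0])
# For Gaussians, conflation reduces to the precision-weighted combination:
# (6.8/0.3**2 + 7.2/0.4**2) / (1/0.3**2 + 1/0.4**2) ≈ 6.944
```

The Gaussian special case gives a convenient check: the conflated mean must equal the precision-weighted mean of the inputs.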
Pub Date : 2024-06-25 DOI: 10.1016/j.cageo.2024.105661
Xiran You , Jifeng Zhang , Jiao Luo
Frequency–time conversion is a crucial step in calculating grounded electrical-source transient electromagnetic responses, and the performance of the conversion algorithm directly determines the overall accuracy and speed of forward modeling. Among mainstream algorithms, those with high accuracy are often slow, while efficient ones often lack accuracy; this trade-off is especially limiting for inversion problems. This paper introduces three inverse Laplace transform algorithms for this problem: the Gaver–Stehfest algorithm, the Euler algorithm, and the Talbot algorithm. The performance of each algorithm in forward modeling was analyzed using half-space and layered models, and optimal selections of the algorithm weight coefficients are provided. The numerical results show that the Gaver–Stehfest algorithm has a unique advantage in computational efficiency, while the Talbot and Euler algorithms meet the accuracy requirements. Considering both accuracy and efficiency, the Talbot algorithm is selected to replace conventional algorithms for grounded electrical-source transient electromagnetic forward modeling. In addition, this paper combines the characteristics of the Gaver–Stehfest and Talbot algorithms in an adaptive hybrid algorithm, which uses the Gaver–Stehfest algorithm at early times and the Talbot algorithm to compensate for the loss of accuracy at later times. Comparative forward-modeling calculations show that the proposed hybrid algorithm fully exploits the advantages of both: it greatly improves computational speed while meeting the accuracy requirements, and offers significant advantages over conventional algorithms.
Title: Fast forward modeling of grounded electrical-source transient electromagnetic based on inverse Laplace transform adaptive hybrid algorithm (Computers & Geosciences, vol. 191, Article 105661)
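The Gaver–Stehfest algorithm admits a compact implementation: f(t) is approximated as (ln 2 / t) times a weighted sum of F evaluated at real abscissae k ln 2 / t. A textbook-form sketch (not the authors' tuned version), verified against the known pair F(s) = 1/(s+1), f(t) = exp(-t):

```python
import math

def stehfest_coefficients(n):
    """Stehfest weights V_k for an even number of terms n."""
    V = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, n // 2) + 1):
            s += (j ** (n // 2) * math.factorial(2 * j)
                  / (math.factorial(n // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + n // 2) * s)
    return V

def gaver_stehfest(F, t, n=14):
    """Numerical inverse Laplace transform f(t), using only real-axis
    samples of F(s); fast, but accuracy degrades for oscillatory f."""
    ln2_t = math.log(2.0) / t
    V = stehfest_coefficients(n)
    return ln2_t * sum(Vk * F((k + 1) * ln2_t) for k, Vk in enumerate(V))

# F(s) = 1/(s + 1)  <=>  f(t) = exp(-t)
approx = gaver_stehfest(lambda s: 1.0 / (s + 1.0), t=1.0)
```

Because the weights alternate in sign and grow rapidly with n, double precision limits usable n to roughly 14 to 18, one reason schemes such as Talbot's contour method are preferred when higher late-time accuracy is needed.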
Pub Date : 2024-06-25 DOI: 10.1016/j.cageo.2024.105662
Carla Santana , Ramon C.F. Araújo , Idalmis Milian Sardina , Ítalo A.S. Assis , Tiago Barros , Calebe P. Bianchini , Antonio D. de S. Oliveira , João M. de Araújo , Hervé Chauris , Claude Tadonki , Samuel Xavier-de-Souza
Many geophysical imaging applications, such as full-waveform inversion, often rely on high-performance computing to meet their demanding computational requirements. The failure of a subset of computer nodes during the execution of such applications can have a significant impact, as it may take several days or even weeks to recover the lost computation. To mitigate the consequences of these failures, it is crucial to employ effective fault tolerance techniques that do not introduce substantial overhead or hinder code optimization efforts. This paper addresses the primary research challenge of developing fault tolerance techniques with minimal impact on execution and optimization. To achieve this, we propose DeLIA, a Dependability Library for Iterative Applications designed for parallel programs that require data synchronization among all processes to maintain a globally consistent state after each iteration. DeLIA efficiently performs checkpointing and rollback of both the application’s global state and each process’s local state. Furthermore, DeLIA incorporates interruption detection mechanisms. One of the key advantages of DeLIA is its flexibility, allowing users to configure various parameters such as checkpointing frequency, selection of data to be saved, and the specific fault tolerance techniques to be applied. To validate the effectiveness of DeLIA, we applied it to a 3D full-waveform inversion code and conducted experiments to measure its overhead under different configurations using two workload schedulers. We also analyzed its behavior in preemptive circumstances. Our experiments revealed a maximum overhead of 8.8%, and DeLIA demonstrated its capability to detect termination signals and save the state of nodes in preemptive scenarios. Overall, the results of our study demonstrate the suitability of DeLIA to provide fault tolerance for iterative parallel applications.
Title: DeLIA: A Dependability Library for Iterative Applications applied to parallel geophysical problems (Computers & Geosciences, vol. 191, Article 105662)
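DeLIA's internals are not shown in the abstract, but the core checkpoint-and-rollback idea for an iterative solver can be sketched in a few lines. The file name, checkpoint interval, and single-process design below are illustrative assumptions, not DeLIA's API:

```python
import os
import pickle
import tempfile

class Checkpointer:
    """Minimal checkpoint/rollback helper for an iterative loop.

    Writes atomically (temp file + rename) so that a crash mid-write
    cannot corrupt the last good checkpoint, a property any fault
    tolerance library must guarantee for each process's local state."""
    def __init__(self, path, every=5):
        self.path, self.every = path, every

    def save(self, iteration, state):
        if iteration % self.every:
            return                               # only every N-th iteration
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
        with os.fdopen(fd, "wb") as fh:
            pickle.dump({"iteration": iteration, "state": state}, fh)
        os.replace(tmp, self.path)               # atomic rename

    def restore(self):
        if not os.path.exists(self.path):
            return 0, None                       # fresh start
        with open(self.path, "rb") as fh:
            snap = pickle.load(fh)
        return snap["iteration"], snap["state"]

# Resumable loop: only iterations after the last snapshot are recomputed.
if os.path.exists("demo.ckpt"):
    os.remove("demo.ckpt")                       # deterministic demo run
ckpt = Checkpointer("demo.ckpt", every=5)
start, state = ckpt.restore()
state = state if state is not None else 0.0
for it in range(start, 20):
    state += 1.0                                 # stand-in for one FWI iteration
    ckpt.save(it + 1, state)
```

A real library additionally coordinates the globally consistent state across MPI ranks and hooks termination signals (e.g. SIGTERM from a preemptive scheduler) to trigger a final save.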
Passive surface-wave methods using dense seismic arrays have gained growing attention for near-surface high-resolution imaging in urban environments. Deep learning (DL) applied to dispersion-curve extraction and inversion can relieve the tremendous workload that dense seismic arrays bring. We present a case study of imaging the shear-wave velocity (Vs) structure and detecting a low-velocity layer (LVL) in the Hangzhou urban area (eastern China), using traffic-induced passive surface-wave data recorded by dense linear arrays. We extracted phase-velocity dispersion curves from the noise recordings using seismic interferometry and multichannel analysis of surface waves, and adopted a convolutional neural network to estimate near-surface Vs models by inverting Rayleigh-wave fundamental-mode phase velocities. To improve the accuracy of the inversion, we used the sensitivities to weight the loss function; the average root mean square error from the weighted inversion is 46% lower than that from the unweighted DL inversion. The estimated pseudo-2D Vs profiles agree with the velocities obtained from downhole seismic measurements. Compared with an earlier investigation of the same survey area, our inversion results are more consistent with the downhole Vs in the 50–60 m depth range where the LVL exists, and the trained neural network successfully located the LVL at 50–60 m depth. To check its applicability, we applied the trained network to a nearby passive surface-wave survey, and the inversion results agree with the existing investigation results. The two applications demonstrate the accuracy and efficiency of delineating near-surface Vs structures containing an LVL from traffic-induced noise using the DL technique. The DL inversion has great potential for monitoring subsurface medium changes in urban areas.
Pub Date : 2024-06-21 DOI: 10.1016/j.cageo.2024.105663
Xinhua Chen, Jianghai Xia, Jingyin Pang, Changjiang Zhou
Title: Detection of the low-velocity layer using a convolutional neural network on passive surface-wave data: An application in Hangzhou, China (Computers & Geosciences, vol. 190, Article 105663)
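The sensitivity weighting of the loss can be illustrated with a plain weighted mean-squared error. The layer count, velocities, and exponential decay of sensitivity with depth below are invented for illustration, not taken from the study:

```python
import numpy as np

def weighted_mse(pred, true, sensitivity):
    """MSE with per-layer weights proportional to dispersion-curve
    sensitivity, so depths the data constrain well dominate training."""
    w = sensitivity / sensitivity.sum()
    return float(np.sum(w * (pred - true) ** 2))

# Hypothetical 6-layer model: Rayleigh-wave phase velocities are most
# sensitive to shallow Vs, so the assumed weights decay with depth.
sens = np.exp(-np.arange(6) / 2.0)
loss = weighted_mse(np.array([200.0, 250.0, 300.0, 350.0, 400.0, 450.0]),
                    np.array([210.0, 240.0, 310.0, 340.0, 420.0, 430.0]),
                    sens)
```

In the paper the analogous weights come from actual Rayleigh-wave sensitivity kernels and are applied inside the network's training loss rather than to a single residual vector.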
Pub Date : 2024-06-21 DOI: 10.1016/j.cageo.2024.105664
Dongyu Zheng , Li Hou , Xiumian Hu , Mingcai Hou , Kai Dong , Sihai Hu , Runlin Teng , Chao Ma
Accurately identifying grain types in thin sections of sandy sediments or sandstones is crucial for understanding their provenance, depositional environments, and potential as natural resources. Although traditional computer vision methods and machine learning algorithms have been used for automatic grain identification, recent advancements in deep learning techniques have opened up new possibilities for achieving more reliable results with less manual labor. In this study, we present Trans-SedNet, a state-of-the-art dual-modal Vision-Transformer (ViT) model that uses both cross-polarized (XPL) and plane-polarized light (PPL) images to achieve semantic segmentation of thin-section images. Our model classifies a total of ten grain types, including subtypes of quartz, feldspar, and lithic fragments, to emulate the manual identification process in sedimentary petrology. To optimize performance, we use SegFormer as the model backbone and add window- and mix-attention to the encoder to identify local information in the images and to make the best use of the XPL and PPL images. We also use a combination of focal and dice loss and a smoothing procedure to address class imbalances and reduce over-segmentation. Our comparative analysis of several deep convolutional neural networks and ViT models, including FCN, U-Net, DeepLabV3Plus, SegNeXT, and CMX, shows that Trans-SedNet outperforms the other models with a significant increase in the evaluation metrics mIoU and mPA. We also conduct an experiment to test the models' ability to handle dual-modal information, which reveals that the dual-modal models, including Trans-SedNet, achieve better results than single-modal models given the extra input of PPL images. Our study demonstrates the potential of ViT models in semantic segmentation of thin-section images and highlights the importance of dual-modal models for handling complex input in various geoscience disciplines.
With improved data quality and quantity, our model has the potential to enhance the efficiency and reliability of grain identification in sedimentary petrology and related fields.
Title: Sediment grain segmentation in thin-section images using dual-modal Vision Transformer (Computers & Geosciences, vol. 191, Article 105664)
Pub Date : 2024-06-18DOI: 10.1016/j.cageo.2024.105656
Paul Joseph Namongo Soro , Juliette Lamarche , Sophie Viseur , Pascal Richard , Fateh Messaadi
In Naturally Fractured Reservoirs (NFR), the arrangement of diffuse fractures results from mechanical stratigraphy and tectonic history during failure. Thus, modelling a Discrete Fracture Network (DFN) requires understanding and accounting for fracture relationships at bed interfaces (abutment or crosscutting) in 3D through time (loading path). However, meaningfully sampling fracture data in the subsurface has always been a challenge for geologists due to data scarcity.
To better understand and forecast fracture networks in stratified rocks, we study outcrops with a focus on geometric relationships between stratigraphic interfaces and fractures. This paper presents an original python toolbox called FracAbut. It is composed of one main and two auxiliary codes that quantify the geometric relation between fractures and stratigraphic interfaces from 1D (wells, scan-lines) and 2D (digital images, photographs) data. We calculate the Interface Impedance (II), which accounts for fracture abutment (crossing or not), persistence (single- or multi-bed) and propagation polarity (upward or downward). For each stratigraphic interface, FracAbut provides information on fractures (type, number) and interface sensitivity (coupling strength).
First, we apply FracAbut to synthetic case studies and then to naturally fractured and stratified carbonates in Berat, Albania. Using both a 1D scan-line and a 2D outcrop photograph, we show that i) a mechanical interface can have different coupling above and below depending on propagation polarity; ii) FracAbut results give useful insight into fracture transmissivity; iii) FracAbut quantifies fracture patterns and classifies mechanical interface impact quickly and efficiently; and iv) there is no relation between bed thickness and fracture propagation.
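The abstract does not spell out how the Interface Impedance is computed. The sketch below is a hypothetical stand-in showing how fractures that cross or abut a stratigraphic interface might be tallied into a simple impedance score; the function name, inputs, and formula are assumptions for illustration, not FracAbut's actual API:

```python
def interface_impedance(fractures, interface_z, tol=1e-6):
    """Score an interface as the fraction of interacting fractures
    that abut (terminate at) it rather than cross it.

    Each fracture is a (z_top, z_bottom) vertical extent in metres;
    this formula is a hypothetical stand-in for FracAbut's II metric.
    """
    crossing, abutting = 0, 0
    for z_top, z_bottom in fractures:
        z_hi, z_lo = max(z_top, z_bottom), min(z_top, z_bottom)
        if z_hi > interface_z + tol and z_lo < interface_z - tol:
            crossing += 1   # fracture cuts through the interface
        elif abs(z_hi - interface_z) <= tol or abs(z_lo - interface_z) <= tol:
            abutting += 1   # fracture stops at the interface
    total = crossing + abutting
    return abutting / total if total else float("nan")

# Three fractures around an interface at z = 10 m: one crosses, two abut,
# so two thirds of the interacting fractures are stopped by the interface.
ii = interface_impedance([(12, 8), (10, 6), (14, 10)], interface_z=10.0)
```

A strongly coupled (high-impedance) interface would score near 1, a transparent one near 0; the real toolbox additionally tracks persistence and propagation polarity.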
{"title":"FracAbut: A python toolbox for computing fracture stratigraphy using interface impedance","authors":"Paul Joseph Namongo Soro , Juliette Lamarche , Sophie Viseur , Pascal Richard , Fateh Messaadi","doi":"10.1016/j.cageo.2024.105656","DOIUrl":"https://doi.org/10.1016/j.cageo.2024.105656","url":null,"abstract":"<div><p>In Naturally Fractured Reservoirs (NFR) diffuse fractures arrangement results from mechanical stratigraphy and tectonic history during failure. Thus, modelling Discrete Fracture Network (DFN) requires to understand and to account for fracture relationships at bed-interface (abutment or crosscutting) in 3D through time (loading path). However, sampling fractures data meaningfully in subsurface has always been a challenge for geologist due to data scarcity.</p><p>To better understand and forecast fracture networks in stratified rocks, we study outcrops with a focus on geometric relationships between stratigraphic interfaces and fractures. This paper presents an original python toolbox called FracAbut. It is composed of 1 main and 2 auxiliary codes that quantify the geometric relation between fractures and stratigraphic interfaces from 1D (wells, scan-line) and 2D (digital image, photographs data). We calculate the Interface Impedance (<em>II</em>) that accounts for fracture abutment (crossing or not), persistence (single- or multi-bed) and propagation polarity (upward or downward). For each stratigraphic interface FracAbut provides information on fractures (type, number) and interface sensitivity (coupling strength).</p><p>First, we apply FracAbut on synthetic case studies, then, on naturally fractured and stratified carbonates in Berat, Albania. 
Using both 1D scan-line and 2D outcrop photograph, we show that i) a mechanical interface can have different coupling above and below based on propagation polarity, ii) FracAbut results can give useful insight on fracture transmissivity, iii) FracAbut is fast and efficient to quantify fracture patterns and classify mechanical interface impact; iv) they are no relation between bed thickness and fracture propagation.</p></div>","PeriodicalId":55221,"journal":{"name":"Computers & Geosciences","volume":"190 ","pages":"Article 105656"},"PeriodicalIF":4.4,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141424358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Machine learning (ML) and deep learning (DL) techniques have recently shown encouraging performance in recognizing metal-vectoring geochemical anomalies within complex Earth systems. However, the generalization of these techniques to detect subtle anomalies may be precluded due to overlooking non-stationary spatial structures and intra-pattern local dependencies contained in geochemical exploration data. Motivated by this, we conceptualize in this paper an innovative algorithm connecting a DL architecture to a spatial ML processor to account for local neighborhood information and spatial non-stationarities in support of spatially aware anomaly detection. A deep autoencoder network (DAN) is trained to abstract deep feature codings (DFCs) of multi-element input data. The encoded DFCs represent the typical performance of a nonlinear Earth system, i.e., multi-element signatures of geochemical background populations developed by different geo-processes. A local version of the random forest algorithm, geographical random forest (GRF), is then connected to the input and code layers of the DAN processor to establish nonlinear and spatially aware regressions between original geochemical signals (dependent variables) and DFCs (independent variables). After the contributions of the latter to the former are determined, residuals of GRF regressions are quantified and interpreted as spatially aware anomaly scores related to mineralization. The proposed algorithm (i.e., DAN‒GRF) is implemented in the R language environment and examined in a case study with stream sediment geochemical data pertaining to the Takht-e-Soleyman district, Iran. The high-scored anomalies mapped by DAN‒GRF, compared to those by the stand-alone DAN technique, indicated a stronger spatial correlation with locations of known metal occurrences, which was statistically confirmed by success-rate curves, Student's t‒statistic method, and prediction-area plots.
The findings suggested that the proposed algorithm has an enhanced capability to recognize subtle multi-element geochemical anomalies and extract reliable insights into metal exploration targeting.
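The core idea of scoring anomalies as residuals of a regression from feature codings back to the original signals can be sketched with synthetic data. In the sketch below, random vectors stand in for the autoencoder's deep feature codings and ordinary least squares stands in for the geographical random forest, so only the residual-scoring logic (not the paper's actual models) is illustrated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: columns of `dfc` play the role of deep feature
# codings (DFCs); `element` is one geochemical variable generated from
# them, with five injected "anomalous" samples at the start.
n = 300
dfc = rng.normal(size=(n, 3))
element = dfc @ np.array([1.0, -0.5, 0.3]) + rng.normal(0.0, 0.1, size=n)
element[:5] += 4.0                      # mineralization-like departures

# Regress the original signal on the codings (OLS replaces the GRF here);
# the unexplained part is interpreted as the anomaly score.
X = np.column_stack([np.ones(n), dfc])  # intercept + DFCs
beta, *_ = np.linalg.lstsq(X, element, rcond=None)
residuals = np.abs(element - X @ beta)  # anomaly scores

top5 = set(np.argsort(residuals)[-5:])  # highest-scoring samples
```

Because the codings capture only the background relationship, the five injected samples receive the largest residuals, which is exactly the behavior the residual-based anomaly score relies on.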
{"title":"A deep autoencoder network connected to geographical random forest for spatially aware geochemical anomaly detection","authors":"Zeinab Soltani , Hossein Hassani , Saeid Esmaeiloghli","doi":"10.1016/j.cageo.2024.105657","DOIUrl":"https://doi.org/10.1016/j.cageo.2024.105657","url":null,"abstract":"<div><p>Machine learning (ML) and deep learning (DL) techniques have recently shown encouraging performance in recognizing metal-vectoring geochemical anomalies within complex Earth systems. However, the generalization of these techniques to detect subtle anomalies may be precluded due to overlooking non-stationary spatial structures and intra-pattern local dependencies contained in geochemical exploration data. Motivated by this, we conceptualize in this paper an innovative algorithm connecting a DL architecture to a spatial ML processor to account for local neighborhood information and spatial non-stationarities in support of spatially aware anomaly detection. A deep autoencoder network (DAN) is trained to abstract deep feature codings (DFCs) of multi-element input data. The encoded DFCs represent the typical performance of a nonlinear Earth system, i.e., multi-element signatures of geochemical background populations developed by different geo-processes. A local version of the random forest algorithm, geographical random forest (GRF), is then connected to the input and code layers of the DAN processor to establish nonlinear and spatially aware regressions between original geochemical signals (dependent variables) and DFCs (independent variables). After contributions of the latter on the former are determined, residuals of GRF regressions are quantified and interpreted as spatially aware anomaly scores related to mineralization. The proposed algorithm (i.e., DAN‒GRF) is implemented in the R language environment and examined in a case study with stream sediment geochemical data pertaining to the Takht-e-Soleyman district, Iran. 
The high-scored anomalies mapped by DAN‒GRF, compared to those by the stand-alone DAN technique, indicated a stronger spatial correlation with locations of known metal occurrences, which was statistically confirmed by success-rate curves, Student's <span><math><mrow><mi>t</mi></mrow></math></span>‒statistic method, and prediction-area plots. The findings suggested that the proposed algorithm has an enhanced capability to recognize subtle multi-element geochemical anomalies and extract reliable insights into metal exploration targeting.</p></div>","PeriodicalId":55221,"journal":{"name":"Computers & Geosciences","volume":"190 ","pages":"Article 105657"},"PeriodicalIF":4.2,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141433891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-06-13DOI: 10.1016/j.cageo.2024.105655
Yuchen Hu , Xingxiang Jiang , Changqing Zhu , Na Ren , Shuitao Guo , Jia Duan , Luanyun Hu
Digital watermarking technology plays a crucial role in securing trajectory data. However, as trajectory data usage scenarios continue to expand, its security requirements have broadened from copyright protection alone to also encompassing data integrity. Existing digital watermarking algorithms for trajectory data can implement either copyright protection or integrity assurance, but not both simultaneously. This limitation impedes the sharing and utilization of trajectory data. To solve this problem, a dual watermarking algorithm that combines robust and fragile watermarking in the geometric domain is innovatively proposed. Firstly, a set of feature points is extracted from the trajectory, and the farthest point pair of the minimum convex hull of the feature points is set as fixed points. The robust watermark is then embedded, using quantization index modulation, in the angles constructed by the feature points and the fixed points. Meanwhile, the trajectory points are grouped based on the angle and distance ratio constructed from the trajectory points to the fixed points. In each group, the spatiotemporal attributes of the trajectory points are mapped to the fragile watermark, which is then embedded into the distance ratios constructed by the trajectory points. Experimental results show that the proposed algorithm achieves both copyright protection and integrity verification for trajectory data and exhibits stronger robustness and tampering localization ability.
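Quantization index modulation, used above to embed the robust watermark into angles, can be sketched as snapping a value onto one of two interleaved quantizer lattices chosen by the watermark bit. The step size `delta` below is an illustrative assumption, not the paper's parameter:

```python
def qim_embed(value, bit, delta=0.01):
    # Quantize onto the lattice {k*delta + bit*delta/2}: the chosen
    # lattice encodes the bit, and the value moves by at most delta/2.
    offset = (delta / 2.0) * bit
    return round((value - offset) / delta) * delta + offset

def qim_extract(value, delta=0.01):
    # Decode by checking which of the two lattices the value is nearer to.
    d0 = abs(value - round(value / delta) * delta)
    d1 = abs(value - (round((value - delta / 2.0) / delta) * delta + delta / 2.0))
    return 0 if d0 <= d1 else 1

# Embed one watermark bit into an angle (radians) built from a feature
# point and the fixed points, then recover it from the marked angle.
angle = 0.73214
marked = qim_embed(angle, 1)
recovered = qim_extract(marked)
```

The small, bounded distortion (at most half a quantization step) is what makes the scheme robust while keeping the marked geometry close to the original.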
{"title":"A dual watermarking algorithm for trajectory data based on robust watermarking and fragile watermarking","authors":"Yuchen Hu , Xingxiang Jiang , Changqing Zhu , Na Ren , Shuitao Guo , Jia Duan , Luanyun Hu","doi":"10.1016/j.cageo.2024.105655","DOIUrl":"10.1016/j.cageo.2024.105655","url":null,"abstract":"<div><p>Digital watermarking technology plays a crucial role in securing trajectory data. However, as trajectory data usage scenarios continue to expand, the security requirements for it have changed from a single copyright protection to one that takes into account data integrity. Existing digital watermarking algorithms for trajectory data can only choose between implementing copyright protection or ensuring integrity, unable to simultaneously achieve both functionalities. This limitation impedes the sharing and utilization of trajectory data. A dual watermarking algorithm that combines robust and fragile watermarking was innovatively proposed to solve this problem based on the geometric domain. Firstly, a set of feature points is extracted from the trajectory, and the farthest point pair of the minimum convex hull of the feature points is set as fixed points. The robust watermark is then embedded in the angles constructed by the feature points and the fixed points using quantization index modulation. Meanwhile, the trajectory points are grouped based on the angle and distance ratio constructed from the trajectory points to the fixed points. In each group, the spatiotemporal attributes of the trajectory points are mapped to the fragile watermark, which is then embedded into the distance ratios constructed by the trajectory points. Experimental results show that the proposed algorithm achieves both copyright protection and integrity verification for trajectory data and exhibits stronger robustness and tampering localization ability. 
This research can provide security and privacy protection for trajectory data and contribute positively to the application of trajectory data.</p></div>","PeriodicalId":55221,"journal":{"name":"Computers & Geosciences","volume":"191 ","pages":"Article 105655"},"PeriodicalIF":4.2,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141411656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}