In this investigation, the bearing capacity of a strip footing on anisotropic clay under inclined and eccentric loading is analyzed using numerical simulation. Lower and upper bound finite element limit analysis (FELA) is used to model the problem rigorously and bracket the footing's bearing capacity. All analyses employ automated adaptive meshing with three iteration stages to enhance the accuracy of the results. A parametric analysis examines the influence of four dimensionless parameters on the bearing capacity factor: the anisotropic strength ratio, the dimensionless eccentricity, the load inclination angle, and the adhesion factor. Furthermore, a new model is proposed to predict the bearing capacity factor for computing the undrained bearing capacity of footings resting on anisotropic clay, using an advanced data-driven method (MOGA-EPR). The new model accounts for the anisotropy, eccentricity, and inclination of the applied load and can be used with confidence in routine undrained design of shallow foundations that considers the anisotropic strength of clays.
{"title":"Developing soft-computing regression model for predicting bearing capacity of eccentrically loaded footings on anisotropic clay","authors":"Kongtawan Sangjinda , Rungkhun Banyong , Saif Alzabeebee , Suraparb Keawsawasvong","doi":"10.1016/j.aiig.2023.05.001","DOIUrl":"https://doi.org/10.1016/j.aiig.2023.05.001","url":null,"abstract":"<div><p>In this investigation, the bearing capacity solution of a strip footing in anisotropic clay under inclined and eccentric load is analyzed using the numerical simulation model. The lower and upper bound finite element limit analysis (FELA) approaches are utilized to establish precise modeling and derive the numerical outcomes of a strip footing's bearing capacity. All analyses use effective automated adaptive meshes with three iteration stages to enhance the accuracy of the outcomes. The parametric analysis is performed to examine the influence of four dimensionless parameters which are taken into account in this study, namely the anisotropic strength ratio, the dimensionless eccentricity, the load inclination angle, and the adhesion factor to the bearing capacity factor. Furthermore, a new model has been proposed to predict the bearing capacity factor for the calculation of the undrained bearing capacity for footings resting on an anisotropic clay using an advanced data-driven method (MOGA-EPR). The new model takes into account the anisotropy, eccentricity, and inclination of the applied load and could be used with confidence in routine designs of shallow foundations in undrained conditions with the consideration of the anisotropic strengths of clays.</p></div>","PeriodicalId":100124,"journal":{"name":"Artificial Intelligence in Geosciences","volume":"4 ","pages":"Pages 68-75"},"PeriodicalIF":0.0,"publicationDate":"2023-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49721515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-04-26 | DOI: 10.1016/j.aiig.2023.04.001
Priyadarshi Chinmoy Kumar, Kalachand Sain
A carbonate build-up, or reef, is a thick carbonate deposit consisting mainly of the skeletal remains of organisms, and it can be large enough to develop a favourable topography. Delineating such geologic features provides important input for understanding a basin's evolution and petroleum prospects. Here, we introduce a new attribute, the Reef Cube (RC) meta-attribute, computed by fusing several seismic attributes that are characteristic of reefs through a supervised machine-learning algorithm. The neural learning yielded minimum nRMS errors of 0.28 and 0.30 and misclassification percentages of 1.13% and 1.06% for the training and test data sets, respectively. The Reef Cube meta-attribute efficiently captures the anatomy of a carbonate reef buried ∼450 m below the seafloor in high-resolution 3D seismic data from the NW shelf of Australia. The approach not only picks out the subsurface architecture of the carbonate reef accurately but also accelerates interpretation with much-reduced intervention by human analysts, and it is well suited to delimiting any subsurface geologic feature from a large volume of surface seismic data.
{"title":"Machine learning elucidates the anatomy of buried carbonate reef from seismic reflection data","authors":"Priyadarshi Chinmoy Kumar , Kalachand Sain","doi":"10.1016/j.aiig.2023.04.001","DOIUrl":"https://doi.org/10.1016/j.aiig.2023.04.001","url":null,"abstract":"<div><p>A carbonate build-up or reef is a thick carbonate deposit consisting of mainly skeletal remains of organisms that can be large enough to develop a favourable topography. Delineation of such geologic features provides important input in understanding the basin's evolution and petroleum prospects. Here, we introduce a new attribute called the Reef Cube (RC) meta-attribute that has been computed by fusing several other seismic attributes that are characteristics of the reef through a supervised machine-learning algorithm. The neural learning resulted in a minimum nRMS error of 0.28 and 0.30 and a misclassification percentage of 1.13% and 1.06% for the train and test data sets. The Reef Cube meta-attribute has efficiently captured the anatomy of carbonate reef buried at ∼450 m below the seafloor from high-resolution 3D seismic data in the NW shelf of Australia. The novel approach not only picks up the subsurface architecture of the carbonate reef accurately but also accelerates the process of interpretation with a much-reduced intervention of human analysts. This can be efficiently suited for delimiting any subsurface geologic feature from a large volume of surface seismic data.</p></div>","PeriodicalId":100124,"journal":{"name":"Artificial Intelligence in Geosciences","volume":"4 ","pages":"Pages 59-67"},"PeriodicalIF":0.0,"publicationDate":"2023-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49709854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-04-05 | DOI: 10.1016/j.aiig.2023.03.002
Peter Mora, Gabriele Morra, David A. Yuen
Modern geodynamics is based on the study of large sets of models in which many parameters are varied, and analyzing these sets will increasingly require machine learning. We show here, for the first time, how a formulation of the Lattice Boltzmann Method capable of modeling plate tectonics, through the introduction of a plastic non-linear rheology, reproduces the breaking of the upper boundary layer of the convecting mantle into plates. Numerical simulation of the Earth's mantle and lithospheric plates is a challenging task for traditional methods of numerically solving partial differential equations (PDEs) because of the need to model sharp and large viscosity contrasts, temperature-dependent viscosity, and highly nonlinear rheologies. Nonlinear rheologies such as plastic or dislocation creep are important in giving mantle convection a past history. We present a thermal Lattice Boltzmann Method (LBM) as an alternative to PDE-based solutions for simulating time-dependent mantle dynamics, and demonstrate that the LBM is capable of modeling an extremely nonlinear plastic rheology. This nonlinear rheology leads to the emergence of plate-tectonic-like behavior and history from a two-layer viscosity model. These results demonstrate that the LBM offers a means to study the effect of highly nonlinear rheologies on the dynamics and evolution of the Earth and exoplanets.
{"title":"Models of plate tectonics with the Lattice Boltzmann Method","authors":"Peter Mora , Gabriele Morra , David A. Yuen","doi":"10.1016/j.aiig.2023.03.002","DOIUrl":"https://doi.org/10.1016/j.aiig.2023.03.002","url":null,"abstract":"<div><p>Modern geodynamics is based on the study of a large set of models, with the variation of many parameters, whose analysis in the future will require Machine Learning to be analyzed. We introduce here for the first time how a formulation of the Lattice Boltzmann Method capable of modeling plate tectonics, with the introduction of plastic non-linear rheology, is able to reproduce the breaking of the upper boundary layer of the convecting mantle in plates. Numerical simulation of the earth’s mantle and lithospheric plates is a challenging task for traditional methods of numerical solution to partial differential equations (PDE’s) due to the need to model sharp and large viscosity contrasts, temperature dependent viscosity and highly nonlinear rheologies. Nonlinear rheologies such as plastic or dislocation creep are important in giving mantle convection a past history. We present a thermal Lattice Boltzmann Method (LBM) as an alternative to PDE-based solutions for simulating time-dependent mantle dynamics, and demonstrate that the LBM is capable of modeling an extremely nonlinear plastic rheology. This nonlinear rheology leads to the emergence plate tectonic like behavior and history from a two layer viscosity model. These results demonstrate that the LBM offers a means to study the effect of highly nonlinear rheologies on earth and exoplanet dynamics and evolution.</p></div>","PeriodicalId":100124,"journal":{"name":"Artificial Intelligence in Geosciences","volume":"4 ","pages":"Pages 47-58"},"PeriodicalIF":0.0,"publicationDate":"2023-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49709977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-03-30 | DOI: 10.1016/j.aiig.2023.03.001
Fanchun Meng, Tao Ren, Zhenxian Liu, Zhida Zhong
Earthquake early warning (EEW) is one of the important tools for reducing earthquake hazard. In contemporary seismology, EEW is typically cast as a fast classification of earthquake magnitude: large-magnitude earthquakes that require warning form the positive category, while smaller events form the negative category. However, the current standard signal-processing routines for fast magnitude classification are time-consuming and vulnerable to data imbalance. Therefore, in this study, Deep Learning (DL) algorithms are introduced to assist EEW. For 7 s three-component seismic waveform records obtained from the China Earthquake Networks Center (CENC), this paper proposes a DL model (EEWMagNet) that extracts spatial and temporal features through DenseBlocks with Bottleneck and Multi-Head Attention. Extensive experiments on Chinese field data demonstrate that the proposed model performs well in fast magnitude classification. Moreover, comparison experiments show that the epicentral distance information is indispensable and that normalization degrades the model's ability to capture accurate amplitude information.
{"title":"Toward earthquake early warning: A convolutional neural network for repaid earthquake magnitude estimation","authors":"Fanchun Meng, Tao Ren, Zhenxian Liu, Zhida Zhong","doi":"10.1016/j.aiig.2023.03.001","DOIUrl":"https://doi.org/10.1016/j.aiig.2023.03.001","url":null,"abstract":"<div><p>Earthquake early warning (EEW) is one of the important tools to reduce the hazard of earthquakes. In contemporary seismology, EEW is typically transformed into a fast classification of earthquake magnitude, i.e., large magnitude earthquakes that require warning are in the positive category and vice versa in the negative category. However, the current standard information signal processing routines for magnitude fast classification are time-consuming and vulnerable to data imbalance. Therefore, in this study, Deep Learning (DL) algorithms are introduced to assist with EEW. For the three-component seismic waveform record of 7 s obtained from the China Earthquake Network Center (CENC), this paper proposes a DL model (EEWMagNet), which accomplishes the extraction of spatial and temporal features through DenseBlock with Bottleneck and Multi-Head Attention. Extensive experiments on Chinese field data demonstrate that the proposed model performs well in the fast classification of magnitude. Moreover, the comparison experiments demonstrate that the epicenter distance information is indispensable, and the normalization has a negative effect on the model to capture accurate amplitude information.</p></div>","PeriodicalId":100124,"journal":{"name":"Artificial Intelligence in Geosciences","volume":"4 ","pages":"Pages 39-46"},"PeriodicalIF":0.0,"publicationDate":"2023-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49709919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-02-17 | DOI: 10.1016/j.aiig.2023.02.002
Jianchao Lin, Jing Zheng, Dewei Li, Zhixiang Wu
Noise suppression is an important part of microseismic monitoring technology: separating signal from noise by denoising and filtering improves the subsequent analysis. In this paper, we propose a new denoising method based on the convolutional blind denoising network (CBDNet). The image-denoising network CBDNet is partially modified to make it suitable for one-dimensional data. Most existing filtering methods are designed for Gaussian white noise; in contrast, the proposed method also learns wind noise, construction noise, traffic noise, and mixed noise through a residual-learning strategy. A fully convolutional subnetwork is used to estimate the noise level, which significantly improves the signal-to-noise ratio and the removal of correlated noise. The model is trained with different types of real noise and random noise. The denoising results are evaluated with quantitative metrics and compared with other denoising methods. The results show that the proposed method has better denoising performance than traditional methods and a superior noise-suppression level for oil-well construction noise and mixed noise. The proposed method suppresses time-frequency-overlapped interference end to end and retains its noise-suppression and event-detection capability even when the signal is superimposed on other types of noise.
{"title":"Research on microseismic denoising method based on CBDNet","authors":"Jianchao Lin, Jing Zheng, Dewei Li, Zhixiang Wu","doi":"10.1016/j.aiig.2023.02.002","DOIUrl":"https://doi.org/10.1016/j.aiig.2023.02.002","url":null,"abstract":"<div><p>Noise suppression is an important part of microseismic monitoring technology. Signal and noise can be separated by denoising and filtering to improve the subsequent analysis. In this paper, we propose a new denoising method based on convolutional blind denoising network (CBDNet). The method is partially modified for image denoising network CBDNet to make it suitable for one–dimensional data denoising. At present, most of the existing filtering methods are proposed for the Gaussian white noise denoising. In contrast, the proposed method also learns the wind noise, construction noise, traffic noise and mixed noise through the strategy of residual learning. The full convolution subnetwork is used to estimate the noise level, which significantly improves the signal-to-noise ratio and its performance of removing the correlated noise. The model is trained with different types of real noise and random noise. The denoising result is evaluated by corresponding indexes and compared with other denoising methods. The results show that the proposed method has better denoising performance than traditional methods, and it has a superior noise suppression level for oil well construction noise and mixed noise. The proposed method can suppress the interference of time–frequency overlapped end to end and still have noise suppression and event detection capability even if the signal is superimposed on other types of noise.</p></div>","PeriodicalId":100124,"journal":{"name":"Artificial Intelligence in Geosciences","volume":"4 ","pages":"Pages 28-38"},"PeriodicalIF":0.0,"publicationDate":"2023-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49709789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-02-10 | DOI: 10.1016/j.aiig.2023.01.003
Yifeng Fei, Hanpeng Cai, Junhui Yang, Jiandong Liang, Guangmin Hu
Seismic facies analysis plays an important role in geological research, especially in identifying sedimentary environments. Traditional methods classify seismic facies mainly from the waveform or attributes of a single seismic gather; ignoring the correlation between adjacent gathers leads to poor lateral continuity in the generated facies map, which does not fit the sedimentary characteristics well. In fact, according to sedimentology theory, the horizontal continuity of the strata can be used as a priori information to support waveform classification. We therefore develop an unsupervised method for pre-stack seismic facies analysis that is constrained by spatial continuity. The proposed method establishes a probabilistic model to characterize the correlation between neighboring reflection elements. This correlation is then used as a regularization term that modifies the objective function of the clustering algorithm, allowing the mode assignment of reflection elements to be influenced by the labels of their neighbors. Tests on synthetic data confirm that, compared with traditional seismic facies analysis methods, the facies maps generated by the proposed method have more continuous and homogeneous textures and less uncertainty at the boundaries. Tests on actual seismic data further confirm that the proposed method can describe more details of the distribution of lithological bodies of interest. The proposed method is an effective tool for pre-stack seismic facies analysis.
{"title":"Unsupervised pre-stack seismic facies analysis constrained by spatial continuity","authors":"Yifeng Fei, Hanpeng Cai, Junhui Yang, Jiandong Liang, Guangmin Hu","doi":"10.1016/j.aiig.2023.01.003","DOIUrl":"https://doi.org/10.1016/j.aiig.2023.01.003","url":null,"abstract":"<div><p>Seismic facies analysis plays important roles in geological research, especially in sedimentary environment identification. Traditional method is mainly based on seismic waveform or attributes of a single seismic gather to classify the seismic facies. Ignoring the correlation between adjacent seismic gathers leads to poor lateral continuities in generated facies map, which cannot fit the sedimentary characteristics well. In fact, according to sedimentology theory, the horizontal continuities of the stratum can be utilized as priori information to provide more information for waveform classification. Therefore, we develop an unsupervised method for pre-stack seismic facies analysis, which is constrained by spatial continuity. The proposed method establishes a probabilistic model to characterize the correlation between neighboring reflection elements. Subsequently, this correlation is used as a regularization term to modify the objective function of the clustering algorithm, allowing the mode assignment of reflective elements to be influenced by the labels of their neighbors. Test on synthetic data confirms that, compared with traditional seismic facies analysis methods, the facies maps generated by the proposed method have more continuous and homogeneous textures, and less uncertainty on the boundary. The test on actual seismic data further confirms that the proposed method can describe more details of the distribution of lithological bodies of interest. The proposed method is an effective tool for pre-stack seismic facies analysis.</p></div>","PeriodicalId":100124,"journal":{"name":"Artificial Intelligence in Geosciences","volume":"4 ","pages":"Pages 22-27"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49709961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Seismic inversion, such as velocity and impedance inversion, is an ill-posed problem. To solve it, swarm intelligence (SI) algorithms such as differential evolution (DE) and particle swarm optimization (PSO) have been increasingly applied as global optimization approaches. Based on well logs, the sparse probability distribution (PD) of the reflectivity is spatially stationary. We therefore propose a general SI scheme constrained by the a priori sparse distribution of the reflectivity, which helps to provide more accurate candidate solutions for the seismic inversion. In the proposed scheme, two key operations, the creation of a probability density function library and a probability transformation, are inserted into standard SI algorithms; in particular, two targeted algorithms, DE-PD and PSO-PD, are implemented. A numerical example on the Marmousi2 model and a field example on gas hydrates show that DE-PD and PSO-PD provide better inversion solutions than the original DE and PSO. In particular, DE-PD is the best performer in terms of both the mean error and the fitness value of the velocity and impedance inversions. Overall, the proposed SI scheme with a sparse distribution constraint is feasible and effective for seismic inversion.
{"title":"Seismic swarm intelligence inversion with sparse probability distribution of reflectivity","authors":"Zhiguo Wang , Bing Zhang , Zhaoqi Gao , Jinghuai Gao","doi":"10.1016/j.aiig.2023.02.001","DOIUrl":"https://doi.org/10.1016/j.aiig.2023.02.001","url":null,"abstract":"<div><p>Seismic inversion, such as velocity and impedance, is an ill-posed problem. To solve this problem, swarm intelligence (SI) algorithms have been increasingly applied as the global optimization approach, such as differential evolution (DE) and particle swarm optimization (PSO). Based on the well logs, the sparse probability distribution (PD) of the reflectivity distribution is spatial stationarity. Therefore, we proposed a general SI scheme with constrained by a priori sparse distribution of the reflectivity, which helps to provide more accurate potential solutions for the seismic inversion. In the proposed scheme, as two key operations, the creating of probability density function library and probability transformation are inserted into standard SI algorithms. In particular, two targeted DE-PD and PSO-PD algorithms are implemented. Numerical example of Marmousi2 model and field example of gas hydrates show that the DE-PD and PSO-PD estimate better inversion solutions than the results of the original DE and PSO. In particular, the DE-PD is the best performer both in terms of mean error and fitness value of velocity and impendence inversion. Overall, the proposed SI with sparse distribution scheme is feasible and effective for seismic inversion.</p></div>","PeriodicalId":100124,"journal":{"name":"Artificial Intelligence in Geosciences","volume":"4 ","pages":"Pages 1-8"},"PeriodicalIF":0.0,"publicationDate":"2023-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49721383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-02-06 | DOI: 10.1016/j.aiig.2023.01.005
Steven E. Zhang, Glen T. Nwaila, Julie E. Bourdeau, Yousef Ghorbani, Emmanuel John M. Carranza
Remote sensing data is a cheap form of surficial geoscientific data, and in terms of veracity, velocity, and volume it can sometimes be considered big data. Its spatial and spectral resolution continues to improve, and some modern satellites, such as the Copernicus Programme's Sentinel-2 remote sensing satellites, offer a spatial resolution of 10 m across many of their spectral bands. The abundance and quality of remote sensing data, combined with accumulated primary geochemical data, provide an unprecedented opportunity to inferentially invert remote sensing data into geochemical data. The ability to derive geochemical data from remote sensing data would provide a form of secondary big geochemical data that can be used for numerous downstream activities, particularly where data timeliness, volume, and velocity are important. Major beneficiaries of secondary geochemical data would be environmental monitoring and applications of artificial intelligence and machine learning in geochemistry, which currently rely entirely on manually derived data that is primarily guided by scientific reduction. Furthermore, it permits well-established geochemical data analysis techniques to be applied to remote sensing data, allowing usable insights to be extracted beyond those typically associated with strictly remote sensing data analysis. Currently, no generally applicable and systematic method for deriving chemical element concentrations from large-scale remote sensing data has been documented in the geosciences. In this paper, we demonstrate that fusing geostatistically augmented geochemical and remote sensing data produces an abundance of data that enables more generalized machine learning-based geochemical data generation. We use gold grade data from a South African tailing storage facility (TSF) and data from both the Landsat-8 and Sentinel remote sensing satellites. We show that various machine learning algorithms can be used given the abundance of training data. Consequently, we are able to produce a high-resolution (10 m grid size) gold concentration map of the TSF, which demonstrates the potential of our method to guide extraction planning, online resource exploration, environmental monitoring, and resource estimation.
{"title":"Deriving big geochemical data from high-resolution remote sensing data via machine learning: Application to a tailing storage facility in the Witwatersrand goldfields","authors":"Steven E. Zhang , Glen T. Nwaila , Julie E. Bourdeau , Yousef Ghorbani , Emmanuel John M. Carranza","doi":"10.1016/j.aiig.2023.01.005","DOIUrl":"https://doi.org/10.1016/j.aiig.2023.01.005","url":null,"abstract":"<div><p>Remote sensing data is a cheap form of surficial geoscientific data, and in terms of veracity, velocity and volume, can sometimes be considered big data. Its spatial and spectral resolution continues to improve over time, and some modern satellites, such as the Copernicus Programme's Sentinel-2 remote sensing satellites, offer a spatial resolution of 10 m across many of their spectral bands. The abundance and quality of remote sensing data combined with accumulated primary geochemical data has provided an unprecedented opportunity to inferentially invert remote sensing data into geochemical data. The ability to derive geochemical data from remote sensing data would provide a form of secondary big geochemical data, which can be used for numerous downstream activities, particularly where data timeliness, volume and velocity are important. Major benefactors of secondary geochemical data would be environmental monitoring and applications of artificial intelligence and machine learning in geochemistry, which currently entirely relies on manually derived data that is primarily guided by scientific reduction. Furthermore, it permits the usage of well-established data analysis techniques from geochemistry to remote sensing that allows useable insights to be extracted beyond those typically associated with strictly remote sensing data analysis. Currently, no generally applicable and systematic method to derive chemical elemental concentrations from large-scale remote sensing data have been documented in geosciences. In this paper, we demonstrate that fusing geostatistically-augmented geochemical and remote sensing data produces an abundance of data that enables a more generalized machine learning-based geochemical data generation. We use gold grade data from a South African tailing storage facility (TSF) and data from both the Landsat-8 and Sentinel remote sensing satellites. We show that various machine learning algorithms can be used given the abundance of training data. Consequently, we are able to produce a high resolution (10 m grid size) gold concentration map of the TSF, which demonstrates the potential of our method to be used to guide extraction planning, online resource exploration, environmental monitoring and resource estimation.</p></div>","PeriodicalId":100124,"journal":{"name":"Artificial Intelligence in Geosciences","volume":"4 ","pages":"Pages 9-21"},"PeriodicalIF":0.0,"publicationDate":"2023-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49721386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-01 | DOI: 10.1016/j.aiig.2022.12.004
Ting Chen, Yaojun Wang, Hanpeng Cai, Gang Yu, Guangmin Hu
We propose a Few-Shot Learning (FSL) method for the pre-stack seismic inversion problem of obtaining a high-resolution reservoir model from recorded seismic data. Artificial neural networks (ANNs) have recently demonstrated great advantages for seismic inversion because of their powerful feature extraction and parameter learning ability; hence, ANN methods can provide high-resolution inversion results that are critical for reservoir characterization. However, the ANN approach requires many labeled samples for training in order to obtain a satisfactory result. To address the common problem of scarce samples in ANN seismic inversion, we create a novel pre-stack seismic inversion method that takes advantage of FSL. The results of a conventional inversion are used as the auxiliary dataset for the FSL-based ANN, while the well logs are regarded as the scarce training dataset. Given the characteristics of seismic inversion (large data volumes and high dimensionality), we construct an arch network (A-Net) architecture to implement this method. An example shows that the method can improve the accuracy and resolution of the inversion results.
{"title":"High resolution pre-stack seismic inversion using few-shot learning","authors":"Ting Chen, Yaojun Wang, Hanpeng Cai, Gang Yu, Guangmin Hu","doi":"10.1016/j.aiig.2022.12.004","DOIUrl":"10.1016/j.aiig.2022.12.004","url":null,"abstract":"<div><p>We propose to use a Few-Shot Learning (FSL) method for the pre-stack seismic inversion problem in obtaining a high resolution reservoir model from recorded seismic data. Recently, artificial neural network (ANN) demonstrates great advantages for seismic inversion because of its powerful feature extraction and parameter learning ability. Hence, ANN method could provide a high resolution inversion result that are critical for reservoir characterization. However, the ANN approach requires plenty of labeled samples for training in order to obtain a satisfactory result. For the common problem of scarce samples in the ANN seismic inversion, we create a novel pre-stack seismic inversion method that takes advantage of the FSL. The results of conventional inversion are used as the auxiliary dataset for ANN based on FSL, while the well log is regarded the scarce training dataset. According to the characteristics of seismic inversion (large amount and high dimensional), we construct an arch network (A-Net) architecture to implement this method. An example shows that this method can improve the accuracy and resolution of inversion results.</p></div>","PeriodicalId":100124,"journal":{"name":"Artificial Intelligence in Geosciences","volume":"3 ","pages":"Pages 203-208"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666544122000375/pdfft?md5=520f968b5df6289799123c0b528338d6&pid=1-s2.0-S2666544122000375-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72730307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-01 | DOI: 10.1016/j.aiig.2022.12.002
Francky Fouedjio, Hassan Talebi
Geoscientists are increasingly tasked with spatially predicting a target variable in the presence of auxiliary information using supervised machine learning algorithms. Typically, the target variable is observed at a few sampling locations due to the relatively time-consuming and costly process of obtaining measurements. In contrast, auxiliary variables are often exhaustively observed within the region under study through the increasing development of remote sensing platforms and sensor networks. Supervised machine learning methods do not fully leverage this large amount of auxiliary spatial data. Indeed, in these methods, the training dataset includes only labeled data locations (where both target and auxiliary variables were measured). At the same time, unlabeled data locations (where auxiliary variables were measured but not the target variable) are not considered during the model training phase. Consequently, only a limited amount of auxiliary spatial data is utilized during the model training stage. As an alternative to supervised learning, semi-supervised learning, which learns from labeled as well as unlabeled data, can be used to address this problem. However, conventional semi-supervised learning techniques do not account for the specificities of spatial data. This paper introduces a spatial semi-supervised learning framework where geostatistics and machine learning are combined to harness a large amount of unlabeled spatial data in combination with a typically smaller set of labeled spatial data. The main idea consists of leveraging the target variable's spatial autocorrelation to generate pseudo labels at unlabeled data points that are geographically close to labeled data points. This is achieved through geostatistical conditional simulation, where an ensemble of pseudo labels is generated to account for the uncertainty in the pseudo labeling process. The observed labels are augmented by this ensemble of pseudo labels to create an ensemble of pseudo training datasets. A supervised machine learning model is then trained on each pseudo training dataset, followed by an aggregation of trained models. The proposed geostatistical semi-supervised learning method is applied to synthetic and real-world spatial datasets, and its predictive performance is compared with some classical supervised and semi-supervised machine learning methods. The method appears to effectively leverage a large amount of unlabeled spatial data to improve the spatial prediction of the target variable.
{"title":"Geostatistical semi-supervised learning for spatial prediction","authors":"Francky Fouedjio , Hassan Talebi","doi":"10.1016/j.aiig.2022.12.002","DOIUrl":"10.1016/j.aiig.2022.12.002","url":null,"abstract":"<div><p>Geoscientists are increasingly tasked with spatially predicting a target variable in the presence of auxiliary information using supervised machine learning algorithms. Typically, the target variable is observed at a few sampling locations due to the relatively time-consuming and costly process of obtaining measurements. In contrast, auxiliary variables are often exhaustively observed within the region under study through the increasing development of remote sensing platforms and sensor networks. Supervised machine learning methods do not fully leverage this large amount of auxiliary spatial data. Indeed, in these methods, the training dataset includes only labeled data locations (where both target and auxiliary variables were measured). At the same time, unlabeled data locations (where auxiliary variables were measured but not the target variable) are not considered during the model training phase. Consequently, only a limited amount of auxiliary spatial data is utilized during the model training stage. As an alternative to supervised learning, semi-supervised learning, which learns from labeled as well as unlabeled data, can be used to address this problem. However, conventional semi-supervised learning techniques do not account for the specificities of spatial data. This paper introduces a spatial semi-supervised learning framework where geostatistics and machine learning are combined to harness a large amount of unlabeled spatial data in combination with typically a smaller set of labeled spatial data. The main idea consists of leveraging the target variable’s spatial autocorrelation to generate pseudo labels at unlabeled data points that are geographically close to labeled data points. This is achieved through geostatistical conditional simulation, where an ensemble of pseudo labels is generated to account for the uncertainty in the pseudo labeling process. The observed labels are augmented by this ensemble of pseudo labels to create an ensemble of pseudo training datasets. A supervised machine learning model is then trained on each pseudo training dataset, followed by an aggregation of trained models. The proposed geostatistical semi-supervised learning method is applied to synthetic and real-world spatial datasets. Its predictive performance is compared with some classical supervised and semi-supervised machine learning methods. It appears that it can effectively leverage a large amount of unlabeled spatial data to improve the target variable’s spatial prediction.</p></div>","PeriodicalId":100124,"journal":{"name":"Artificial Intelligence in Geosciences","volume":"3 ","pages":"Pages 162-178"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666544122000351/pdfft?md5=94a8bd0caaee0a5284420ed1a1305ce9&pid=1-s2.0-S2666544122000351-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77537181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}