Ao Wang, Fayçal Rejiba, Ludovic Bodet, Cécile Finco, Cyrille Fauchard
The dynamic cone penetrometer (DCP) provides local soil resistance information. The difference between the vertical and horizontal data resolutions (centimetric vs. multi‐metric) makes it difficult to spatialize DCP data directly. This study uses a high‐resolution section, extracted with the seismic surface‐wave method, as an auxiliary and physical constraint for mapping the DCP index (DCPI). A geostatistical formalism (kriging and cokriging) is used. The measurement error associated with the seismic surface‐wave data is also included in the cokriging system, that is, cokriging with variance of measurement error (CKVME). The proposed methods are validated for the first time on a test site designed and constructed for this study, with known geotechnical perspectives. Seismic and high‐intensity DCP campaigns were performed on the test site. The results show that when the number of DCP soundings is decimated, the kriging approach is no longer capable of estimating the lateral variation in the test site, and the root‐mean‐square error (RMSE) of the kriging section is increased by . With the help of sections constraining the lateral variability model, the RMSE values of the cokriging and CKVME sections are increased by and .
High‐resolution surface‐wave‐constrained mapping of sparse dynamic cone penetrometer tests. Near Surface Geophysics, published 2024‐09‐18. DOI: 10.1002/nsg.12321.
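The kriging step described above can be illustrated with a minimal ordinary-kriging sketch. This is a toy example, not the authors' implementation: the exponential semivariogram, its range parameter and the sample values are all illustrative assumptions.

```python
import numpy as np

def ordinary_kriging(x, v, x0, a=1.0):
    """Ordinary kriging estimate at x0 from 1D data (x, v).

    Uses an exponential semivariogram gamma(h) = 1 - exp(-|h|/a);
    the range parameter `a` is an illustrative assumption, not a
    value fitted to any real DCP dataset.
    """
    n = len(x)
    gamma = lambda h: 1.0 - np.exp(-np.abs(h) / a)
    # Ordinary-kriging system: semivariogram matrix plus the
    # Lagrange row/column that forces the weights to sum to 1.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(np.subtract.outer(x, x))
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.asarray(x) - x0)
    sol = np.linalg.solve(A, b)
    weights = sol[:n]            # kriging weights (sum to 1)
    return float(weights @ v), weights

# Estimate between soundings at x = 1 and x = 2.
est, w = ordinary_kriging([0.0, 1.0, 2.0], [1.0, 2.0, 3.0], 1.5)
```

Cokriging extends the same linear system with cross-variograms of the auxiliary (surface-wave) variable; CKVME additionally adds its measurement-error variance to the corresponding diagonal terms.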
Maria Grohmann, Ernst Niederleithinger, Christoph Büttner, Stefan Buske
The ultrasonic echo technique is broadly applied in non‐destructive testing (NDT) of concrete structures for tasks such as measuring thickness, determining geometry and locating built‐in elements. To address the challenge of enhancing ultrasonic imaging for complex concrete constructions, we adapted a seismic imaging algorithm – reverse time migration (RTM) – for NDT in civil engineering. Unlike the traditionally applied synthetic aperture focusing technique (SAFT), RTM takes into account the full wavefield, including primary and reflected arrivals as well as multiples. This capability enables RTM to handle all wave phenomena effectively, unlimited by changes in velocity and reflector inclinations. This paper concentrates on applying and evaluating a two‐dimensional elastic RTM algorithm that addresses horizontally polarized shear (SH) waves only, as these are predominantly used in ultrasonic NDT of concrete structures. The elastic SH RTM algorithm was deployed for imaging real ultrasonic echo SH‐wave data obtained at a concrete specimen exhibiting a complex back wall geometry and containing four tendon ducts. As these features are frequently encountered in practical NDT scenarios, their precise imaging holds significant importance. By applying the elastic SH RTM algorithm, we successfully reproduced nearly all reflectors within the concrete specimen. In particular, we were able to accurately reconstruct all vertically oriented reflectors as well as the circular cross sections of three tendon ducts, which was not achievable with traditional SAFT imaging. These findings demonstrate that elastic SH RTM can considerably improve the imaging of complex concrete geometries, marking a crucial advancement for accurate, high‐quality ultrasonic NDT in civil engineering.
Application of iterative elastic reverse time migration to shear horizontal ultrasonic echo data obtained at a concrete step specimen. Near Surface Geophysics, published 2024‐08‐12. DOI: 10.1002/nsg.12318.
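The zero-lag cross-correlation imaging condition at the heart of RTM can be conveyed with a toy 1D example (not the paper's elastic SH code): a downgoing source wavefield and a back-propagated receiver wavefield coincide in time only at the reflector, so their zero-lag cross-correlation peaks there. Velocity, grid size and reflector depth are arbitrary assumptions.

```python
import math

c = 1.0                      # wave speed (grid units per time step)
nx, nt = 100, 400
reflector = 60               # true reflector index

def pulse(t):
    return math.exp(-0.02 * t * t)   # smooth wavelet

# Source wavefield: downgoing pulse, S(x, t) = g(t - x/c).
# Receiver wavefield after back-propagation of the reflection
# recorded at t = 2*reflector/c: R(x, t) = g(t - (2*reflector - x)/c).
image = [0.0] * nx
for x in range(nx):
    for t in range(nt):
        s = pulse(t - x / c)
        r = pulse(t - (2 * reflector - x) / c)
        image[x] += s * r    # zero-lag cross-correlation

best = max(range(nx), key=lambda i: image[i])  # image peaks at the reflector
```

The two wavefields overlap in time only where x/c equals (2*reflector - x)/c, i.e. at x = reflector, which is exactly the imaging principle RTM applies to full 2D/3D wavefields.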
Amir Mardan, Martin Blouin, Gabriel Fabien‐Ouellet, Bernard Giroux, Christophe Vergniault, Jeremy Gendreau
First‐break picking is an essential step in seismic data processing. For reliable results, first arrivals should be picked by an expert. This is a time‐consuming procedure that is subjective to a certain degree, leading to different results for different operators. In this study, we have used a U‐Net architecture with residual blocks to perform automatic first‐break picking based on deep learning. Focusing on the effects of weight initialization on first‐break picking, we conduct this research using the weights of a pre‐trained network that is used for object detection on the ImageNet dataset. The efficiency of the proposed method is tested on two real datasets. For both datasets, we manually pick the first breaks for less than 10 of the seismic shots. The pre‐trained network is fine‐tuned on the picked shots, and the rest of the shots are automatically picked by the neural network. It is shown that this strategy makes it possible to reduce the size of the training set, requiring fine‐tuning with only a few picked shots per survey. Using random weights and more training epochs can lead to a lower training loss, but such a strategy leads to overfitting, as the test error is higher than that of the pre‐trained network. We also assess the possibility of using a general dataset by training a network with data from three different projects that were acquired with different equipment and at different locations. This study shows that if the general dataset is created carefully, it can lead to more accurate first‐break picking; otherwise, the general dataset can decrease the accuracy. Focusing on near‐surface geophysics, we perform traveltime tomography and compare the inverted velocity models based on different first‐break picking methodologies. The results of the inversion show that the first breaks obtained by the pre‐trained network lead to a velocity model that is closer to the one obtained from the inversion of expert‐picked first breaks.
A fine‐tuning workflow for automatic first‐break picking with deep learning. Near Surface Geophysics, published 2024‐08‐02. DOI: 10.1002/nsg.12316.
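The warm-start idea behind the fine-tuning workflow can be sketched with a deliberately tiny model (a single weight rather than a U-Net): a weight pre-trained on a related task starts close to the solution, so a few labelled examples suffice to fine-tune it. All numbers here are illustrative assumptions.

```python
# Toy "pretrained" 1-parameter model y = w * x. The pretrained
# weight (from a related task) starts near, but not at, the value
# that fits the new data; a handful of "manually picked" examples
# is enough to fine-tune it, mirroring the few-shots-per-survey
# strategy described in the abstract.
w_pretrained = 1.6
true_w = 2.0

data = [(x, true_w * x) for x in [0.5, 1.0, 1.5, 2.0, 2.5]]

def fine_tune(w, data, lr=0.02, epochs=200):
    for _ in range(epochs):
        for x, y in data:
            grad = 2.0 * (w * x - y) * x   # d/dw of squared error
            w -= lr * grad
    return w

w_tuned = fine_tune(w_pretrained, data)
```

Starting from a random weight far from `true_w` would need more data or more epochs to reach the same error, which is the overfitting trade-off the abstract reports.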
A. Tabbagh, B. Souffaché, D. Jougnot, A. Maineult, F. Rejiba, P. M. Adler, C. Schamper, J. Thiesson, C. Finco, A. Mendieta, F. Rembert, R. Guérin, C. Camerlynck
The recent developments of electromagnetic induction and electrostatic prospection devices dedicated to critical zone surveys in both rural and urban contexts necessitate improving the interpretation of electrical properties through complementary laboratory studies. In a first interpretation step, the various experimental results obtained in the 100 Hz–10 MHz frequency range can be empirically fitted by a simple six‐term formula. It allows the reproduction of the logarithmic decrease of the real component of the effective relative permittivity and its corresponding imaginary component, the part associated with the direct‐current conductivity, one Cole–Cole relaxation and the real and imaginary components of the high‐frequency relative permittivity. To elucidate the physical phenomena contributing to both the logarithmic decrease and the observed Cole–Cole relaxation, we first consider the Maxwell–Wagner–Sillars polarization. Using the method of moments, we establish that this continuous‐medium approach can reproduce a large range of relaxation characteristics. At the microscopic scale, the possible role of the rotation of the water molecules bound to solid grains is then investigated. In this case, contrary to the Maxwell–Wagner–Sillars approach, the relaxation parameters do not depend on the external medium properties.
Experimental and numerical analysis of dielectric polarization effects in near‐surface earth materials in the 100 Hz–10 MHz frequency range: First interpretation paths. Near Surface Geophysics, published 2024‐05‐27. DOI: 10.1002/nsg.12302.
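The abstract does not give the six-term formula itself; as a hedged sketch, two of the ingredients it names, a standard Cole-Cole relaxation plus a direct-current conductivity term, can be written as follows. All parameter values (`eps_inf`, `delta_eps`, `tau`, `alpha`, `sigma_dc`) are illustrative assumptions, not the paper's fit.

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def effective_permittivity(f, eps_inf=10.0, delta_eps=40.0,
                           tau=1e-6, alpha=0.3, sigma_dc=1e-3):
    """Complex effective relative permittivity vs. frequency f (Hz).

    Standard Cole-Cole relaxation plus a direct-current
    conductivity term; parameter values are illustrative only.
    """
    omega = 2.0 * math.pi * f
    cole_cole = delta_eps / (1.0 + (1j * omega * tau) ** (1.0 - alpha))
    dc_term = sigma_dc / (1j * omega * EPS0)
    return eps_inf + cole_cole + dc_term

lo = effective_permittivity(1e3)   # low end of the band
hi = effective_permittivity(1e7)   # high end of the band
```

The real part decreases from the low-frequency plateau towards `eps_inf` as frequency increases, while the DC term dominates the imaginary (loss) component at low frequency, qualitatively matching the behaviour described in the abstract.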
Quantification of non‐uniqueness and uncertainty is important for the transient electromagnetic method (TEM). To address this issue, we develop a trans‐dimensional Bayesian inversion scheme for TEM data interpretation. The trans‐dimensional posterior probability density (PPD) offers a solution to model selection and quantifies the parameter uncertainty resulting from selecting among all possible models rather than determining a single model. We use the reversible‐jump Markov chain Monte Carlo sampler to draw ensembles of models to approximate the PPD. In addition to providing reasonable model selection, we address the reliability of the inversion results for uncertainty analysis. This strategy offers reasonable guidance when interpreting the inversion results. We make the following improvements in this paper. First, in terms of algorithmic acceleration, we use the nonlinear optimization inversion results as the initial model and implement a multi‐chain parallel method. Second, we develop double factors to control the sampling step size of the proposal distribution, so that the sampled models cover the high‐probability region of the parameter space as much as possible. Finally, we provide the potential scale reduction factor η convergence criterion to assess the convergence of the samples and ensure the rationality of the output models. The proposed methodology is first tested on synthetic data and subsequently applied to a field dataset. The TEM inversion results show that probabilistic inversion can provide reliable references for data interpretation through uncertainty analysis.
Nuoya Zhang, Huaifeng Sun, Dong Liu, Shangbin Liu. Bayesian inversion and uncertainty analysis. Near Surface Geophysics, published 2024‐04‐22. DOI: 10.1002/nsg.12299.
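The sampling idea can be conveyed with a minimal fixed-dimension Metropolis-Hastings sketch; the paper's reversible-jump sampler additionally proposes dimension changes, which are not reproduced here. The Gaussian "posterior" is a stand-in assumption for a TEM misfit plus prior.

```python
import math, random

random.seed(42)

def log_posterior(m):
    # Toy unnormalized log-posterior: Gaussian centred at 3.0
    # with unit variance (stand-in for a real TEM posterior).
    return -0.5 * (m - 3.0) ** 2

def metropolis(n_samples, step=1.0, m0=0.0):
    samples, m = [], m0
    lp = log_posterior(m)
    for _ in range(n_samples):
        proposal = m + random.gauss(0.0, step)
        lp_new = log_posterior(proposal)
        # Accept with probability min(1, posterior ratio).
        if math.log(random.random()) < lp_new - lp:
            m, lp = proposal, lp_new
        samples.append(m)
    return samples

chain = metropolis(20000)
burned = chain[2000:]                 # discard burn-in
mean = sum(burned) / len(burned)      # posterior mean estimate
```

The ensemble of retained samples approximates the PPD, so parameter uncertainty can be read off as the spread of the chain rather than from a single best-fit model.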
Han Che, Hongyan Shen, Qingchun Li, Guoxin Liu, Chenrui Yang, Yunpeng Sun, Shuai Liu
Dispersion curve inversion is one of the core steps of Rayleigh wave data processing. However, dispersion curve inversion is a multi‐parameter, multi‐extremum and non‐linear problem. For Rayleigh wave data processing under complex seismic‐geological conditions, it is difficult to reconstruct an underground structure quickly and accurately by applying a single global‐searching non‐linear inversion algorithm. For this reason, we proposed a strategy to invert multi‐order mode Rayleigh wave dispersion curves by combining grey wolf optimization (GWO) and cuckoo search (CS) algorithms. By introducing the mechanism of the iterative chaotic map with infinite collapses (ICMIC) and the strategy of dimension learning–based hunting (DLH), an improved GWO, called IDGWO (ICMIC and DLH GWO), was developed. After searching the near‐optimal region with IDGWO, the algorithm switches adaptively to CS with a variable step‐size Lévy flight search mechanism to complete the final inversion. The correctness of our method was verified by the multi‐order mode dispersion curve inversion of a six‐layer high‐velocity interlayer model. It was then further applied to the processing of real seismic datasets. The results show that, by integrating IDGWO and CS, our method fully utilizes the advantages of each of the two global‐searching non‐linear algorithms, effectively balances global search and local exploitation, improves the convergence speed and inversion accuracy, and has good anti‐noise performance.
Multi‐mode non‐linear inversion of Rayleigh wave dispersion curves with grey wolf optimization and cuckoo search algorithm. Near Surface Geophysics, published 2024‐04‐03. DOI: 10.1002/nsg.12296.
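A minimal sketch of plain grey wolf optimization on a sphere test function may help fix ideas; the ICMIC initialization, the DLH strategy and the cuckoo-search stage of the paper's IDGWO are deliberately not reproduced here.

```python
import random

random.seed(1)

def sphere(x):
    return sum(v * v for v in x)

def gwo(obj, dim=2, n_wolves=20, iters=100, lo=-5.0, hi=5.0):
    """Plain grey wolf optimizer: positions move towards a mix of
    the three best wolves (alpha, beta, delta), with the control
    parameter `a` shrinking from 2 to 0 to shift from exploration
    to exploitation."""
    wolves = [[random.uniform(lo, hi) for _ in range(dim)]
              for _ in range(n_wolves)]
    for it in range(iters):
        wolves.sort(key=obj)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 - 2.0 * it / iters          # decreases 2 -> 0
        for i in range(n_wolves):
            new = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = random.random(), random.random()
                    A = 2.0 * a * r1 - a
                    C = 2.0 * r2
                    D = abs(C * leader[d] - wolves[i][d])
                    x += leader[d] - A * D
                new.append(min(hi, max(lo, x / 3.0)))
            wolves[i] = new
    return min(wolves, key=obj)

best = gwo(sphere)
```

In the hybrid strategy of the paper, a run like this would only locate the near-optimal region, after which cuckoo search with Lévy flights refines the solution.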
Multi‐channel analysis of surface waves is a seismic method employed to obtain useful information about shear‐wave velocities in the near surface. A fundamental step in this methodology is the extraction of dispersion curves from dispersion spectra, with the latter usually obtained by applying specific processing algorithms to the recorded shot gathers. Although the extraction process can be automated to some extent, it usually requires extensive quality control, which can be arduous for large datasets. We present a novel approach that leverages deep learning to identify a direct mapping between seismic shot gathers and their associated dispersion curves (both fundamental and first higher order modes), thereby bypassing the need to compute dispersion spectra. Given a site of interest, a set of 1D compressional‐wave velocity, shear‐wave velocity and density models is created using prior knowledge of the local geology; pairs of seismic shot gathers and Rayleigh‐wave phase dispersion curves are then numerically modelled and used to train a simplified residual network. The proposed approach is shown to achieve high‐quality predictions of dispersion curves on a synthetic test dataset and is, ultimately, successfully deployed on a field dataset. Various uncertainty quantification and convolutional neural network visualization techniques are also presented to assess the quality of the inference process and better understand the underlying learning process of the network. The predicted dispersion curves are inverted for both the synthetic and field data; in the latter case, the resulting shear‐wave velocity model is plausible and consistent with prior geological knowledge of the area. Finally, a comparison of the manually picked fundamental modes with the predictions from our model provides a benchmark of the performance of the proposed workflow.
Danilo Chamorro, Jiahua Zhao, Claire Birnie, Myrna Staring, Moritz Fliedner, Matteo Ravasi. Deep learning‐based extraction of surface wave dispersion curves from seismic shot gathers. Near Surface Geophysics, published 2024‐04‐03. DOI: 10.1002/nsg.12298.
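The classical extraction step that the deep-learning mapping bypasses, picking at each frequency the phase velocity that maximizes the dispersion spectrum, can be sketched on a synthetic spectrum with a known ridge. The ridge shape and the frequency/velocity grids are arbitrary assumptions for the toy.

```python
import math

# Synthetic dispersion spectrum over (frequency, phase velocity):
# energy concentrated along a known dispersion ridge.
freqs = [5.0 + i for i in range(40)]          # 5-44 Hz
vels = [100.0 + 5.0 * j for j in range(101)]  # 100-600 m/s

def true_curve(f):
    # Assumed ridge: phase velocity decreasing with frequency,
    # mimicking normally dispersive near-surface behaviour.
    return 200.0 + 600.0 / f

def spectrum(f, v):
    return math.exp(-((v - true_curve(f)) / 30.0) ** 2)

# Classical extraction: per-frequency maximum of the spectrum
# (the step the deep-learning approach bypasses).
picked = [max(vels, key=lambda v: spectrum(f, v)) for f in freqs]
```

On real spectra this per-frequency argmax is fragile (mode jumps, noise), which is why extensive quality control, or a learned direct mapping, is needed.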
Nikolaus Rein, Marius P. Isken, Dorina Domigall, Matthias Ohrnberger, Katrin Hannemann, Frank Krüger, Michael Korn, Torsten Dahm
Within the framework of the Intercontinental Scientific Drilling Programme (ICDP) ‘Drilling the Eger Rift’ project, five boreholes were drilled in the Vogtland (Germany) and West Bohemia (Czech Republic) regions. Three of them will be used to install high‐frequency three‐dimensional (3D) seismic arrays. The pilot 3D array is located 1.5 km south of Landwüst (Vogtland). The borehole, with a depth of 402 m, was equipped with eight geophones and a fibre optic cable behind the casing used for distributed acoustic sensing (DAS) measurements. The borehole is surrounded by a surface array consisting of 12 seismic stations with an aperture of 400 m. During drilling, a highly fractured zone was detected between 90 m and 165 m depth and interpreted as a possible fault zone. To characterize the fault zone, two vertical seismic profiling (VSP) experiments with drop weight sources at the surface were conducted. The aim of the VSP experiments was to estimate a local 3D seismic velocity tomography, including the imaging of the steep fault zone. Our 3D tomography indicates P‐wave velocities between 1500 m/s and 3000 m/s at shallow depths (0–20 m) and higher P‐wave velocities of up to 5000 m/s at greater depths. In addition, the results suggest a NW–SE striking low‐velocity zone (LVZ; characterized by P‐wave velocities of 1500–3000 m/s), which crosses the borehole at a depth of about 90–165 m. This LVZ is inferred to be a shallow non‐tectonic, steep fault zone with a dip angle of about . The depth and width of the fault zone are supported by logging data such as electrical conductivity, core recovery and changes in lithology. In this study, we present an example to test and verify 3D tomography and imaging approaches of shallow non‐tectonic fault zones based on active seismic experiments using simple surface drop weights as sources and borehole chains as well as borehole DAS behind casing as sensors, complemented by seismic stand‐alone surface arrays.
{"title":"Characterizing shallow fault zones by integrating profile, borehole and array measurements of seismic data and distributed acoustic sensing","authors":"Nikolaus Rein, Marius P. Isken, Dorina Domigall, Matthias Ohrnberger, Katrin Hannemann, Frank Krüger, Michael Korn, Torsten Dahm","doi":"10.1002/nsg.12293","DOIUrl":"https://doi.org/10.1002/nsg.12293","url":null,"abstract":"Within the framework of the Intercontinental Scientific Drilling Programme (ICDP) ‘Drilling the Eger Rift’ project, five boreholes were drilled in the Vogtland (Germany) and West Bohemia (Czech Republic) regions. Three of them will be used to install high‐frequency three‐dimensional (3D) seismic arrays. The pilot 3D array is located 1.5 km south of Landwüst (Vogtland). The borehole, with a depth of 402 m, was equipped with eight geophones and a fibre optic cable behind the casing used for distributed acoustic sensing (DAS) measurements. The borehole is surrounded by a surface array consisting of 12 seismic stations with an aperture of 400 m. During drilling, a highly fractured zone was detected between 90 m and 165 m depth and interpreted as a possible fault zone. To characterize the fault zone, two vertical seismic profiling (VSP) experiments with drop weight sources at the surface were conducted. The aim of the VSP experiments was to estimate a local 3D seismic velocity tomography including the imaging of the steep fault zone. Our 3D tomography indicates P‐wave velocities between 1500 m/s and 3000 m/s at shallow depths (0–20 m) and higher P‐wave velocities of up to 5000 m/s at greater depths. In addition, the results suggest a NW–SE striking low‐velocity zone (LVZ; characterized by = 1500–3000 m/s), which crosses the borehole at a depth of about 90–165 m. This LVZ is inferred to be a shallow non‐tectonic, steep fault zone with a dip angle of about . The depth and width of the fault zone are supported by logging data as electrical conductivity, core recovery and changes in lithology. 
In this study, we present an example to test and verify 3D tomography and imaging approaches of shallow non‐tectonic fault zones based on active seismic experiments using simple surface drop weights as sources and borehole chains as well as borehole DAS behind casing as sensors, complemented by seismic stand‐alone surface arrays.","PeriodicalId":49771,"journal":{"name":"Near Surface Geophysics","volume":"65 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140563435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
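The velocity estimates above rest on first‐arrival travel times picked from the VSP records. As a hedged illustration of the underlying relation (not the authors' tomography code), the sketch below fits a line to hypothetical first‐arrival picks to recover an apparent P‐wave velocity; the offsets and times are invented for illustration only.

```python
import numpy as np

# Hypothetical first-arrival picks (offsets in m, times in s) from a
# single drop-weight shot; the values are invented for illustration,
# chosen so the apparent velocity falls in the 1500-3000 m/s range
# reported for shallow depths.
offsets = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
times = np.array([0.010, 0.020, 0.030, 0.040, 0.050])

# Least-squares fit of t = s * x + t0: the slope s is the slowness,
# whose inverse is the apparent P-wave velocity along the ray path.
slowness, t0 = np.polyfit(offsets, times, 1)
v_apparent = 1.0 / slowness
print(f"apparent P-wave velocity: {v_apparent:.0f} m/s")  # prints 2000 m/s
```

A full 3D tomography replaces this single slowness with a gridded slowness model and computed ray paths, but the data relation, travel time as a path integral of slowness, is the same.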
Various methods have been suggested in the literature for modelling the geometry of sedimentary basins from gravity anomalies. When dealing with datasets that are non-uniformly distributed across a study area, the choice of modelling method can significantly affect data reliability and computational cost. In this study, we present a gravity modelling approach based on vertical prismatic polyhedra. First, we motivate the need for such a method by highlighting the limitations of a commonly employed approach that uses rectangular, grid-following vertical prisms. In contrast, our method adapts a polygonal mesh to the distribution of the input gravity data points, so that each mesh cell contains exactly one data point, and uses polygonal, grid-following vertical prisms for the gravity modelling. To validate the method, we conduct tests using two synthetically constructed subsurface models – one featuring a normal fault and the other a deep basin. These are used to generate synthetic gravity observations at irregularly spaced points that broadly follow the geology. The data are then inverted to recover the subsurface structure by modelling with (a) rectangular prisms on a regular grid and (b) our polygonal prisms on the tessellated grid. The inversion calculates the heights of the prisms in both approaches, assuming a constant density contrast. The comparative analysis demonstrates the superior effectiveness of approach (b). Finally, we apply the newly developed method to real gravity data recently collected from Gezin province, situated in the north-eastern region of the Lake Hazar pull-apart basin in Eastern Turkey. Our modelling results reveal a previously underestimated basin geometry, suggesting the presence of an additional, previously unidentified fault to the east of Gezin, which forms the southern boundary of the basin.
{"title":"Gravity modelling by using vertical prismatic polyhedra and application to a sedimentary basin in Eastern Anatolia","authors":"Nedim Gökhan Aydın, Turgay İşseven","doi":"10.1002/nsg.12297","DOIUrl":"https://doi.org/10.1002/nsg.12297","url":null,"abstract":"There are various methods suggested for modelling the geometry of sedimentary basins by using gravity anomalies in the literature. When dealing with datasets that are non-uniformly distributed across a study area, the choice of modelling method can significantly impact data reliability and computational resource usage. In this study, we present a gravity modelling approach utilizing prismatic vertical polyhedra. First, we summarize the requirement of such a method by highlighting limitations associated with a commonly employed modelling method that uses rectangular grid-following vertical prisms for modelling. By contrast, we propose a method that adapts a polygonal mesh to the distribution of input gravity data points, each polygonal mesh cell containing one data point and using polygonal grid-following vertical prisms for gravity modelling. To validate our method, we conduct tests using two synthetically constructed subsurface models – one featuring a normal fault and the other a deep basin. These are used to generate synthetic gravity observation data at irregularly spaced points that broadly follow the geology. The data are then inverted for obtaining subsurface structures by modelling with (a) rectangular prisms on a regular grid and (b) with our polygonal prisms on the tessellated grid. The inversion process involves calculating the heights of the prisms in both approaches, assuming a constant density contrast. The comparative analysis demonstrates the superior effectiveness of our approach (b). Finally, we apply our newly developed method to real gravity data recently collected from Gezin province, situated in the north-eastern region of the Lake Hazar pull-apart basin in Eastern Turkey. 
Our modelling results reveal previously underestimated basin geometry, suggesting the presence of an additional, previously unidentified fault to the east of Gezin, which forms the southern boundary of the basin.","PeriodicalId":49771,"journal":{"name":"Near Surface Geophysics","volume":"16 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140324689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
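The rectangular‐prism modelling used as baseline (a) rests on the classical closed‐form solution for the vertical attraction of a constant‐density prism (Nagy/Blakely form). The sketch below is a minimal, hedged implementation of that forward model for a single prism observed from above; it is not the authors' polygonal‐prism code, and the function name and test values are illustrative.

```python
import numpy as np

G = 6.674e-11  # gravitational constant (m^3 kg^-1 s^-2)

def prism_gz(x1, x2, y1, y2, z1, z2, rho):
    """Vertical gravitational attraction (m/s^2) at the origin of a
    rectangular prism spanning [x1,x2] x [y1,y2] x [z1,z2], with z
    positive downward and the prism entirely below the station (z1 > 0).
    Closed-form solution for a constant density contrast rho (kg/m^3)."""
    gz = 0.0
    for i, x in enumerate((x1, x2)):
        for j, y in enumerate((y1, y2)):
            for k, z in enumerate((z1, z2)):
                r = np.sqrt(x * x + y * y + z * z)
                mu = (-1.0) ** (i + j + k + 1)  # corner sign: +1 when all three limits are upper
                gz += mu * (z * np.arctan2(x * y, z * r)
                            - x * np.log(r + y)
                            - y * np.log(r + x))
    return G * rho * gz

# Sanity check: a small, deep prism should act like a point mass.
gz = prism_gz(-1.0, 1.0, -1.0, 1.0, 99.0, 101.0, 1000.0)
```

An inversion such as the one described above repeatedly evaluates this forward response for a grid of prisms, adjusting the prism heights until the summed response matches the observed anomaly.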
Ali Gebril, Mohamed A. Khalil, R. M. Joeckel, James Rose
Shallow, dominantly silt‐ and clay‐filled erosional troughs in Quaternary sediments beneath the Flathead Valley (northwestern Montana, USA) are very likely hydraulic barriers limiting the horizontal flow of groundwater. Mapping them accurately is important because of the increasing demand for groundwater. We used a legacy Bouguer gravity map measured in 1968. We computed the directional derivatives of the map and enhanced it by applying edge‐detection tools, producing generalized derivative, maximum horizontal gradient, total gradient and tilt gradient maps through two‐dimensional Fourier transform analysis. These maps were remarkably successful in locating buried troughs in the northern and northwestern parts of the study area, closely matching locations determined previously from compiled borehole data. Our results also identify hitherto unknown extensions of the troughs and indicate that some of the buried troughs may be connected.
{"title":"Analysis of legacy gravity data reveals sediment‐filled troughs buried under Flathead Valley, Montana, USA","authors":"Ali Gebril, Mohamed A. Khalil, R. M. Joeckel, James Rose","doi":"10.1002/nsg.12295","DOIUrl":"https://doi.org/10.1002/nsg.12295","url":null,"abstract":"Shallow, dominantly silt‐ and clay‐filled erosional troughs in Quaternary sediments under the Flathead Valley (northwestern Montana, USA) are very likely to be hydraulic barriers limiting the horizontal flow of groundwater. Accurately mapping them is important because of increasing demand for groundwater. We used a legacy Bouguer gravity map measured in 1968. The directional derivatives of the map are computed, and the map was enhanced by implementing edge detection tools. We produced generalized derivative, maximum horizontal gradient, total gradient and tilt gradient maps through two‐dimensional Fourier transform analysis. These maps were remarkably successful in locating buried troughs in the northern and northwestern parts of the study area, closely matching locations determined previously from compiled borehole data. Our results also identify hitherto unknown extensions of troughs and indicate that some of the buried troughs may be connected.","PeriodicalId":49771,"journal":{"name":"Near Surface Geophysics","volume":"24 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140203533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
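The tilt gradient mentioned above is commonly defined as the arctangent of the ratio between the vertical derivative and the total horizontal derivative of the field, with the derivatives computed in the wavenumber domain. The following sketch, assuming a regularly gridded map, shows one standard FFT‐based way to form it; it is an illustration, not the authors' processing code, and the test grid is synthetic.

```python
import numpy as np

def tilt_gradient(g, dx, dy):
    """Tilt gradient (radians) of a regularly gridded potential-field map g:
    arctan of the vertical derivative over the total horizontal derivative,
    with all derivatives computed in the wavenumber domain."""
    ny, nx = g.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    K = np.sqrt(KX**2 + KY**2)
    spec = np.fft.fft2(g)
    dgdz = np.real(np.fft.ifft2(K * spec))        # vertical derivative: |k| filter
    dgdx = np.real(np.fft.ifft2(1j * KX * spec))  # horizontal derivatives: i*k filters
    dgdy = np.real(np.fft.ifft2(1j * KY * spec))
    thdr = np.sqrt(dgdx**2 + dgdy**2)             # total horizontal derivative
    return np.arctan2(dgdz, thdr)                 # bounded in [-pi/2, pi/2]

# Illustrative input: a smooth point-source-like anomaly on a 64 x 64 grid.
x = np.linspace(-500.0, 500.0, 64)
X, Y = np.meshgrid(x, x)
g = 1.0 / (X**2 + Y**2 + 100.0**2) ** 1.5
tilt = tilt_gradient(g, dx=x[1] - x[0], dy=x[1] - x[0])
```

Because the horizontal derivative magnitude is non-negative, the tilt angle is confined to [-pi/2, pi/2], which is what makes it useful for normalizing shallow and deep edges in the same map.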