Minghao Xian, Zhengwei Xu, Michael S. Zhdanov, Yaming Ding, Rui Wang, Xuben Wang, Jun Li, Guangdong Zhao
In geophysical research, gravity-based inversion is essential for identifying geological anomalies, mapping rock structures, and extracting resources such as oil and minerals. Traditional gravity inversion methods, however, face challenges such as the volumetric effects of gravity fields and the management of large, complex matrices. Unsupervised learning techniques often struggle with overfitting and interpreting gravity data. This study explores the application of various U-Net-based network architectures in gravity inversion, each offering distinct challenges and advantages. Nested U-Net, although effective, requires a high parameter count, leading to extended training periods. Recurrent Residual U-Net's implicit attention mechanism restricts its dynamic adaptability, while Attention U-Net's lack of residual connections raises concerns about gradient issues. This research comprehensively analyzes the training processes, core functionalities, and module distribution of these networks, including Residual U-Net++. Our synthetic studies compare these networks with traditional focused regularized gravity inversion for reconstructing density anomalies. The results demonstrate that Nested U-Net closely approximates the actual model, despite some redundancy. Recurrent Residual U-Net shows improved alignment with minimal redundancies, and Attention U-Net is effective in density prediction but encounters difficulties in areas of low density. Notably, Residual U-Net++ excels in inversion modeling, achieving the lowest misfit percentage and accurately replicating density values. In practical applications, Residual U-Net++ impressively reconstructed the F2 salt diapir in the Nordkapp Basin with well-defined boundaries that closely match seismic data interpretations. These results underscore the capabilities of Residual U-Net++ in geophysical data analysis, structural reconstruction, and inversion, demonstrating its effectiveness in both simulated settings and real-world scenarios.
{"title":"Recovering 3D Salt Dome by Gravity Data Inversion Using ResU-Net++","authors":"Minghao Xian, Zhengwei Xu, Michael S. Zhdanov, Yaming Ding, Rui Wang, Xuben Wang, Jun Li, Guangdong Zhao","doi":"10.1190/geo2023-0551.1","DOIUrl":"https://doi.org/10.1190/geo2023-0551.1","url":null,"abstract":"In geophysical research, gravity-based inversion is essential for identifying geological anomalies, mapping rock structures, and extracting resources such as oil and minerals. Traditional gravity inversion methods, however, face challenges such as the volumetric effects of gravity fields and the management of large, complex matrices. Unsupervised learning techniques often struggle with overfitting and interpreting gravity data. This study explores the application of various U-Net-based network architectures in gravity inversion, each offering distinct challenges and advantages. Nested U-Net, although effective, requires a high parameter count, leading to extended training periods. Recurrent Residual U-Net's implicit attention mechanism restricts its dynamic adaptability, while Attention U-Net's lack of residual connections raises concerns about gradient issues. This research comprehensively analyzes the training processes, core functionalities, and module distribution of these networks, including Residual U-Net++. Our synthetic studies compare these networks with traditional focused regularized gravity inversion for reconstructing density anomalies. The results demonstrate that Nested U-Net closely approximates the actual model, despite some redundancy. Recurrent Residual U-Net shows improved alignment with minimal redundancies, and Attention U-Net is effective in density prediction but encounters difficulties in areas of low density. Notably, Residual U-Net++ excels in inversion modeling, achieving the lowest misfit percentage and accurately replicating density values. In practical applications, Residual U-Net++ impressively reconstructed the F2 salt diapir in the Nordkapp Basin with well-defined boundaries that closely match seismic data interpretations. These results underscore the capabilities of Residual U-Net++ in geophysical data analysis, structural reconstruction, and inversion, demonstrating its effectiveness in both simulated settings and real-world scenarios.","PeriodicalId":509604,"journal":{"name":"GEOPHYSICS","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141104478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Seismic data denoising is a critical component of seismic data processing, yet effectively removing erratic noise, characterized by its non-Gaussian distribution and high amplitude, remains a substantial challenge for conventional methods and deep learning (DL) algorithms. Supervised learning frameworks typically outperform others, but they require pairs of noisy datasets alongside corresponding clean ground truth, which is impractical for real-world seismic datasets. On the other hand, unsupervised learning methods, which do not rely on ground truth during training, often fall short in performance when compared to their supervised or traditional denoising counterparts. Moreover, current unsupervised deep learning methods fail to adequately address the specific challenges posed by erratic seismic noise. This paper introduces a novel zero-shot unsupervised DL framework designed specifically to mitigate random and erratic noise, with a particular emphasis on blending noise. Drawing inspiration from Noise2Noise and data augmentation principles, we present a robust self-supervised denoising network named “Robust Noiser2Noiser.” Our approach eliminates the need for paired noisy and clean datasets as required by supervised methods, or paired noisy datasets as in Noise2Noise (N2N). Instead, our framework relies solely on the original noisy seismic dataset. Our methodology generates two independent re-corrupted datasets from the original noisy dataset, using one as the input and the other as the training target. Subsequently, we employ a deep-learning-based denoiser, DnCNN, for training purposes. To address various types of random and erratic noise, the original noisy dataset is re-corrupted with the same noise type. Detailed explanations for generating training input and target data for blended data are provided in the paper. We apply our proposed network to both synthetic and real marine data examples, demonstrating significantly improved noise attenuation performance compared to traditional denoising methods and state-of-the-art unsupervised learning methods.
{"title":"Robust Seismic data denoising via self-supervised deep learning","authors":"Ji Li, Daniel Trad, Dawei Liu","doi":"10.1190/geo2023-0762.1","DOIUrl":"https://doi.org/10.1190/geo2023-0762.1","url":null,"abstract":"Seismic data denoising is a critical component of seismic data processing, yet effectively removing erratic noise, characterized by its non-Gaussian distribution and high amplitude, remains a substantial challenge for conventional methods and deep learning (DL) algorithms. Supervised learning frameworks typically outperform others, but they require pairs of noisy datasets alongside corresponding clean ground truth, which is impractical for real-world seismic datasets. On the other hand, unsupervised learning methods, which do not rely on ground truth during training, often fall short in performance when compared to their supervised or traditional denoising counterparts. Moreover, current unsupervised deep learning methods fail to address the specific challenges posed by erratic seismic noise adequately. This paper introduces a novel zero-shot unsupervised DL framework designed specifically to mitigate random and erratic noise, with a particular emphasis on blending noise. Drawing inspiration from Noise2Noise and data augmentation principles, we present a robust self-supervised denoising network named “Robust Noiser2Noiser.”. Our approach eliminates the need for paired noisy and clean datasets as required by supervised methods or paired noisy datasets as in Noise2Noise (N2N). Instead, our framework relies solely on the original noisy seismic dataset. Our methodology generates two independent re-corrupted datasets from the original noisy dataset, using one as the input and the other as the training target. Subsequently, we employ a deep-learning-based denoiser, DnCNN, for training purposes. To address various types of random and erratic noise, the original noisy dataset is re-corrupted with the same noise type. Detailed explanations for generating training input and target data for blended data are provided in the paper. We apply our proposed network to both synthetic and real marine data examples, demonstrating significantly improved noise attenuation performance compared to traditional denoising methods and state-of-the-art unsupervised learning methods.","PeriodicalId":509604,"journal":{"name":"GEOPHYSICS","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141105822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zhang Bo, Kelin Qu, C. Yin, Yunhe Liu, X. Ren, Yang Su, V. Baranwal
Airborne electromagnetic (AEM) surveys usually cover a large area and generate a large amount of data. This has limited the application of three-dimensional (3D) AEM inversions. To make 3D AEM data inversion at a large scale possible, the local mesh method has been proposed to avoid solving large matrix equations in 3D AEM modeling. However, the local mesh only saves computational cost and memory during forward modeling and Jacobian calculations. When the survey area is very large, the cost of storing and solving the inversion equations can still be very high, which brings big challenges to practical 3D AEM inversions. To solve this problem, we develop a 3D scheme based on the block coordinate descent (BCD) method for inversions of large-scale AEM data. The BCD method divides the inversion for large models into a series of small, local inversions, so that we can avoid solving the large matrix equations. Numerical experiments demonstrate that the BCD method obtains results very similar to those from existing inversion methods while saving huge amounts of memory.
{"title":"3D airborne electromagnetic data inversion basing on the block coordinate descent method","authors":"Zhang Bo, Kelin Qu, C. Yin, Yunhe Liu, X. Ren, Yang Su, V. Baranwal","doi":"10.1190/geo2023-0673.1","DOIUrl":"https://doi.org/10.1190/geo2023-0673.1","url":null,"abstract":"Airborne electromagnetic (AEM) surveys usually covers a large area and create a large amount of data. This has limited the application of three-dimensional (3D) AEM inversions. To make 3D AEM data inversion at a large scale possible, the local mesh method has been proposed to avoid solving large matrix equations in 3D AEM modeling. However, the local mesh only saves the computational cost and memory during forward modeling and Jacobian calculations. When the survey area is very large, the cost for storing and solving the inversion equations can be very high. This brings big challenges to practical 3D AEM inversions. To solve this problem, we develop a 3D scheme based on the block coordinate descent (BCD) method for inversions of large-scale AEM data. The BCD method divides the inversion for large models into series of small-local inversions, so that we can avoid solving the large matrix equations. Numerical experiments demonstrate that the BCD method can get very similar results to those from the existing inversion methods but saves huge amounts of memory.","PeriodicalId":509604,"journal":{"name":"GEOPHYSICS","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141109906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mohamad Sadegh Roudsari, R. Ghanati, Charles L. Bérubé
Induced polarization tomography offers the potential to better characterize subsurface structures by considering the spectral content of data acquired over a broad frequency range. Spectral induced polarization tomography is generally defined as a nonlinear inverse problem commonly solved through deterministic gradient-based methods. To this end, the spectral parameters, i.e., DC resistivity, chargeability, relaxation time, and frequency exponent, are resolved by individually or simultaneously inverting all frequency data, followed by fitting a generalized Cole-Cole model to the inverted complex resistivities. Due to the high correlation between Cole-Cole model parameters and a lack of knowledge about the initial approximation of the spectral parameters, using classical least-squares methods may lead to inaccurate solutions and impede reliable uncertainty analysis. To cope with these limitations, we introduce a new approach based on a hybrid application of a globally convergent homotopic continuation method and Bayesian inference to reconstruct the distribution of the subsurface spectral parameters. The homotopic optimization, owing to its fast and global convergence, is first implemented to invert multi-frequency spectral induced polarization datasets aimed at retrieving the complex-valued resistivity. Then, Bayesian inversion based on a Markov-chain Monte Carlo (McMC) sampling method, along with a priori information including lower and upper bounds of the prior distributions, is utilized to invert the complex resistivity for Cole-Cole model parameters. By applying the McMC inversion algorithm, a full nonlinear uncertainty appraisal can be provided. We numerically evaluate the performance of the proposed method using synthetic and real data examples in the presence of topographical effects. Numerical results show that the homotopic continuation method outperforms the classic smooth inversion algorithm in terms of approximation accuracy and computational efficiency. We demonstrate that the proposed hybrid inversion strategy provides reliable representations of the main features and structure of the Earth's subsurface in terms of the spectral parameters.
{"title":"Spectral Induced Polarization Tomography Inversion: Hybridizing Homotopic Continuation with Bayesian Inversion","authors":"Mohamad Sadegh Roudsari, R. Ghanati, Charles L. Bérubé","doi":"10.1190/geo2023-0644.1","DOIUrl":"https://doi.org/10.1190/geo2023-0644.1","url":null,"abstract":"Induced polarization tomography offers the potential to better characterize the subsurface structures by considering spectral content from the data acquisition over a broad frequency range. Spectral induced polarization tomography is generally defined as a non-linear inverse problem commonly solved through deterministic gradient-based methods. To this end, the spectral parameters, i.e., DC resistivity, chargeability, relaxation time, and frequency exponent, are resolved by individually or simultaneously inverting all frequency data followed by fitting a generalized Cole-Cole model to the inverted complex resistivities. Due to the high correlation between Cole-Cole model parameters and a lack of knowledge about the initial approximation of the spectral parameters, using the classical least-square methods may lead to inaccurate solutions and impede reliable uncertainty analysis. To cope with these limitations, we introduce a new approach based on a hybrid application of a globally convergent homotopic continuation method and Bayesian inference to reconstruct the distribution of the subsurface spectral parameters. The homotopic optimization, owing to its fast and global convergence, is first implemented to invert multi-frequency spectral induced polarization datasets aimed at retrieving the complex-valued resistivity. Then, Bayesian inversion based on a Markov-chain Monte Carlo (McMC) sampling method along with a priori information including lower and upper bounds of the prior distributions is utilized to invert the complex resistivity for Cole-Cole model parameters. By applying the McMC inversion algorithm a full nonlinear uncertainty appraisal can be provided. We numerically evaluate the performance of the proposed method using synthetic and real data examples in the presence of topographical effects. Numerical results prove that the homotopic continuation method outperforms the classic smooth inversion algorithm in the sense of approximation accuracy and computational efficiency. we demonstrate that the proposed hybrid inversion strategy provides reliable representations of the main features and structure of the Earths subsurface in terms of the spectral parameters.","PeriodicalId":509604,"journal":{"name":"GEOPHYSICS","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141109236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The regional-residual separation of gravity anomalies in crustal and mineral exploration was a graphically based procedure before the advent of fast digital computers and the need for more efficient algorithms to process large data sets. However, because they required the supervision of an experienced interpreter, the results once obtained with graphical procedures are often accepted as second to none in producing anomalies with geological significance. Numerical methods based on spectral filtering and robust polynomial fitting have worked in many scenarios but seem not fully effective in replicating, as algorithms, the kind of results once obtained with interpreter-assisted graphical methods. We develop a procedure (CARRS, Computer-Assisted Regional-Residual Separation), implemented as a set of short MATLAB scripts, which in many respects simulates the operations of the former graphical methods but requires few decisions and little hand-work from the interpreter. CARRS applies robust polynomial fitting to points with low horizontal gradient and low vertical second-order derivative, thus selecting fitting points as a real interpreter would do with graphical approaches when outlining the regional field. A regularized procedure is used to calculate stable first- and second-order derivatives. The accompanying MATLAB codes allow replication of the results and further exploration with different threshold levels to identify flat-domain regions. CARRS is illustrated with the airborne gravity data set CPRM-1123, which is freely available for download. A data window of the residual field is used to analyze the distribution of cassiterite deposits in greisen zones of the Paleoproterozoic Velho Guilherme granites in the Amazon Craton, as their distribution appears in the observed gravity anomaly and its corresponding residual field.
{"title":"COMPUTER-ASSISTED REGIONAL-RESIDUAL GRAVITY ANOMALY SEPARATION WITH REGULARIZED FIRST AND SECOND-ORDER DERIVATIVES","authors":"Carlos Alberto Mendonça","doi":"10.1190/geo2023-0546.1","DOIUrl":"https://doi.org/10.1190/geo2023-0546.1","url":null,"abstract":"The regional-residual separation of gravity anomalies in crustal and mineral exploration was a graphical-based procedure before the advent of fast digital computers and the need for more efficient algorithms to process large data sets. However, since requiring the supervision of an experienced interpreter, the results once obtained with graphical procedures are often accepted as second to none in producing anomalies with geological significance. Numerical methods based on spectral filtering and robust polynomial fitting have worked in many scenarios but seem not fully effective in replicating in algorithms the kind of results once obtained with interpreter-assisted graphical methods. We develop a procedure (CARRS- Computed Assisted Regional Residual Separation) implemented by a set of short MATLAB scripts which in many aspects simulates the operations of former graphical methods but requires few decisions and minor hand-work from the interpreter. CARRS applies robust polynomial fitting to points with low horizontal gradient and vertical second-order derivative, thus selecting fitting points as a real interpreter would do with graphical approaches in outlining the regional field. A regularized procedure is used to calculate stable first and second-order derivatives. MATLAB codes in companion allow results replication and further exploration with different threshold levels to identify flat domain regions. CARRS is illustrated with airborne gravity data CPRM-1123 freely available to download. A data window of the residual field is used to analyze the distribution of cassiterite deposits in greisen zones of the Paleoproterozoic Velho Guilherme granites in the Amazon Craton, as their distribution appears in the observed gravity anomaly and its corresponding residual field.","PeriodicalId":509604,"journal":{"name":"GEOPHYSICS","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141116624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interpolation is a critical step in seismic data processing. Gaps in seismic traces can lead to severe spatial aliasing in the corresponding F-K spectra, and the aliasing caused by regularly spaced gaps produces F-K spectra similar to those of the actual data. Existing dealiasing interpolation algorithms generally assume that seismic events are linear and cannot handle nonstationary events. To address this shortcoming, we propose a novel dealiased seismic data interpolation approach using dynamic matching. First, we match two adjacent seismic traces using the local affine regional dynamic time-warping algorithm. Subsequently, we calculate the local slope between the two seismic traces. Finally, we perform linear interpolation on the regularly missing seismic data using the local slope information. The proposed approach was tested on both synthetic and field seismic datasets. The interpolation results show that the proposed approach has better anti-aliasing ability and computational efficiency than the traditional Spitz and seislet-based approaches. Additionally, the method can also be applied to interpolate irregularly sampled seismic data and to perform simultaneous seismic data interpolation and denoising.
{"title":"Dealiased seismic data interpolation by dynamic matching","authors":"Yingjie Xu, Siwei Yu, Lieqian Dong, Jianwei Ma","doi":"10.1190/geo2023-0249.1","DOIUrl":"https://doi.org/10.1190/geo2023-0249.1","url":null,"abstract":"Interpolation is a critical step in seismic data processing. Gaps in seismic traces can lead to severe spatial aliasing phenomena in the corresponding F-K spectra. The aliasing caused by regularly spaced gaps has similar F-K spectra as those of the actual data. Existing dealiasing interpolation algorithms generally assume that seismic events are linear, and cannot handle non-stationary events. To address this shortcoming, we proposed a novel dealiased seismic data interpolation approach using dynamic matching. First, we matched two adjacent seismic traces using the local affine regional dynamic time-warping algorithm. Subsequently, we calculated the local slope between two seismic traces. Finally, we performed linear interpolation on the regularly missing seismic data using local slope information. The proposed approach was tested on both synthetic and field seismic datasets. The interpolation results showed that the proposed approach has a better anti-aliasing ability and computational efficiency than the traditional Spitz and seislet-based approaches. Additionally, this method can also be applied to interpolate irregularly sampled seismic data and for simultaneous seismic data interpolation and denoising.","PeriodicalId":509604,"journal":{"name":"GEOPHYSICS","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141117114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Changes in reservoir parameters cause differences in seismic response characteristics, which can reflect changes in formation lithology and fluids. Herein, seismic physical modeling and seismic response characteristic analysis of P-, SV-, and SH-wave fields were conducted. A seismic physical model was developed in the laboratory, which included several groups of sandstone blocks for simulating reservoir parameters such as different fluid and clay contents. Two-dimensional wave field data of P-P, SV-SV, and SH-SH waves were acquired and processed in the laboratory. Compared with P-waves, SV- and SH-waves were insensitive to oil and air media and were almost unaffected by pore fluids, and the SH-wave stack profile was superior to the SV-wave stack profile. Compared with P-waves, shear waves are insensitive to fluids and are less affected by fluid saturation. The reflection events at the interface were slightly better for the SH-wave section than for the SV-wave section. The reflection coefficient of P-waves varied greatly on AVA gathers and was significantly influenced by factors such as fluids. The variation in the SV-wave reflection coefficient on AVA gathers was not as significant as that of P-waves, and SV- and SH-waves were more conducive to identifying whether a block contained clay. The SH-wave was more reliable for seismic imaging. Overall, this study can assist in combining different seismic wave data for better hydrocarbon identification and reservoir description.
{"title":"Analyzing the seismic response characteristics of P-, SV-, and SH- waves to reservoir parameters using physical modeling","authors":"Pinbo Ding, Feng Zhang, Xiangyang Li, Y. Chai","doi":"10.1190/geo2023-0454.1","DOIUrl":"https://doi.org/10.1190/geo2023-0454.1","url":null,"abstract":"Changes in reservoir parameters cause differences in seismic response characteristics, which can reflect changes in the formation lithology and fluids. Herein, seismic physical modeling and seismic response characteristic analysis of P-, SV-, and SH- wave fields were conducted. A seismic physical model was developed in the laboratory, which included several groups of sandstone blocks for simulating reservoir parameters, such as different fluid and clay contents. Two-dimensional wave field data of P-P, SV-SV, and SH-SH waves were acquired and processed in the laboratory. Compared with P- waves, SV- and SH- waves were insensitive to oil and air mediums and were almost unaffected by pore fluids, and the SH- stack profile was superior to the SV- stack profile. Compared with P- waves, shear waves are insensitive to fluids and are less affected by fluid saturation. The reflection events at the interface were slightly better for the SH- wave section than for the SV- wave section. The reflection coefficient of P- waves varied greatly on AVA gathers and was significantly influenced by factors, such as fluids. The variation in the SV- wave reflection coefficient on AVA gathers was not as significant as that of P- waves, and SV- and SH- wave were more conducive to identifying whether the block contained clay. The SH- wave was more reliable for seismic imaging. Overall, this study can assist in combining different seismic wave data for better hydrocarbon identification and reservoir description.","PeriodicalId":509604,"journal":{"name":"GEOPHYSICS","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141118102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurate seismic attenuation models of subsurface structures not only enhance subsequent migration processes by improving fidelity and resolution and facilitating amplitude-compliant angle gather generation, but also provide valuable constraints on subsurface physical properties. Leveraging full wavefield information, multiparameter viscoacoustic full-waveform inversion (Q-FWI) simultaneously estimates seismic velocity and attenuation (Q) models. However, a major challenge in Q-FWI is the contamination of crosstalk artifacts, where inaccuracies in the velocity model get mistakenly mapped to the inverted attenuation model. While incorporating the Hessian is expected to mitigate these artifacts, its explicit implementation is prohibitively expensive due to the formidable computational cost. In this study, we formulate and develop a Q-FWI algorithm via the Newton-CG framework, where the search direction at each iteration is determined through an internal conjugate gradient (CG) loop. In particular, the Hessian is integrated into each CG step in a matrix-free fashion using the second-order adjoint-state method. We find through synthetic experiments that the proposed Newton-CG Q-FWI significantly mitigates crosstalk artifacts compared to the L-BFGS and CG methods, albeit with a notable computational cost. In the discussion of several key implementation details, we also demonstrate the significance of the approximate Gauss-Newton Hessian, the second-order adjoint-state method, and the two-stage inversion strategy.
{"title":"Advancing attenuation estimation through integration of the Hessian in multiparameter viscoacoustic full-Waveform inversion","authors":"G. Xing, Tieyuan Zhu","doi":"10.1190/geo2023-0634.1","DOIUrl":"https://doi.org/10.1190/geo2023-0634.1","url":null,"abstract":"Accurate seismic attenuation models of subsurface structures not only enhance subsequent migration processes by improving fidelity, resolution, and facilitating amplitude-compliant angle gather generation, but also provide valuable constraints on subsurface physical properties. Leveraging full wavefield information, multiparameter viscoacoustic full-waveform inversion ( Q-FWI) simultaneously estimates seismic velocity and attenuation ( Q) models. However, a major challenge in Q-FWI is the contamination of crosstalk artifacts, where inaccuracies in the velocity model get mistakenly mapped to the inverted attenuation model. While incorporating the Hessian is expected to mitigate these artifacts, the explicit implementation is prohibitively expensive due to its formidable computational cost. In this study, we formulate and develop a Q-FWI algorithm via the Newton-CG framework, where the search direction at each iteration is determined through an internal conjugate gradient (CG) loop. In particular, the Hessian is integrated into each CG step in a matrix-free fashion using the second-order adjoint-state method. We find through synthetic experiments that the proposed Newton-CG Q-FWI significantly mitigates crosstalk artifacts compared to the L-BFGS method and the conjugate gradient (CG) method, albeit with a notable computational cost. In the discussion of several key implementation details, we also demonstrate the significance of the approximate Gauss-Newton Hessian, the second-order adjoint-state method, and the two-stage inversion strategy.","PeriodicalId":509604,"journal":{"name":"GEOPHYSICS","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141114227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, the discontinuous Galerkin method (DGM) has been rapidly developed for the numerical simulation of seismic waves. For wavefield propagation between two adjacent elements, it is common practice to apply a numerical flux to the boundary of each element. Several fluxes, including the center, penalty, local Lax-Friedrichs (LLF), upwind, and Rankine-Hugoniot jump-condition-based (RH-condition) fluxes, are widely used in numerical seismic wave simulation. However, some fluxes do not account for media differences between adjacent elements. Although different fluxes have been successfully used in DGM for many velocity models, it is unclear whether they can produce sufficiently accurate or stable results for strongly heterogeneous models, such as the checkerboard models commonly used in tomographic studies. We test different fluxes using the acoustic wave equation. We analyzed the accuracy of the penalty, LLF, upwind, and RH-condition fluxes based on numerical simulations of homogeneous and two-layer models. We then conducted simulations using checkerboard models, and the results indicated that the LLF, penalty, and upwind fluxes may have instability problems in heterogeneous models for long-time simulations. We observed these instability issues in the LLF, penalty, and upwind fluxes when the wave-impedance contrast at the media interface was high, whereas the results of the RH-condition flux remained consistently stable. The series of numerical examples presented in this work provides insights into the characteristics and application of fluxes for seismic wave modeling.
{"title":"Acoustic wave simulation in strongly heterogeneous models using a discontinuous Galerkin method","authors":"Wenzhong Cao, Wei Zhang, Weitao Wang","doi":"10.1190/geo2023-0525.1","DOIUrl":"https://doi.org/10.1190/geo2023-0525.1","url":null,"abstract":"In recent years, the discontinuous Galerkin method (DGM) has been rapidly developed for the numerical simulation of seismic waves. For wavefield propagation between two adjacent elements, it is common practice to apply a numerical flux to the boundary of each element to propagate waves between adjacent elements. Several fluxes, including the center, penalty, Local LaxFriedrich (LLF), upwind, and RankineHugoniot jump condition-based (RH-condition) fluxes are widely used in numerical seismic wave simulation. However, some fluxes do not account for media differences between adjacent elements. Although different fluxes have been successfully used in DGM for many velocity models, it is unclear whether they can produce sufficiently accurate or stable results for strongly heterogeneous models, such as checkerboard models commonly used in tomographic studies. We test different fluxes using the acoustic wave equation. We analyzed the accuracy of the penalty, LLF, upwind, and RH-condition fluxes based on the results of the numerical simulations of the homogeneous and two-layer models. We conducted simulations using checkerboard models, and the results indicated that the LLF, penalty, and upwind fluxes may have instability problems in heterogeneous models with long-time simulations. We observed instability issues in the LLF, penalty, and upwind fluxes when the wave-impedance contrast was high at the media interface. However, the results of RH-condition flux remained consistently stable. The series of numerical examples presented in this work provide insights into the characteristics and application of fluxes for seismic wave modeling.","PeriodicalId":509604,"journal":{"name":"GEOPHYSICS","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141119593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast automatic techniques for determining source parameters are commonly used to interpret magnetic data. A new method is proposed to estimate magnetic source parameters based on analytic signals of any order of magnetic anomalies at different altitudes. The new method is based on the relationship between the location, depth, and structural index of the source and the expressions of the analytic signals, and it employs the altitude z and a depth scaling factor β to establish a new multiscale imaging method called Variable Depth Mirror Imaging (VDMI), whose extreme points are related to the source parameters. Two equations are given to calculate the source depth and structural index on the basis of the vertical positions of the peaks of VDMI sections with two different β. Moreover, a series of solutions for the source parameters is obtained when a number of values of β are selected, which makes the results more reasonable. The method is stable and can be directly applied to noisy anomalies or high-order derivatives because it is based on upward-continued magnetic anomalies. In addition, the method is flexible, as we can select different β as desired. Moreover, the method can be applied to multisource cases and can simultaneously estimate the depth and structural index for each source. The method was tested on noise-free and noise-corrupted synthetic magnetic anomalies. In all cases, the VDMI method effectively estimates the depths and structural indices of the sources. The VDMI method was also applied to real aeromagnetic data from the Hamrawien area, Egypt, and to ground magnetic data over Neibei Farm in Heilongjiang Province, China.
{"title":"A new multiscale imaging method for estimating the depth and structural index of magnetic source","authors":"Yanguo Wang, Ye Tian, Juzhi Deng","doi":"10.1190/geo2022-0674.1","DOIUrl":"https://doi.org/10.1190/geo2022-0674.1","url":null,"abstract":"The fast automatic technique for determining the source parameter is very commonly used to interpret magnetic data. A new method is proposed to estimate the magnetic source parameter based on the any order analytic signals of magnetic anomalies at different altitudes. The new method is based on the relationship between the location, depth and structural index of the source and the expressions of analytic signals, and employs the altitude z and a depth scaling factor β to establish a new multiscale imaging method called Variable Depth Mirror Imaging (VDMI), whose extreme points are related to the source parameters. Two equations are given to calculate the source depth and structural index on the basis of the vertical positions of the peaks of VDMI sections with two different β. Moreover, a series of solutions of source parameters will be obtained when a number of β are selected, which can make the results more reasonable. The method is stable and can be directly applied to noisy anomalies or high-order derivatives because it is based on magnetic anomalies of upward continuation. In addition, the method is flexible as we can select different β as desired. Moreover, the method can be applied to multisource cases, and can simultaneously estimate the depth and structural index for each source. The method was tested on noise-free and noise-corrupted synthetic magnetic anomalies. In all cases, the VDMI method effectively estimates the depths and structural indices of the sources. The VDMI method was also applied to real aeromagnetic data from the Hamrawien area, Egypt, and ground magnetic data over Neibei Farm of Heilongjiang Province, China.","PeriodicalId":509604,"journal":{"name":"GEOPHYSICS","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141128155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}