Low-Rank Approximation Reconstruction of Five-Dimensional Seismic Data
Pub Date: 2024-07-27 | DOI: 10.1007/s10712-024-09848-6 | Surveys in Geophysics 45(5): 1459–1492
Gui Chen, Yang Liu, Mi Zhang, Yuhang Sun, Haoran Zhang
Low-rank approximation has emerged as a promising technique for recovering five-dimensional (5D) seismic data, yet higher accuracy and stronger rank robustness remain critical goals. We introduce a low-rank approximation method that leverages the complete graph tensor network (CGTN) decomposition and a learnable transform (LT), referred to as the LRA-LTCGTN method, to simultaneously denoise and reconstruct 5D seismic data. In the LRA-LTCGTN framework, the LT projects the frequency tensor of the original 5D data onto a small-scale latent space, and the CGTN decomposition is then executed in this latent space. We adopt the proximal alternating minimization algorithm to optimize each variable. Both 5D synthetic and field data examples indicate that the LRA-LTCGTN method exhibits notable advantages and superior efficiency compared to the damped rank-reduction (DRR), parallel matrix factorization (PMF), and LRA-CGTN methods. Moreover, a sensitivity analysis shows that the LRA-LTCGTN method is markedly more robust to the choice of rank than the LRA-CGTN method, without requiring any rank-optimization procedure.
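The paper's LRA-LTCGTN pipeline is not reproduced here, but the optimizer it names, proximal alternating minimization (PAM), can be illustrated on a much simpler low-rank recovery problem. The sketch below applies PAM to matrix completion D ≈ AB under an observation mask; the learnable transform, the CGTN decomposition and the 5D tensor structure are all omitted, and every name and parameter is an assumption chosen for illustration.

```python
import numpy as np

# Minimal PAM sketch (not the authors' LRA-LTCGTN): low-rank matrix completion
# D ~ A @ B with observation mask M, alternating proximal least-squares updates.

def pam_lowrank_complete(D, M, rank=5, rho=1e-2, iters=50):
    m, n = D.shape
    rng = np.random.default_rng(0)
    A = rng.standard_normal((m, rank))
    B = rng.standard_normal((rank, n))
    for _ in range(iters):
        # A-update: row-wise least squares with a proximal term rho*||a_i - a_i_old||^2
        for i in range(m):
            obs = M[i] > 0
            Bi = B[:, obs]
            A[i] = np.linalg.solve(Bi @ Bi.T + rho * np.eye(rank),
                                   Bi @ D[i, obs] + rho * A[i])
        # B-update: column-wise, symmetric to the A-update
        for j in range(n):
            obs = M[:, j] > 0
            Aj = A[obs]
            B[:, j] = np.linalg.solve(Aj.T @ Aj + rho * np.eye(rank),
                                      Aj.T @ D[obs, j] + rho * B[:, j])
    return A @ B

# toy usage: recover a rank-3 matrix from 40% of its entries
rng = np.random.default_rng(1)
X = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 80))
M = (rng.random(X.shape) < 0.4).astype(float)
X_rec = pam_lowrank_complete(M * X, M, rank=3)
print(f"relative error: {np.linalg.norm(X_rec - X) / np.linalg.norm(X):.3f}")
```

Each subproblem is a small ridge-regularized solve, which is what makes the alternating scheme cheap and stable; in the paper the matrix factors are replaced by CGTN cores acting in the LT latent space.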
Q: A Review
Pub Date: 2024-07-26 | DOI: 10.1007/s10712-024-09850-y | Surveys in Geophysics 45(5): 1435–1458
José M. Carcione, Francesco Mainardi, Ayman N. Qadrouh, Mamdoh Alajmi, Jing Ba
The quality factor Q is a dimensionless measure of the energy loss per cycle of a wave field, and a proper understanding of this factor is important in a variety of fields, from seismology and geophysical prospecting to electrical science. Here, the focus is on viscoelasticity. When interpreting experimental values, several factors must be taken into account, in particular the shape of the medium (rods, bars or unbounded media) and the fact that the measurements are made on stationary or propagating modes. From a theoretical point of view, the expressions of Q may differ due to different definitions, the spatial dimension and the inhomogeneity of the wave, i.e. the fact that the vectors of propagation (or wavenumber) and attenuation do not point in the same direction. We show the difference between temporal and spatial Q, the relationships between compressional and shear Q, the dependence on frequency, the case of poro-viscoelasticity and anisotropy, the effect of inhomogeneous waves and various loss mechanisms, and consider the analogy between elastic and electromagnetic waves. We discuss physical theories describing relaxation peaks, bounds on Q and experiments showing the behaviour of Q as a function of frequency, saturation and pore pressure. Finally, we propose an application example where Q can be used to estimate porosity and saturation.
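As a concrete reference point for the definitions reviewed above, the snippet below evaluates the common definition Q = Re(M)/Im(M) for a standard linear solid (Zener) element; the modulus and relaxation times are illustrative values, not taken from the article.

```python
import numpy as np

# Minimal sketch: quality factor of a Zener (standard linear solid) element,
# using Q = Re(M) / Im(M), with M the complex modulus. Parameter values assumed.
M_R = 30e9        # relaxed modulus [Pa] (assumed)
tau_s = 1.0e-2    # stress relaxation time [s] (assumed)
tau_e = 1.1e-2    # strain relaxation time [s] (assumed)

omega = 2 * np.pi * np.logspace(-1, 3, 400)                     # angular frequency
M = M_R * (1 + 1j * omega * tau_e) / (1 + 1j * omega * tau_s)   # complex modulus
Q = M.real / M.imag                                             # temporal quality factor

f_peak = 1.0 / (2 * np.pi * np.sqrt(tau_s * tau_e))  # frequency of the relaxation peak
print(f"minimum Q = {Q.min():.1f} near f = {f_peak:.1f} Hz")
```

The minimum of Q marks the relaxation peak, one of the frequency-dependent behaviours the review discusses.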
Interpolated Fast Damped Multichannel Singular Spectrum Analysis for Deblending of Off-the-Grid Blended Data
Pub Date: 2024-07-09 | DOI: 10.1007/s10712-024-09835-x | Surveys in Geophysics 45(4): 1177–1204
Zhuowei Li, Jiawen Song, Rongzhi Lin, Benfeng Wang
Blended acquisition offers significant reductions in the cost and duration of seismic data acquisition. However, fired blended sources are usually deployed at off-the-grid (OffG) positions because of obstacles and economic considerations. The irregular distribution of source coordinates, along with the blending noise, has a detrimental effect on the performance of subsequent seismic processing and imaging. The interpolated multichannel singular spectrum analysis (I-MSSA) algorithm effectively provides on-the-grid deblended results by employing an interpolator in conjunction with a projected gradient descent strategy. However, the deblending accuracy and computational efficiency of the I-MSSA remain a concern due to the limitations of the traditional singular value decomposition (SVD). To address these limitations, we propose an interpolated fast damped multichannel singular spectrum analysis (I-FDMSSA) rank-reduction algorithm. The proposed algorithm incorporates a damping operator, the randomized SVD (RSVD) and a fast Fourier transform (FFT) strategy. The damping operator further attenuates the noise remaining in the estimated signal obtained from the truncated SVD, improving deblending performance. The RSVD accelerates the rank-reduction process by shrinking the size of the Hankel matrix. The FFT strategy expedites the rank-reduction and anti-diagonal averaging stages without explicitly constructing large-scale block Hankel matrices. By incorporating a 2D separable sinc interpolator, the I-FDMSSA enables efficient and accurate deblending of 3D OffG blended data. The deblending performance and efficiency improvements of the proposed I-FDMSSA algorithm over the traditional I-MSSA algorithm are demonstrated on OffG synthetic and field blended data examples.
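The full I-FDMSSA algorithm (interpolator, RSVD and FFT acceleration) is not reproduced here; the sketch below only illustrates the core rank-reduction step on a single-channel frequency slice: Hankel embedding, truncated SVD with a simple damping of the retained singular values, and anti-diagonal averaging. The damping form 1 - (s[K]/s[i])**N follows the damped rank-reduction idea but should be read as an assumption, not the paper's exact operator.

```python
import numpy as np

# Minimal single-channel sketch of damped rank reduction (not the authors' I-FDMSSA).

def hankel(x, L):
    """L x (len(x)-L+1) Hankel matrix built from the 1-D complex slice x."""
    K = len(x) - L + 1
    return np.array([x[i:i + K] for i in range(L)])

def dehankel(H):
    """Recover a 1-D series by averaging the anti-diagonals of H."""
    L, K = H.shape
    out = np.zeros(L + K - 1, dtype=H.dtype)
    cnt = np.zeros(L + K - 1)
    for i in range(L):
        for j in range(K):
            out[i + j] += H[i, j]
            cnt[i + j] += 1
    return out / cnt

def damped_rank_reduce(x, rank=1, N=4):
    H = hankel(x, L=len(x) // 2)
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    damp = 1.0 - (s[rank] / s[:rank]) ** N          # assumed damping of kept singular values
    H_lr = (U[:, :rank] * (s[:rank] * damp)) @ Vh[:rank]
    return dehankel(H_lr)

# toy usage: one noisy complex "frequency slice"
t = np.arange(64)
clean = np.exp(2j * np.pi * 0.05 * t)
noisy = clean + 0.3 * (np.random.randn(64) + 1j * np.random.randn(64))
denoised = damped_rank_reduce(noisy, rank=1)
print(f"noisy error {np.linalg.norm(noisy - clean):.2f} -> "
      f"denoised error {np.linalg.norm(denoised - clean):.2f}")
```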
Investigation of Fluid Types in Shale Oil Reservoirs
Pub Date: 2024-06-22 | DOI: 10.1007/s10712-024-09845-9 | Surveys in Geophysics 45(5): 1561–1594
Xiaojiao Pang, Guiwen Wang, Lichun Kuang, Jin Lai, Nigel P. Mountney
Lacustrine shale oil resources are essential for maintaining energy supply. Fluid types and contents play important roles in estimating resource potential and oil recovery from organic-rich shales. Precise identification of the fluid types hosted in shale oil reservoir successions that are characterized by marked lithological heterogeneity, using only a single well, is a significant challenge. Although previous research has proposed a large number of methods for determining both porosity and fluid saturation, many can only be applied in limited situations, and several have limited accuracy. In this study, an advanced logging technique, combinable magnetic resonance logging (CMR-NG), is used to evaluate fluid types. Two-dimensional nuclear magnetic resonance (2D-NMR) experiments on reservoir rocks subject to different conditions (as received, dried at 105 ℃, and kerosene-imbibed) were carried out to define the fluid types and classification criteria. Then, with the corresponding Rock–Eval pyrolysis parameters and mineral contents from X-ray diffraction, the contributions of organic matter and mineral compositions were investigated. Subsequently, the content of each fluid type is calculated from the CMR-NG (2D NMR) logs. According to the fluid classification criteria under experimental conditions and the production data, the most favorable model and optimal solution for logging evaluation were selected. Finally, fluid saturations of the Cretaceous Qingshankou Formation in the Gulong Sag were calculated for a single well. Results show that six fluid types (kerogen-bitumen-group OH, irreducible oil, movable oil, clay-bound water, irreducible water, and movable water) can be recognized through the applied 2D NMR tests. The kerogen-bitumen-group OH was mostly affected by pyrolysis hydrocarbon (S2) and irreducible oil by soluble hydrocarbon (S1). However, kerogen-bitumen-group OH and clay-bound water cannot be detected by CMR-NG due to the effects of underground environmental conditions on the instruments. Strata Q8–Q9 of the Qing 2 member of the Cretaceous Qingshankou Formation are the most favorable shale oil layers. This research provides insights into the factors controlling fluid types and contents and offers guidance for the exploration and development of unconventional resources, for example geothermal and carbon capture, utilization, and storage reservoirs.
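Once a 2D NMR (T1–T2) map and fluid-type windows are available, fluid contents follow by summing the porosity amplitude inside each window. The sketch below uses purely hypothetical window boundaries and a random stand-in map; the paper's experimentally derived classification criteria are not reproduced.

```python
import numpy as np

# Minimal sketch (hypothetical cutoffs, not the paper's criteria): partition a
# T1-T2 amplitude map into fluid-type windows and sum the amplitude per window.
t2 = np.logspace(-1, 3, 64)            # T2 axis [ms]
t1 = np.logspace(-1, 4, 64)            # T1 axis [ms]
amp = np.random.rand(64, 64) * 0.01    # stand-in for an inverted T1-T2 map [p.u.]

T1, T2 = np.meshgrid(t1, t2, indexing="ij")
ratio = T1 / T2

windows = {   # illustrative (assumed) windows in (T2 [ms], T1/T2) space
    "clay_bound_water": (T2 < 3) & (ratio < 10),
    "bitumen_like":     (T2 < 3) & (ratio >= 10),
    "movable_water":    (T2 >= 3) & (ratio < 3),
    "oil_like":         (T2 >= 3) & (ratio >= 3) & (ratio < 100),
}
for name, mask in windows.items():
    print(f"{name:18s} {amp[mask].sum():.3f} p.u.")
```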
High-Precision Microseismic Source Localization Using a Fusion Network Combining Convolutional Neural Network and Transformer
Pub Date: 2024-06-14 | DOI: 10.1007/s10712-024-09846-8 | Surveys in Geophysics 45(5): 1527–1560
Qiang Feng, Liguo Han, Liyun Ma, Qiang Li
Microseismic source localization methods based on deep learning can directly predict the source location from recorded microseismic data, showing remarkably high accuracy and efficiency. The two main categories of deep learning-based localization methods are coordinate prediction methods and heatmap prediction methods. Coordinate prediction methods provide only a source coordinate and generally do not provide a measure of confidence in the source location. Heatmap prediction methods require the assumption that the microseismic source is located on a grid point; thus, they tend to provide lower-resolution information, and the localization results may lose precision. This study reviews and compares previous deep learning-based source localization methods. To address the limitations of existing methods, we devise a network fusing a convolutional neural network and a Transformer to locate microseismic sources. We first introduce a multi-modal heatmap combining a Gaussian heatmap and an offset coefficient map to represent the source location. The offset coefficients are utilized to correct the source locations predicted by the Gaussian heatmap so that the source is no longer confined to a grid point. We then propose a fusion network to accurately estimate the source location. A gated multi-scale feature fusion module is developed to efficiently fuse features from different branches. Experiments on synthetic and field data demonstrate that the proposed method yields highly accurate localization results, and a comprehensive comparison with coordinate prediction and heatmap prediction methods shows that it outperforms both.
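The decoding step implied by the multi-modal heatmap, taking the Gaussian-heatmap argmax and correcting it with the predicted offset coefficients so the source is no longer tied to a grid node, can be sketched as follows; the network itself is omitted and the array and parameter names (grid_origin, spacing) are assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's network): decode a sub-grid source location
# from a Gaussian heatmap plus per-cell offset maps.

def decode_location(heatmap, offset_x, offset_z, grid_origin, spacing):
    """heatmap, offset_x, offset_z: 2-D arrays of identical shape (z, x).
    Offsets are in cells, roughly in [-0.5, 0.5]; returns (x, z) in physical units."""
    iz, ix = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    x = grid_origin[0] + (ix + offset_x[iz, ix]) * spacing[0]
    z = grid_origin[1] + (iz + offset_z[iz, ix]) * spacing[1]
    return x, z

# toy usage: true source lies between grid nodes
hm = np.zeros((50, 50)); hm[20, 30] = 1.0
offx = np.full_like(hm, 0.25); offz = np.full_like(hm, -0.10)
print(decode_location(hm, offx, offz, grid_origin=(0.0, 0.0), spacing=(10.0, 10.0)))
# -> (302.5, 199.0): the argmax node, shifted off the grid by the offsets
```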
Constructing Priors for Geophysical Inversions Constrained by Surface and Borehole Geochemistry
Pub Date: 2024-06-13 | DOI: 10.1007/s10712-024-09843-x | Surveys in Geophysics 45(4): 1047–1079
Xiaolong Wei, Zhen Yin, Celine Scheidt, Kris Darnell, Lijing Wang, Jef Caers
Prior model construction is a fundamental component in geophysical inversion, especially Bayesian inversion. The prior model, usually derived from available geological information, can reduce the uncertainty of model characteristics during the inversion. However, the prior geological data for inferring a prior distribution model are often limited in real cases. Our work presents a novel framework to create 3D geophysical prior models using soil geochemistry and borehole rock sample measurements. We focus on the Bayesian inversion, which enables encoding of knowledge and multiple non-geophysical data into the prior. The new framework developed in our research comprises three main parts, namely correlation analysis, prior model reconstruction, and Bayesian inversion. We investigate the correlations between surface and subsurface geochemical features, as well as the correlation between geochemistry and geophysics, using canonical correlation analysis for the surface and borehole geochemistry. Based on the resulting correlations, we construct the prior susceptibility model. The informed prior model is then tested using geophysical forward modeling and outlier detection methods. In this test, we aim to falsify the prior model, which happens when the model cannot predict the field geophysical observation. To obtain the posterior models, the reliable prior models are incorporated into a Bayesian inversion framework. Using a real case of exploration in the Central African Copperbelt, we illustrate the workflow of constructing the high-resolution 3D stratigraphic model conditioned on soil geochemistry, borehole data, and airborne geophysics.
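The first stage of the workflow, canonical correlation analysis between surface soil geochemistry and borehole geochemistry, can be sketched with scikit-learn; the feature sets and data below are synthetic stand-ins, not the Copperbelt data.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Minimal CCA sketch (synthetic stand-in data, assumed feature counts).
rng = np.random.default_rng(0)
n = 200
surface = rng.normal(size=(n, 6))                                  # soil geochemistry features
borehole = 0.6 * surface[:, :4] + 0.4 * rng.normal(size=(n, 4))    # co-located borehole assays

cca = CCA(n_components=2)
U, V = cca.fit_transform(surface, borehole)   # canonical variates of each data set
for k in range(2):
    r = np.corrcoef(U[:, k], V[:, k])[0, 1]
    print(f"canonical correlation {k + 1}: {r:.2f}")
```

Strong canonical correlations are what justify propagating the surface geochemistry into the subsurface prior before it is tested by forward modeling.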
Stress-Dependent PP-Wave Reflection Coefficient for Fourier-Coefficients-Based Seismic Inversion in Horizontally Stressed Vertical Transversely Isotropic Media
Pub Date: 2024-06-13 | DOI: 10.1007/s10712-024-09841-z | Surveys in Geophysics 45(4): 1143–1176
Xinpeng Pan, Jianxin Liu
The subsurface in situ stress fields significantly influence the elastic and anisotropic properties of rocks, yet traditional linear elastic theories often overlook the impact of stress on seismic response characteristics. Nonlinear acoustoelastic theory integrates third-order elastic constants (TOECs) to elucidate the influence of stress on the elastic and anisotropic properties of stressed rocks. After reviewing recent investigations of nonlinear acoustoelastic phenomena, we introduce a stress-dependent equation for the PP-wave reflection coefficient. This equation delineates the dependence of the azimuthal seismic response on horizontal uniaxial stress in inherently vertical transversely isotropic (VTI) media, or in VTI formations induced by a single set of horizontally aligned fractures. Emphasis is placed on delineating stress-induced anisotropy and elucidating azimuthal PP-wave reflection characteristics in horizontally uniaxially stressed VTI media. Additionally, the discussion extends to the more intricate cases of horizontally biaxially and triaxially stressed VTI media, as described by nonlinear acoustoelastic theory. Subsequently, the reflection coefficient of horizontally uniaxially stressed VTI media is expressed in terms of azimuthal Fourier coefficients (FCs), revealing that the unstressed VTI background exhibits heightened sensitivity to the zeroth-order FC, while the stress-induced anisotropy shows greater sensitivity to the second-order FC. Through the application of an azimuthal FCs-based amplitude versus offset and azimuth (AVOAz) inversion method to synthetic and field datasets, the proposed model and approach offer promising avenues for reservoir characterization in VTI media subject to horizontal uniaxial stress.
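The azimuthal Fourier-coefficient parameterization on which the inversion rests can be illustrated independently of the stress-dependent reflection-coefficient equation itself: given reflectivities at several azimuths, the FCs follow from a linear least-squares fit. The sketch below uses a generic truncated expansion and assumed coefficient values, not the paper's equation.

```python
import numpy as np

# Minimal sketch: estimate azimuthal Fourier coefficients of a PP reflectivity
#   R(phi) ~ r0 + r2c*cos(2*phi) + r2s*sin(2*phi) + r4c*cos(4*phi) + r4s*sin(4*phi)
# from samples at several azimuths, by linear least squares. Coefficients assumed.
phi = np.deg2rad(np.arange(0, 360, 30))                 # acquisition azimuths
true = np.array([0.10, 0.03, -0.01, 0.005, 0.0])        # assumed FC values
A = np.column_stack([np.ones_like(phi),
                     np.cos(2 * phi), np.sin(2 * phi),
                     np.cos(4 * phi), np.sin(4 * phi)])
R = A @ true + 0.002 * np.random.randn(phi.size)        # noisy azimuthal gather
est, *_ = np.linalg.lstsq(A, R, rcond=None)
print(np.round(est, 3))   # zeroth-order FC ~ background; second-order ~ azimuthal anisotropy
```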
Reflecting on the Science of Climate Tipping Points to Inform and Assist Policy Making and Address the Risks they Pose to Society
Pub Date: 2024-06-04 | DOI: 10.1007/s10712-024-09844-w | Surveys in Geophysics
T. F. Stocker, R. G. Jones, M. I. Hegglin, T. M. Lenton, G. C. Hegerl, S. I. Seneviratne, N. van der Wel, R. A. Wood
Perceptions of climate tipping points, abrupt changes and surprises diverge between the scientific community and the public. While such dynamics have been observed in the past, e.g., frequent reductions of the Atlantic meridional overturning circulation during the last ice age, or ice sheet collapses, tipping points might also be a possibility in an anthropogenically perturbed climate. In this context, high-impact, low-likelihood events, both in the physical realm and in ecosystems, are potentially dangerous. Here we argue that a formalized assessment of the state of the science is needed in order to establish a consensus on this issue and to reconcile diverging views. This has been the approach taken by the Intergovernmental Panel on Climate Change (IPCC). Since 1990, the IPCC has consistently generated robust consensus on several complex issues, ranging from the detection and attribution of climate change, the global carbon budget and climate sensitivity, to the projection of extreme events and their impacts. Here, we suggest that a scientific assessment of tipping points, conducted collaboratively by the IPCC and the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services, would represent an ambitious yet necessary goal to be accomplished within the next decade.
Tropical Deep Convection, Cloud Feedbacks and Climate Sensitivity
Pub Date: 2024-05-31 | DOI: 10.1007/s10712-024-09831-1 | Surveys in Geophysics
Graeme L. Stephens, Kathleen A. Shiro, Maria Z. Hakuba, Hanii Takahashi, Juliet A. Pilewskie, Timothy Andrews, Claudia J. Stubenrauch, Longtao Wu
This paper is concerned with how the diabatically-forced overturning circulations of the atmosphere, established by the deep convection within the tropical trough zone (TTZ), first introduced by Riehl and (Malkus) Simpson, in Contr Atmos Phys 52:287–305 (1979), fundamentally shape the distributions of tropical and subtropical cloudiness and the changes to cloudiness as Earth warms. The study first draws on an analysis of a range of observations to understand the connections between the energetics of the TTZ, convection and clouds. These observations reveal a tight coupling of the two main components of the diabatic heating, the cloud component of radiative heating, shaped mostly by high clouds formed by deep convection, and the latent heating associated with the precipitation. Interannual variability of the TTZ reveals a marked variation that connects the depth of the tropical troposphere, the depth of convection, the thickness of high clouds and the TOA radiative imbalance. The study examines connections between this convective zone and cloud changes further afield in the context of CMIP6 model experiments of climate warming. The warming realized in the CMIP6 SSP5-8.5 scenario multi-model experiments, for example, produces an enhanced Hadley circulation with increased heating in the zone of tropical deep convection and increased radiative cooling and subsidence in the subtropical regions. This impacts low cloud changes and in turn the model warming response through low cloud feedbacks. The pattern of warming produced by models, also influenced by convection in the tropical region, has a profound influence on the projected global warming.
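For reference, the "cloud component of radiative heating" discussed above is commonly quantified at the top of the atmosphere as the cloud radiative effect (CRE), the difference between clear-sky and all-sky fluxes; the sketch below evaluates it with illustrative global-mean numbers that are assumptions, not values from the study.

```python
# Minimal sketch (standard definitions, illustrative numbers): TOA cloud radiative effect.
olr_all, olr_clr = 240.0, 266.0          # outgoing longwave, all-sky / clear-sky [W m-2]
asr_all, asr_clr = 241.0, 287.0          # absorbed shortwave, all-sky / clear-sky [W m-2]

cre_lw = olr_clr - olr_all               # positive: clouds trap longwave radiation
cre_sw = asr_all - asr_clr               # negative: clouds reflect shortwave radiation
cre_net = cre_lw + cre_sw                # net cloud radiative effect at TOA
print(f"CRE_LW = {cre_lw:+.0f}, CRE_SW = {cre_sw:+.0f}, CRE_net = {cre_net:+.0f} W m-2")
```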
Near-Surface Rayleigh Wave Dispersion Curve Inversion Algorithms: A Comprehensive Comparison
Pub Date: 2024-05-21 | DOI: 10.1007/s10712-024-09826-y | Surveys in Geophysics 45(3): 773–818
Xiao-Hui Yang, Yuanyuan Zhou, Peng Han, Xuping Feng, Xiaofei Chen
Rayleigh wave exploration is a powerful method for estimating near-surface shear-wave (S-wave) velocities, providing valuable insights into the stiffness properties of subsurface materials inside the Earth. Rayleigh wave dispersion curve inversion is the optimization process of searching for the optimal earth model parameters that reproduce the measured dispersion curves. At present, diversified inversion algorithms have been introduced into Rayleigh wave inversion, yet limited studies have been conducted to uncover the variations in inversion performance among commonly used algorithms. To obtain a comprehensive understanding of their optimization performance, we systematically investigate and quantitatively assess the inversion performance of two bionic algorithms, two probabilistic algorithms, a gradient-based algorithm, and two neural network algorithms. The evaluation indices include computational cost, accuracy, stability, generalization ability, noise effects, and field data processing capability. It is found that the bound-constrained limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS-B) algorithm and the broad learning (BL) network have the lowest computational cost among the candidate algorithms. Furthermore, the transitional Markov chain Monte Carlo algorithm, the deep learning (DL) network, and the BL network outperform the other four algorithms regarding accuracy, stability, resistance to noise, and capability to process field data. The DL and BL networks demonstrate the highest level of generalization compared to the other algorithms. The comparison reveals how the candidate algorithms differ on the inversion task, providing a clear understanding of their inversion performance. This study can promote S-wave velocity estimation by Rayleigh wave inversion.
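One of the benchmarked optimizers, bound-constrained L-BFGS-B, is available in SciPy; the sketch below fits a three-layer S-wave velocity model to a synthetic dispersion curve. The forward_dispersion function is a crude smooth proxy standing in for a real Rayleigh-wave forward solver, and all model values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal L-BFGS-B sketch (not the paper's benchmark): bound-constrained fit of
# layer S-wave velocities to a measured dispersion curve.
freqs = np.linspace(5, 50, 20)   # Hz

def forward_dispersion(vs, freqs):
    """Crude smooth proxy mapping layer Vs [m/s] to Rayleigh phase velocities."""
    z_mid = np.array([1.0, 4.0, 12.0])            # representative layer depths [m] (assumed)
    w = np.exp(-np.outer(freqs, z_mid) / 200.0)   # deep layers weigh more at low frequency
    return 0.92 * (w @ vs) / w.sum(axis=1)

c_obs = forward_dispersion(np.array([180.0, 300.0, 520.0]), freqs)   # synthetic "data"

def misfit(vs):
    return float(np.sum((forward_dispersion(vs, freqs) - c_obs) ** 2))

res = minimize(misfit, x0=np.array([150.0, 250.0, 400.0]), method="L-BFGS-B",
               bounds=[(100, 400), (150, 600), (200, 900)])
print(np.round(res.x, 1))   # recovered layer Vs [m/s]; should land near [180, 300, 520]
```

In the paper's comparison the bounds play the same role as here: they keep the gradient-based search inside a geologically plausible velocity range.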