Pub Date : 2024-06-13 | DOI: 10.1007/s10596-024-10296-9
Yujie Chen, K. Ling, Xiaoyu Zhang, Yue Xiang, Dongliang Sun, Bo Yu, Wei Zhang, Wen Tao
Title: Application of IDEAL algorithm based on the collocated unstructured grid for incompressible flows
Pub Date : 2024-05-28 | DOI: 10.1007/s10596-024-10289-8
Trond Mannseth
I consider the problem of model diagnostics, that is, the problem of criticizing a model prior to history matching by comparing data to an ensemble of simulated data based on the prior model (prior predictions). If the data are not deemed a credible prior prediction by the model diagnostics, some settings of the model should be changed before history matching is attempted. I particularly target methodologies that are computationally feasible for large models with large amounts of data. A multiscale methodology that can be applied to analyze differences between data and prior predictions in a scale-by-scale fashion is proposed for this purpose. The methodology is computationally inexpensive, straightforward to apply, and can handle correlated observation errors without making approximations. The multiscale methodology is tested on a set of toy models, on two simplistic reservoir models with synthetic data, and on real data and prior predictions from the Norne field. The tests include comparisons with a previously published method (termed the Mahalanobis methodology in this paper). For the Norne case, both methodologies led to the same decisions regarding whether to accept or discard the data as a credible prior prediction. The multiscale methodology led to correct decisions for the toy models and the simplistic reservoir models. For these models, the Mahalanobis methodology led to incorrect decisions, was unstable with respect to the selection of the ensemble of prior predictions, or both.
Title: Multiscale model diagnostics
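The Mahalanobis methodology that the paper uses as a comparison baseline can be sketched as a simple prior-predictive check: compute the squared Mahalanobis distance of the observed data to the ensemble of prior predictions and compare it against a chi-square quantile. The sketch below is a generic illustration of that baseline idea, not the multiscale methodology itself and not the paper's exact formulation; the function name, regularization, and thresholding rule are assumptions.

```python
import numpy as np
from scipy import stats

def mahalanobis_diagnostic(data, prior_predictions, alpha=0.05):
    """Prior-predictive check: flag the data as implausible when its squared
    Mahalanobis distance to the ensemble mean exceeds a chi-square quantile."""
    mean = prior_predictions.mean(axis=0)
    cov = np.cov(prior_predictions, rowvar=False)
    cov += 1e-8 * np.eye(cov.shape[0])   # guard against a rank-deficient ensemble
    diff = data - mean
    d2 = diff @ np.linalg.solve(cov, diff)
    threshold = stats.chi2.ppf(1 - alpha, df=len(data))
    return d2, bool(d2 <= threshold)

rng = np.random.default_rng(0)
ensemble = rng.normal(0.0, 1.0, size=(500, 4))   # toy prior predictions (N x d)
_, accept_plausible = mahalanobis_diagnostic(np.zeros(4), ensemble)
_, accept_outlier = mahalanobis_diagnostic(np.full(4, 10.0), ensemble)
```

Note that such a check relies on the ensemble covariance being well estimated, which is exactly where instability with respect to ensemble selection can enter.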
Pub Date : 2024-05-18 | DOI: 10.1007/s10596-024-10294-x
Bo Zhang, Weidong Li, Chuanrong Zhang
Markov chain geostatistics is a methodology for simulating categorical fields. Its fundamental model for conditional simulation is the Markov chain random field (MCRF) model, with the transiogram serving as its basic spatial correlation measure. There are different methods to obtain transiogram models for MCRF simulation based on sample data and expert knowledge: linear interpolation, mathematical model joint-fitting, and a mixed approach combining both. This study aims to explore the sensitivity of the MCRF model to different transiogram joint modeling methods. Two case studies were conducted to examine how simulated results, including optimal prediction maps and simulated realization maps, vary with different sets of transiogram models. The results indicate that all three transiogram joint modeling methods are applicable, and the MCRF model is generally insensitive to transiogram models produced by different methods, particularly when sample data are sufficient to generate reliable experimental transiograms. The variations in overall simulation accuracies based on different sets of transiogram models are not significant. However, notable improvements in simulation accuracy for minor classes were observed when theoretical transiogram models (generated by mathematical model fitting with expert knowledge) were utilized. This study suggests that methods for deriving transiogram models from experimental transiograms perform well in conditional simulations of categorical soil variables when meaningful experimental transiograms can be estimated. Employing mathematical models for transiogram modeling of minor classes provides a way to incorporate expert knowledge and improve the simulation accuracy of minor classes.
Title: Sensitivity analysis of the MCRF model to different transiogram joint modeling methods for simulating categorical spatial variables
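A transiogram is essentially a transition probability expressed as a function of lag. The minimal sketch below estimates an experimental cross-transiogram along a 1-D categorical sequence; the function names and the synthetic Markov-chain data are illustrative assumptions, not taken from the paper, and real applications would use spatial sample data rather than a simulated sequence.

```python
import numpy as np

def experimental_transiogram(seq, i, j, max_lag):
    """Experimental transiogram p_ij(h): relative frequency of class j at
    lag h, given class i at the origin, along a 1-D categorical sequence."""
    seq = np.asarray(seq)
    probs = []
    for h in range(1, max_lag + 1):
        heads, tails = seq[:-h], seq[h:]
        at_i = heads == i
        probs.append((tails[at_i] == j).mean() if at_i.sum() else np.nan)
    return np.array(probs)

# Two-class Markov chain: stay with probability 0.9, switch with 0.1, so the
# theoretical cross-transiogram is p_01(h) = 0.5 * (1 - 0.8**h).
rng = np.random.default_rng(1)
seq = [0]
for _ in range(20000):
    seq.append(seq[-1] if rng.random() < 0.9 else 1 - seq[-1])
t01 = experimental_transiogram(seq, 0, 1, max_lag=5)
```

Fitting a mathematical model (e.g., exponential) through such experimental points, or interpolating them linearly, corresponds to the joint modeling alternatives the study compares.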
Pub Date : 2024-05-13 | DOI: 10.1007/s10596-024-10293-y
Wansheng Gao, Insa Neuweiler, Thomas Wick
In this work, various high-accuracy numerical schemes for transport problems in fractured media are further developed and compared. Specifically, schemes with a low order of accuracy are not always sufficient to capture sharp gradients and abrupt changes in time. To this end, discontinuous Galerkin methods up to order two, Streamline Upwind Petrov-Galerkin, and finite differences are formulated. The resulting schemes are solved with sparse direct numerical solvers. Moreover, time discontinuous Galerkin methods of order one and two are solved monolithically and in a decoupled fashion, respectively, employing finite elements in space on locally refined meshes. Our algorithmic developments are substantiated with one regular fracture network and several further configurations in fractured media with large parameter contrasts on small length scales. Therein, the evaluation of the numerical schemes and implementations focuses on three key aspects, namely accuracy, monotonicity, and computational cost.
Title: A comparison study of spatial and temporal schemes for flow and transport problems in fractured media with large parameter contrasts on small length scales
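The accuracy/monotonicity trade-off targeted by the evaluation shows up even in the simplest 1-D setting: a monotone first-order upwind discretization of linear advection preserves the bounds of a step profile, while a formally more accurate central discretization (with forward Euler in time) oscillates and blows up at the sharp front. This toy sketch is illustrative only and far simpler than the DG/SUPG schemes studied in the paper.

```python
import numpy as np

# Advect a step profile for u_t + a*u_x = 0 on a periodic grid.
n, a, cfl = 200, 1.0, 0.5
dx = 1.0 / n
dt = cfl * dx / a
u0 = np.where(np.arange(n) * dx < 0.3, 1.0, 0.0)

up, ce = u0.copy(), u0.copy()
for _ in range(100):
    # first-order upwind: monotone but diffusive
    up = up - a * dt / dx * (up - np.roll(up, 1))
    # second-order central + forward Euler: unstable at the sharp front
    ce = ce - a * dt / (2 * dx) * (np.roll(ce, -1) - np.roll(ce, 1))
```

After 100 steps the upwind solution stays within the initial bounds [0, 1], while the central solution develops large spurious extrema, which is why higher-order schemes need stabilization (e.g., SUPG) or limiting near discontinuities.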
Pub Date : 2024-05-09 | DOI: 10.1007/s10596-024-10288-9
Samah El Mohtar, Olivier Le Maître, Omar Knio, Ibrahim Hoteit
Identifying the source of an oil spill is an essential step in environmental forensics. The Bayesian approach makes it possible to estimate the source parameters of an oil spill from available observations. Sampling the posterior distribution, however, can be computationally prohibitive unless the forward model is replaced by an inexpensive surrogate. Yet the construction of globally accurate surrogates can be challenging when the forward model exhibits strong nonlinear variations. We present an iterative data-driven algorithm for the construction of polynomial chaos surrogates whose accuracy is localized in regions of high posterior probability. Two synthetic oil spill experiments, in which the construction of prior-based surrogates is not feasible, are conducted to assess the performance of the proposed algorithm in estimating five source parameters. The algorithm provided a good approximation of the posterior distribution and accelerated the estimation of the oil spill source parameters and their uncertainties by a factor of roughly 100.
Title: Iterative data-driven construction of surrogates for an efficient Bayesian identification of oil spill source parameters from image contours
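The core idea, an inexpensive surrogate inside a Bayesian sampler that is refitted where the posterior concentrates, can be sketched in one dimension. The code below uses a simple polynomial least-squares fit and a random-walk Metropolis sampler with two surrogate stages; the forward model, polynomial degree, and refit rule are all illustrative assumptions, not the paper's polynomial chaos construction.

```python
import numpy as np

rng = np.random.default_rng(2)

def forward_model(x):                     # stand-in for an expensive simulator
    return x**3 + x

y_obs, sigma = forward_model(0.4), 0.05   # synthetic observation at x = 0.4

def metropolis(surrogate, n=15000, burn=5000):
    """Random-walk Metropolis using the cheap surrogate in the likelihood."""
    def log_post(x):                      # flat prior on [-1, 1]
        return -0.5 * ((surrogate(x) - y_obs) / sigma) ** 2
    x, chain = 0.0, []
    for _ in range(n):
        prop = x + 0.1 * rng.standard_normal()
        if abs(prop) <= 1.0 and np.log(rng.random()) < log_post(prop) - log_post(x):
            x = prop
        chain.append(x)
    return np.array(chain[burn:])

# Stage 1: surrogate fitted over the whole prior range [-1, 1].
xs = np.linspace(-1.0, 1.0, 7)
chain = metropolis(np.poly1d(np.polyfit(xs, forward_model(xs), 3)))

# Stage 2: refit the surrogate on the high-posterior region found so far,
# a (very loose) analogue of localizing surrogate accuracy.
lo, hi = np.percentile(chain, [1, 99])
xs = np.linspace(lo, hi, 7)
chain = metropolis(np.poly1d(np.polyfit(xs, forward_model(xs), 3)))
post_mean = chain.mean()
```

Every posterior evaluation calls only the cheap polynomial, which is where the reported speed-up comes from; the expensive model is run only at the few fitting points.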
Pub Date : 2024-05-07 | DOI: 10.1007/s10596-024-10290-1
Sui Bun Lo, Oubay Hassan, Jason Jones, Xiaolong Liu, Nevan C Himmelberg, Dean Thornton
This work proposes a novel meshing technique that is able to extract surfaces from processed seismic data and integrate surfaces that were constructed using other extraction techniques. Contrary to other existing methods, the process is fully automated and does not require any user intervention. The proposed system includes an approach for closing the gaps that arise from the different techniques used for surface extraction. The developed process is able to handle non-manifold domains that result from multiple surface intersections. Surface and volume meshing that comply with user-specified mesh controls are implemented to ensure the desired mesh quality. The integrated procedures provide a unique facility to handle geotechnical models and accelerate the generation of quality meshes for geophysics modelling. The developed procedure reduces the time required to create meshes for complex reservoir models from weeks to a few hours. Various industrial examples demonstrate the practical use of the developed approach on real-life data.
Title: Automation of the meshing process of geological data
Pub Date : 2024-05-01 | DOI: 10.1007/s10596-024-10292-z
Matthias A. Cremon, Jacques Franc, François P. Hamon
This work studies the performance of a novel preconditioner, designed for thermal reservoir simulation cases and recently introduced in Roy et al. (SIAM J. Sci. Comput. 42, 2020) and Cremon et al. (J. Comput. Phys. 418C, 2020), on large-scale thermal CO₂ injection cases. For Carbon Capture and Sequestration (CCS) projects, CO₂ injected under supercritical conditions is typically tens of degrees colder than the reservoir temperature. Thermal effects can have a significant impact on the simulation results, but they also add many challenges for the solvers. More specifically, the usual combination of an iterative linear solver (such as GMRES) and the Constrained Pressure Residual (CPR) physics-based block preconditioner is known to perform rather poorly or fail to converge when thermal effects play a significant role. The Constrained Pressure-Temperature Residual (CPTR) preconditioner retains the 2×2 block structure (elliptic/hyperbolic) of CPR but includes the temperature in the elliptic subsystem. Doing so allows the solver to appropriately handle the long-range, elliptic part of the parabolic energy equation. The elliptic subsystem, now formed by two equations, is dealt with by the system solver of BoomerAMG (from the HYPRE library). Then a global smoother, ILU(0), is applied to the full system to handle the local, hyperbolic temperature fronts. We implemented CPTR in the multi-physics solver GEOS and present results on various large-scale thermal CCS simulation cases, including both Cartesian and fully unstructured meshes, up to tens of millions of degrees of freedom. The CPTR preconditioner severely reduces the number of GMRES iterations and the runtime: cases that timed out at 24 h with CPR now require only a few hours with CPTR. We present strong scaling results using hundreds of CPU cores for multiple cases, and show close to linear scaling. CPTR is also virtually insensitive to the thermal Péclet number (which compares advection and diffusion effects) and is suitable for any thermal regime.
Title: Constrained pressure-temperature residual (CPTR) preconditioner performance for large-scale thermal CO₂ injection simulation
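The two-stage structure of CPR-type preconditioners can be sketched on a toy coupled system: first solve the elliptic (pressure) subsystem, then apply a global incomplete-factorization smoother, and hand the combination to GMRES. The blocks, coupling strength, and solver stand-ins below (a direct solve in place of BoomerAMG, SciPy's incomplete LU in place of ILU(0)) are illustrative assumptions, not the GEOS implementation.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy coupled system: an elliptic "pressure" block and an upwind "transport"
# block with weak coupling, loosely mimicking the CPR setting.
n = 50
lap = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))   # elliptic block
adv = sp.diags([1.0, -1.0], [0, -1], shape=(n, n))            # upwind transport
coup = 0.01 * sp.eye(n)
A = sp.bmat([[lap, coup], [coup, adv]], format="csc")
b = np.ones(2 * n)

# Stage 1 operator: exact solve of the elliptic subsystem (AMG stand-in).
p_solve = spla.factorized(sp.csc_matrix(lap))
# Stage 2 operator: incomplete LU of the full system (ILU(0) stand-in).
ilu = spla.spilu(A)

def apply_prec(r):
    x = np.zeros_like(r)
    x[:n] = p_solve(r[:n])           # stage 1: pressure correction
    x += ilu.solve(r - A @ x)        # stage 2: global smoothing
    return x

M = spla.LinearOperator(A.shape, apply_prec)
x, info = spla.gmres(A, b, M=M, atol=1e-10)
residual = np.linalg.norm(A @ x - b)
```

CPTR differs from this CPR sketch in that the stage-1 subsystem would contain both the pressure and temperature equations, so the elliptic part of the energy equation is treated by the scalable multilevel solve rather than by the local smoother.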
Pub Date : 2024-04-23 | DOI: 10.1007/s10596-024-10286-x
Elfitra Desifatma, I. Djaja, P. M. Pratomo, Supriyadi, E. Mustopa, M. Evita, M. Djamal, Wahyu Srigutomo
Title: Robust inversion of 1D magnetotelluric data using the Huber loss function
We study linear models for the prediction of the initial guess for the nonlinear Newton-Raphson solver. These models use one or more of the previous simulation steps for prediction, and their parameters are estimated by the ordinary least-squares method. A key feature of the approach is that the parameter estimation is performed using data obtained directly during the simulation, and the models are updated in real time. Thus we avoid the expensive process of dataset generation and the need for pre-trained models. We validate the workflow on the standard benchmark Egg dataset of two-phase flow in porous media and compare it to standard approaches for the estimation of the initial guess. We demonstrate that the proposed approach reduces the number of iterations in the Newton-Raphson algorithm and speeds up the simulation. In particular, for the Egg dataset, we obtained a 30% reduction in the number of nonlinear iterations and a 20% reduction in the simulation time.
Pub Date : 2024-04-09 | DOI: 10.1007/s10596-024-10284-z
Musheg Petrosyants, Vladislav Trifonov, Egor Illarionov, Dmitry Koroteev
Title: Speeding up the reservoir simulation by real time prediction of the initial guess for the Newton-Raphson’s iterations
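The idea of predicting the Newton initial guess from previous steps can be illustrated on a scalar implicit Euler problem: fit an ordinary-least-squares model on a sliding window of past states, updated in real time, and compare total Newton iterations against simply reusing the previous state. The ODE, window size, and function names are illustrative assumptions; the paper itself works with the Egg two-phase flow benchmark.

```python
import numpy as np

# Implicit Euler for dy/dt = -y**3 needs one Newton solve per time step.
dt = 0.1

def newton_step(y_prev, guess, tol=1e-12):
    """Solve y - y_prev + dt*y**3 = 0 by Newton; return root and iterations."""
    y, it = guess, 0
    while abs(y - y_prev + dt * y**3) > tol and it < 50:
        y -= (y - y_prev + dt * y**3) / (1.0 + 3.0 * dt * y**2)
        it += 1
    return y, it

def simulate(predict, n_steps=50):
    ys, total_iters = [2.0], 0
    for _ in range(n_steps):
        y, it = newton_step(ys[-1], predict(ys))
        ys.append(y)
        total_iters += it
    return total_iters

def ols_predict(ys, window=5):
    """Fit y_k ≈ a*y_{k-1} + b on a sliding window of the recent history."""
    if len(ys) < 3:
        return ys[-1]
    h = np.array(ys[-(window + 1):])
    X = np.vstack([h[:-1], np.ones(len(h) - 1)]).T
    a, b = np.linalg.lstsq(X, h[1:], rcond=None)[0]
    return a * ys[-1] + b

naive_iters = simulate(lambda ys: ys[-1])   # reuse the previous state
ols_iters = simulate(ols_predict)           # least-squares extrapolation
```

Because Newton converges quadratically, even a modestly better initial guess removes one or two iterations per step, which is the mechanism behind the reported 30% iteration reduction.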