Tight sandstone reservoirs have low porosity, low permeability, and complex pore structures. The seepage capacity of tight sandstones is a key factor in evaluating oil and gas accumulation in these reservoirs, so reservoir permeability prediction has become a focus of research. Using nuclear magnetic resonance (NMR), high-pressure mercury injection, scanning electron microscopy, and other experimental methods, scholars have established various permeability prediction models, each with clear advantages and disadvantages. However, little research has addressed predicting the permeability of tight sandstone reservoirs from their single-peak NMR T2 distributions. Based on NMR experiments and the bimodal Gaussian density formula, this study identified criteria for determining the pore-structure types of reservoirs with single-peak NMR T2 distributions and established two parameters (η1 and η2) for evaluating reservoir pore structure. A novel model for predicting the permeability of tight sandstone reservoirs was built from η1 and η2. In the studied case of the Huangliu Formation, the predictions of the proposed model were superior to those of eight permeability prediction models established by other scholars. However, permeability prediction models built from NMR experimental results of different origins were found to be ineffective. Additionally, the new model is applicable to sandstone reservoirs with both single-peak and double-peak NMR T2 distributions in the studied case of the Yanchang Formation. Logging curves can be used to predict η1 and η2, and hence the permeability of a single well in a tight sandstone reservoir. These findings should be useful for predicting tight sandstone reservoir permeability.
{"title":"A Permeability Prediction Model of Single-Peak NMR T2 Distribution in Tight Sandstones: A Case Study on the Huangliu Formation, Yinggehai Basin, China","authors":"Jing Zhao, Zhilong Huang, Jin Dong, Jingyuan Zhang, Rui Wang, Chonglin Ma, Guangjun Deng, Maguang Xu","doi":"10.1007/s11004-023-10118-1","DOIUrl":"https://doi.org/10.1007/s11004-023-10118-1","url":null,"abstract":"<p>Tight sandstone reservoirs have low porosity, low permeability, and a complex pore structure. The seepage from tight sandstones is a key factor in evaluating the oil and gas accumulation in these reservoirs. Therefore, reservoir permeability prediction has become the focus of researchers. Using nuclear magnetic resonance (NMR), high-pressure mercury injection, scanning electron microscopy, and other experimental methods, scholars have established various permeability prediction models, which have obvious advantages and disadvantages. However, there is less research conducted on predicting the permeability of tight sandstone reservoirs according to their single-peak NMR <i>T</i><sub>2</sub> distribution. Based on NMR experiments and the bimodal Gaussian density formula, this study identified the criteria for determining the types of reservoir pore structures with single-peak NMR <i>T</i><sub>2</sub> distribution and established the parameters (<i>η</i><sub>1</sub> and <i>η</i><sub>2</sub>) that can be used in the evaluation of reservoir pore structure. A novel model for predicting the permeability of tight sandstone reservoirs was established using <i>η</i><sub>1</sub> and <i>η</i><sub>2</sub>. The results of the prediction model proposed in this study were found to be superior to the results of eight permeability prediction models established by other scholars in the studied case of the Huangliu Formation. However, permeability prediction models established using the NMR experimental results of different sources were found to be ineffective. Additionally, the new model is suitable for use with sandstone reservoirs with both single-peak and double-peak NMR <i>T</i><sub>2</sub> distributions in the studied case of the Yanchang Formation. Logging curves can be used to predict <i>η</i><sub>1</sub> and <i>η</i><sub>2</sub>, and the permeability of a single well of a tight sandstone reservoir. The study findings would be useful for predicting tight sandstone reservoir permeability.</p>","PeriodicalId":51117,"journal":{"name":"Mathematical Geosciences","volume":"79 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139082663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Uncertainty Quantification in Geostatistical Modelling of Saltwater Intrusion at a Coastal Aquifer System
Pub Date: 2024-01-02 | DOI: 10.1007/s11004-023-10120-7
João Lino Pereira, Emmanouil A. Varouchakis, George P. Karatzas, Leonardo Azevedo
Groundwater resources in Mediterranean coastal aquifers are under several threats including saltwater intrusion. This situation is exacerbated by the absence of sustainable management plans for groundwater resources. Management and monitoring of groundwater systems require an integrated approach and the joint interpretation of any available information. This work investigates how uncertainty can be integrated within the geo-modelling workflow when creating numerical three-dimensional aquifer models with electrical resistivity borehole logs, geostatistical simulation and Bayesian model averaging. Multiple geological scenarios of electrical resistivity are created with geostatistical simulation by removing one borehole at a time from the set of available boreholes. To account for the spatial uncertainty simultaneously reflected by the multiple geostatistical scenarios, Bayesian model averaging is used to combine the probability distribution functions of each scenario into a global one, thus providing more credible uncertainty intervals. The proposed methodology is applied to a water-stressed groundwater system located in Crete that is threatened by saltwater intrusion. The results obtained agree with the general knowledge of this complex environment and enable sustainable groundwater management policies to be devised considering optimistic and pessimistic scenarios.
{"title":"Uncertainty Quantification in Geostatistical Modelling of Saltwater Intrusion at a Coastal Aquifer System","authors":"João Lino Pereira, Emmanouil A. Varouchakis, George P. Karatzas, Leonardo Azevedo","doi":"10.1007/s11004-023-10120-7","DOIUrl":"https://doi.org/10.1007/s11004-023-10120-7","url":null,"abstract":"<p>Groundwater resources in Mediterranean coastal aquifers are under several threats including saltwater intrusion. This situation is exacerbated by the absence of sustainable management plans for groundwater resources. Management and monitoring of groundwater systems require an integrated approach and the joint interpretation of any available information. This work investigates how uncertainty can be integrated within the geo-modelling workflow when creating numerical three-dimensional aquifer models with electrical resistivity borehole logs, geostatistical simulation and Bayesian model averaging. Multiple geological scenarios of electrical resistivity are created with geostatistical simulation by removing one borehole at a time from the set of available boreholes. To account for the spatial uncertainty simultaneously reflected by the multiple geostatistical scenarios, Bayesian model averaging is used to combine the probability distribution functions of each scenario into a global one, thus providing more credible uncertainty intervals. The proposed methodology is applied to a water-stressed groundwater system located in Crete that is threatened by saltwater intrusion. The results obtained agree with the general knowledge of this complex environment and enable sustainable groundwater management policies to be devised considering optimistic and pessimistic scenarios.</p>","PeriodicalId":51117,"journal":{"name":"Mathematical Geosciences","volume":"106 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139078232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unbiased Proxy Calibration
Pub Date: 2023-12-19 | DOI: 10.1007/s11004-023-10122-5
Manfred Mudelsee
The linear calibration model is a powerful statistical tool that can be utilized to predict an unknown response variable, Y, through observations of a proxy or predictor variable, X. Since calibration involves estimation of regression model parameters on the basis of a limited amount of noisy data, an unbiased calibration slope estimation is of utmost importance. This can be achieved by means of state-of-the-art, data-driven statistical techniques. The present paper shows that weighted least-squares for both variables estimation (WLSXY) is able to deliver unbiased slope estimations under heteroscedasticity. In the case of homoscedasticity, besides WLSXY, ordinary least-squares (OLS) estimation with bias correction (OLSBC) also performs well. For achieving unbiasedness, it is further necessary to take the correct regression direction (i.e., of Y on X) into account. The present paper introduces a pairwise moving block bootstrap resampling approach for obtaining accurate estimation confidence intervals (CIs) under real-world climate conditions (i.e., non-Gaussian distributional shapes and autocorrelations in the noise components). A Monte Carlo simulation experiment confirms the feasibility and validity of this approach. The parameter estimates and bootstrap replications serve to predict the response with CIs. The methodological approach to unbiased calibration is illustrated for a paired time series dataset of sea-surface temperature and coral oxygen isotopic composition. Fortran software with implementation of OLSBC and WLSXY accompanies this paper.
{"title":"Unbiased Proxy Calibration","authors":"Manfred Mudelsee","doi":"10.1007/s11004-023-10122-5","DOIUrl":"https://doi.org/10.1007/s11004-023-10122-5","url":null,"abstract":"<p>The linear calibration model is a powerful statistical tool that can be utilized to predict an unknown response variable, <i>Y</i>, through observations of a proxy or predictor variable, <i>X</i>. Since calibration involves estimation of regression model parameters on the basis of a limited amount of noisy data, an unbiased calibration slope estimation is of utmost importance. This can be achieved by means of state-of-the-art, data-driven statistical techniques. The present paper shows that weighted least-squares for both variables estimation (WLSXY) is able to deliver unbiased slope estimations under heteroscedasticity. In the case of homoscedasticity, besides WLSXY, ordinary least-squares (OLS) estimation with bias correction (OLSBC) also performs well. For achieving unbiasedness, it is further necessary to take the correct regression direction (i.e., of <i>Y</i> on <i>X</i>) into account. The present paper introduces a pairwise moving block bootstrap resampling approach for obtaining accurate estimation confidence intervals (CIs) under real-world climate conditions (i.e., non-Gaussian distributional shapes and autocorrelations in the noise components). A Monte Carlo simulation experiment confirms the feasibility and validity of this approach. The parameter estimates and bootstrap replications serve to predict the response with CIs. The methodological approach to unbiased calibration is illustrated for a paired time series dataset of sea-surface temperature and coral oxygen isotopic composition. Fortran software with implementation of OLSBC and WLSXY accompanies this paper.</p>","PeriodicalId":51117,"journal":{"name":"Mathematical Geosciences","volume":"6 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138741196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A New Numerical Simulation Framework to Model the Electric Interfacial Polarization Effects and Corresponding Impacts on Complex Dielectric Permittivity Measurements in Sedimentary Rocks
Pub Date: 2023-12-19 | DOI: 10.1007/s11004-023-10124-3
Artur Posenato Garcia, Zoya Heidari
A thorough understanding of the interplay between polarization mechanisms is pivotal for the interpretation of electrical measurements, since sub-megahertz electrical measurements in sedimentary rocks are dominated by interfacial polarization mechanisms. Nonetheless, rock-physics models oversimplify pore-network geometry and the interaction of electric double layers relating to adjacent grains. Numerical algorithms present the best possible framework in which to characterize the electrical response of sedimentary rocks, avoiding the constraints intrinsic to rock-physics models. Recently, an algorithm was introduced that can simulate the interactions of electric fields with the ions in solution. The sub-kilohertz permittivity enhancement in sedimentary rocks is dominated by Stern-layer polarization, but a model for the polarization mechanism associated with the Stern layer has not been developed. Hence, the aim of this paper is to develop and test a numerical simulation framework to quantify the influence of Stern- and diffuse-layer polarization, temperature, ion concentration, and pore-network geometry on multi-frequency complex electrical measurements. The algorithm numerically solves the Poisson–Nernst–Planck equations in the time domain and a mineral-dependent electrochemical adsorption/desorption equilibrium model to determine surface charge distribution. Then, the numerical simulator is utilized to perform a sensitivity analysis to quantify the influence of electrolyte and interfacial properties on the permittivity of pore-scale samples at different frequencies.
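A much-reduced sketch of a time-domain Poisson–Nernst–Planck solve, the core numerical ingredient named in the abstract: a 1-D symmetric electrolyte between blocking electrodes, with an explicit drift-diffusion update for each ion species coupled to a finite-difference Poisson solve for the potential. Grid, constants, and boundary conditions are assumed for illustration, and the mineral-dependent adsorption/desorption model for surface charge is omitted.

```python
# 1-D time-domain Poisson-Nernst-Planck sketch: explicit drift-diffusion update for
# a symmetric binary electrolyte coupled to a finite-difference Poisson solve.
# All constants, the geometry, and the boundary conditions are assumed values.
import numpy as np

F, R, T = 96485.0, 8.314, 298.0              # Faraday, gas constant, temperature (SI)
EPS = 80.0 * 8.854e-12                       # permittivity of water
D = 1.0e-9                                   # ion diffusivity (m^2/s)
n, L, dt = 200, 1.0e-6, 1.0e-9               # grid points, domain length (m), time step (s)
dx = L / (n - 1)

def poisson_potential(c_plus, c_minus, v_applied=0.05):
    """Solve d2(phi)/dx2 = -F (c+ - c-) / eps with phi(0) = v_applied, phi(L) = 0."""
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx ** 2
    b = -F * (c_plus - c_minus) / EPS
    A[0, :] = 0.0
    A[-1, :] = 0.0
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = v_applied, 0.0             # Dirichlet (applied-voltage) boundaries
    return np.linalg.solve(A, b)

def nernst_planck_step(c, z, phi):
    """Explicit update with flux J = -D (dc/dx + z F/(R T) c dphi/dx)."""
    flux = -D * (np.gradient(c, dx) + z * F / (R * T) * c * np.gradient(phi, dx))
    flux[0] = flux[-1] = 0.0                 # blocking (no-flux) electrodes
    return c - dt * np.gradient(flux, dx)

c_p, c_m = np.full(n, 1.0), np.full(n, 1.0)  # 1 mol/m^3 cations and anions
for _ in range(100):                         # march the double layer forward in time
    phi = poisson_potential(c_p, c_m)
    c_p, c_m = nernst_planck_step(c_p, +1, phi), nernst_planck_step(c_m, -1, phi)
```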
{"title":"A New Numerical Simulation Framework to Model the Electric Interfacial Polarization Effects and Corresponding Impacts on Complex Dielectric Permittivity Measurements in Sedimentary Rocks","authors":"Artur Posenato Garcia, Zoya Heidari","doi":"10.1007/s11004-023-10124-3","DOIUrl":"https://doi.org/10.1007/s11004-023-10124-3","url":null,"abstract":"<p>A thorough understanding of the interplay between polarization mechanisms is pivotal for the interpretation of electrical measurements, since sub-megahertz electrical measurements in sedimentary rocks are dominated by interfacial polarization mechanisms. Nonetheless, rock-physics models oversimplify pore-network geometry and the interaction of electric double layers relating to adjacent grains. Numerical algorithms present the best possible framework in which to characterize the electrical response of sedimentary rocks, avoiding the constraints intrinsic to rock-physics models. Recently, an algorithm was introduced that can simulate the interactions of electric fields with the ions in solution. The sub-kilohertz permittivity enhancement in sedimentary rocks is dominated by Stern-layer polarization, but a model for the polarization mechanism associated with the Stern layer has not been developed. Hence, the aim of this paper is to develop and test a numerical simulation framework to quantify the influence of Stern- and diffuse-layer polarization, temperature, ion concentration, and pore-network geometry on multi-frequency complex electrical measurements. The algorithm numerically solves the Poisson–Nernst–Planck equations in the time domain and a mineral-dependent electrochemical adsorption/desorption equilibrium model to determine surface charge distribution. Then, the numerical simulator is utilized to perform a sensitivity analysis to quantify the influence of electrolyte and interfacial properties on the permittivity of pore-scale samples at different frequencies.</p>","PeriodicalId":51117,"journal":{"name":"Mathematical Geosciences","volume":"17 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138741194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Machine learning (ML)-based landslide susceptibility mapping (LSM) has achieved substantial success in landslide risk management applications. However, the complexity of classically trained ML is often beyond nonexperts. With the rapid growth of practical applications, an “off-the-shelf” ML technique that can be easily used by nonexperts is highly relevant. In the present study, a new paradigm for an end-to-end ML modeling was adopted for LSM in the Three Gorges Reservoir area (TGRA) using automated machine learning (AutoML) as the backend model support for the paradigm. A well-defined database consisting of data from 290 landslides and 11 conditioning factors was collected for implementing AutoML and compared with classically trained ML approaches. The stacked ensemble model from AutoML achieved the best performance (0.954), surpassing the support vector machine with artificial bee colony optimization (ABC-SVM, 0.931), gray wolf optimization (GWO-SVM, 0.925), particle swarm optimization (PSO-SVM, 0.925), water cycle algorithm (WCA-SVM, 0.925), grid search (GS-SVM, 0.920), multilayer perceptron (MLP, 0.908), classification and regression tree (CART, 0.891), K-nearest neighbor (KNN, 0.898), and random forest (RF, 0.909) in terms of the area under the receiver operating characteristic curve (AUC). Notable improvements of up to 11% in AUC demonstrate that the AutoML approach succeeded in LSM and could be used to select the best model with minimal effort or intervention from the user. Moreover, a simple model that has been customarily ignored by practitioners and researchers has been identified with performance satisfying practical requirements. The experimental results indicate that AutoML provides an attractive alternative to manual ML practice, especially for practitioners with little expert knowledge in ML, by delivering a high-performance off-the-shelf solution for ML model development for LSM.
{"title":"Automated Machine Learning-Based Landslide Susceptibility Mapping for the Three Gorges Reservoir Area, China","authors":"Junwei Ma, Dongze Lei, Zhiyuan Ren, Chunhai Tan, Ding Xia, Haixiang Guo","doi":"10.1007/s11004-023-10116-3","DOIUrl":"https://doi.org/10.1007/s11004-023-10116-3","url":null,"abstract":"<p>Machine learning (ML)-based landslide susceptibility mapping (LSM) has achieved substantial success in landslide risk management applications. However, the complexity of classically trained ML is often beyond nonexperts. With the rapid growth of practical applications, an “off-the-shelf” ML technique that can be easily used by nonexperts is highly relevant. In the present study, a new paradigm for an end-to-end ML modeling was adopted for LSM in the Three Gorges Reservoir area (TGRA) using automated machine learning (AutoML) as the backend model support for the paradigm. A well-defined database consisting of data from 290 landslides and 11 conditioning factors was collected for implementing AutoML and compared with classically trained ML approaches. The stacked ensemble model from AutoML achieved the best performance (0.954), surpassing the support vector machine with artificial bee colony optimization (ABC-SVM, 0.931), gray wolf optimization (GWO-SVM, 0.925), particle swarm optimization (PSO-SVM, 0.925), water cycle algorithm (WCA-SVM, 0.925), grid search (GS-SVM, 0.920), multilayer perceptron (MLP, 0.908), classification and regression tree (CART, 0.891), K-nearest neighbor (KNN, 0.898), and random forest (RF, 0.909) in terms of the area under the receiver operating characteristic curve (AUC). Notable improvements of up to 11% in AUC demonstrate that the AutoML approach succeeded in LSM and could be used to select the best model with minimal effort or intervention from the user. Moreover, a simple model that has been customarily ignored by practitioners and researchers has been identified with performance satisfying practical requirements. The experimental results indicate that AutoML provides an attractive alternative to manual ML practice, especially for practitioners with little expert knowledge in ML, by delivering a high-performance off-the-shelf solution for ML model development for LSM.</p>","PeriodicalId":51117,"journal":{"name":"Mathematical Geosciences","volume":"1 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138514896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stochastic Facies Inversion with Prior Sampling by Conditional Generative Adversarial Networks Based on Training Image
Pub Date: 2023-11-23 | DOI: 10.1007/s11004-023-10119-0
Runhai Feng, Klaus Mosegaard, Dario Grana, Tapan Mukerji, Thomas Mejer Hansen
Probabilistic methods for geophysical inverse problems allow the use of arbitrarily complex prior information in principle. Geostatistical techniques, such as multiple-point statistics (MPS), for describing spatial correlation models and higher-order statistics have been proposed to achieve this inversion task, in which stochastic algorithms such as Markov chain Monte Carlo (McMC) are incorporated. However, stochastic sampling and optimization often require a large number of iterations, and thus geostatistical sampling of the prior model can become computationally demanding. To overcome this challenge, a deep learning model, namely conditional generative adversarial networks (CGANs), is proposed, which allows one to perform a random walk to sample the complex prior distribution. CGANs simulate conditional realizations conditioned to the available hard conditioning data, that is, direct measurements, while preserving the geometrical structure of the model parameters of interest and replicating the sequential Gibbs sampling algorithm. Despite the need for a training step, for a large number of simulations, CGANs are more efficient than traditional geostatistical simulation algorithms such as single normal equation simulation (SNESIM). The proposed methodology is used as part of the extended Metropolis algorithm to predict the distributions of categorical facies in two examples, a dune environment in the Gobi Desert and a channel system in an idealized subsurface reservoir, from indirect observational data such as acoustic impedance. The inversion results are compared to the extended Metropolis algorithm using standard MPS sampling.
{"title":"Stochastic Facies Inversion with Prior Sampling by Conditional Generative Adversarial Networks Based on Training Image","authors":"Runhai Feng, Klaus Mosegaard, Dario Grana, Tapan Mukerji, Thomas Mejer Hansen","doi":"10.1007/s11004-023-10119-0","DOIUrl":"https://doi.org/10.1007/s11004-023-10119-0","url":null,"abstract":"<p>Probabilistic methods for geophysical inverse problems allow the use of arbitrarily complex prior information in principle. Geostatistical techniques, such as multiple-point statistics (MPS), for describing spatial correlation models and higher-order statistics have been proposed to achieve this inversion task, in which stochastic algorithms such as Markov chain Monte Carlo (McMC) are incorporated. However, stochastic sampling and optimization often require a large number of iterations, and thus geostatistical sampling of the prior model can become computationally demanding. To overcome this challenge, a deep learning model, namely conditional generative adversarial networks (CGANs), is proposed, which allows one to perform a random walk to sample the complex prior distribution. CGANs simulate conditional realizations conditioned to the available hard conditioning data, that is, direct measurements, while preserving the geometrical structure of the model parameters of interest and replicating the sequential Gibbs sampling algorithm. Despite the need for a training step, for a large number of simulations, CGANs are more efficient than traditional geostatistical simulation algorithms such as single normal equation simulation (SNESIM). The proposed methodology is used as part of the extended Metropolis algorithm to predict the distributions of categorical facies in two examples, a dune environment in the Gobi Desert and a channel system in an idealized subsurface reservoir, from indirect observational data such as acoustic impedance. The inversion results are compared to the extended Metropolis algorithm using standard MPS sampling.</p>","PeriodicalId":51117,"journal":{"name":"Mathematical Geosciences","volume":"13 24","pages":""},"PeriodicalIF":2.6,"publicationDate":"2023-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138514891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Long-Term Copper Production to 2100
Pub Date: 2023-11-17 | DOI: 10.1007/s11004-023-10111-8
Donald A. Singer
Exponentially increasing amounts of copper mined over the last 120 years and Cu’s central place in modern society raise concerns about its long-term availability. Estimates of copper production from mines made here based on projected population (R² = 0.95) are lower than those of many previous studies. Projected world production of copper from mines in 2100, 28.2 million tons Cu, is approximately 34% more than 2021 production. Rough estimates of recycled Cu added to mine production are less than previous estimates of future consumed Cu. Although annual mined copper will peak in about 2086, production will continue in a gentle decline through 2100. Future availability of consumed copper depends on the availability of mined copper plus recycled copper. Estimated total copper demand including new technologies is 33 million tons in 2040. Total expected copper from mines estimated here is 24 million tons in 2040, but with a recycling rate of 30%, the required demand of 33 million tons would be satisfied. The effect of per capita GDP on copper consumption is best modeled with a logistic growth curve. In countries with high per capita GDP, per capita copper consumption is likely to reach saturation and stabilize, or perhaps reduce, demand for copper. Most countries will achieve high incomes at some point. If earlier studies of high-income copper consumption rates hold in the future, 10 kg per capita of copper for the 10 billion people expected before 2100 leads to an estimated total annual copper consumption of 100 million tons. This worst-case demand estimate greatly exceeds projected copper from mines and recycling, and it ignores both increased demand due to electrification scenarios and declines in demand due to declining population by 2100 and possible dematerialization.
{"title":"Long-Term Copper Production to 2100","authors":"Donald A. Singer","doi":"10.1007/s11004-023-10111-8","DOIUrl":"https://doi.org/10.1007/s11004-023-10111-8","url":null,"abstract":"<p>Exponentially increasing amounts of copper mined over the last 120 years and Cu’s central place in modern society raise concerns about its long-term availability. Estimates of copper production from mines made here based on projected population (<i>R</i><sup>2</sup> = 0.95) are lower than many previous studies. Projected world production of copper from mines in 2100 of 28.2 million tons Cu is approximately 34% more than 2021 production. Rough estimates of recycled Cu added to mine production are less than previous estimates of future consumed Cu. Although annual mined copper will peak in about 2086, production will continue in a gentle decline through 2100. Future availability of consumed copper is dependent on availability of mined copper plus recycled copper. Estimated total copper demand including new technologies is 33 million tons in 2040. Total expected copper from mines estimated here is 24 million tons in 2040, but with a recycling rate of 30%, required demand of 33 million tons would be satisfied. Per capita GDP effects on copper consumption require a logistic growth curve to model. In countries with high per capita GDP, per capita copper consumption is likely to reach saturation and stabilize or perhaps reduce demand for copper. Most countries will achieve high incomes at some point. If earlier studies of high-income copper consumption rates hold in the future, 10 kg per capita of copper for 10 billion people expected before 2100 leads to estimated total annual copper consumption of 100 million tons. This worst-case demand estimate greatly exceeds projected copper from mines and recycling and ignores increased demand due to electrification scenarios and declines in demand due to declining population by 2100 and possible dematerialization.</p>","PeriodicalId":51117,"journal":{"name":"Mathematical Geosciences","volume":"15 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2023-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138514905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Insights in Hierarchical Clustering of Variables for Compositional Data
Pub Date: 2023-11-16 | DOI: 10.1007/s11004-023-10115-4
Josep Antoni Martín-Fernández, Valentino Di Donato, Vera Pawlowsky-Glahn, Juan José Egozcue
R-mode hierarchical clustering is a method for forming hierarchical groups of mutually exclusive subsets of variables. This R-mode cluster method identifies interrelationships between variables which are useful for variable selection and dimension reduction. Importantly, the method is based on metric elements defined on the sample space of variables. Consequently, hierarchical clustering of compositional parts should respect the particular geometry of the simplex. In this work, the connections between concepts such as distance, cluster representative, compositional biplot, and log-ratio basis are explored within the framework of the most popular R-mode agglomerative hierarchical clustering methods. The approach is illustrated in a paleoecological study to identify groups of species sharing similar behavior.
{"title":"Insights in Hierarchical Clustering of Variables for Compositional Data","authors":"Josep Antoni Martín-Fernández, Valentino Di Donato, Vera Pawlowsky-Glahn, Juan José Egozcue","doi":"10.1007/s11004-023-10115-4","DOIUrl":"https://doi.org/10.1007/s11004-023-10115-4","url":null,"abstract":"<p>R-mode hierarchical clustering is a method for forming hierarchical groups of mutually exclusive subsets of variables. This R-mode cluster method identifies interrelationships between variables which are useful for variable selection and dimension reduction. Importantly, the method is based on metric elements defined on the sample space of variables. Consequently, hierarchical clustering of compositional parts should respect the particular geometry of the simplex. In this work, the connections between concepts such as distance, cluster representative, compositional biplot, and log-ratio basis are explored within the framework of the most popular R-mode agglomerative hierarchical clustering methods. The approach is illustrated in a paleoecological study to identify groups of species sharing similar behavior.\u0000</p>","PeriodicalId":51117,"journal":{"name":"Mathematical Geosciences","volume":"66 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138514892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}