Bayes estimation of ratio of scale-like parameters for inverse Gaussian distributions and applications to classification
Pub Date : 2024-09-19 DOI: 10.1007/s00180-024-01554-6
Ankur Chakraborty, Nabakumar Jana
We consider two inverse Gaussian populations with a common mean but different scale-like parameters, where all parameters are unknown. We construct noninformative priors for the ratio of the scale-like parameters to derive matching priors of different orders. Reference priors are proposed for different groups of parameters. The Bayes estimators of the common mean and of the ratio of the scale-like parameters are also derived. We propose confidence intervals for the conditional error rate in classifying an observation into one of the two inverse Gaussian distributions. A generalized variable-based confidence interval and highest posterior density credible intervals for the error rate are computed. We estimate the parameters of the mixture of these inverse Gaussian distributions and obtain estimates of the expected probability of correct classification. An extensive simulation study compares the estimators and the expected probability of correct classification. Examples based on real data show the practicality and effectiveness of the estimators.
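The abstract refers to highest posterior density (HPD) credible intervals for the error rate. The R sketch below shows only the generic step of extracting an HPD interval from posterior draws; the draws and the placeholder Beta example are illustrative and are not the paper's posterior.

```r
## Generic HPD interval from posterior draws: the shortest interval containing
## a given fraction of the sorted draws.
hpd_interval <- function(draws, level = 0.95) {
  draws <- sort(draws)
  n <- length(draws)
  m <- floor(level * n)                       # draws inside the interval
  widths <- draws[(m + 1):n] - draws[1:(n - m)]
  i <- which.min(widths)                      # shortest interval of that size
  c(lower = draws[i], upper = draws[i + m])
}

## Illustrative use with placeholder draws standing in for a posterior sample
## of a conditional error rate
set.seed(1)
fake_draws <- rbeta(5000, 2, 18)
hpd_interval(fake_draws, level = 0.95)
```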
{"title":"Bayes estimation of ratio of scale-like parameters for inverse Gaussian distributions and applications to classification","authors":"Ankur Chakraborty, Nabakumar Jana","doi":"10.1007/s00180-024-01554-6","DOIUrl":"https://doi.org/10.1007/s00180-024-01554-6","url":null,"abstract":"<p>We consider two inverse Gaussian populations with a common mean but different scale-like parameters, where all parameters are unknown. We construct noninformative priors for the ratio of the scale-like parameters to derive matching priors of different orders. Reference priors are proposed for different groups of parameters. The Bayes estimators of the common mean and ratio of the scale-like parameters are also derived. We propose confidence intervals of the conditional error rate in classifying an observation into inverse Gaussian distributions. A generalized variable-based confidence interval and the highest posterior density credible intervals for the error rate are computed. We estimate parameters of the mixture of these inverse Gaussian distributions and obtain estimates of the expected probability of correct classification. An intensive simulation study has been carried out to compare the estimators and expected probability of correct classification. Real data-based examples are given to show the practicality and effectiveness of the estimators.</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"50 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142262400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multivariate approaches to investigate the home and away behavior of football teams playing football matches
Pub Date : 2024-09-17 DOI: 10.1007/s00180-024-01553-7
Antonello D’Ambra, Pietro Amenta, Antonio Lucadamo
Compared to other European competitions, participation in the UEFA Champions League is a real “bargain” for football clubs because of the hefty bonuses awarded for performance during the group qualification phase. Successful performance in football depends on several multidimensional factors, and analyzing the main ones remains challenging. In performance studies, little attention has been paid to how teams behave when playing at home versus away. Our study combines statistical techniques into a procedure for examining team performance. Several considerations make the 2022–2023 Serie A season particularly interesting to analyze with our approach. Except for Napoli, all teams showed different home and away behavior relative to the results obtained at the end of the season. Ball possession and corners positively influenced points scored in both home and away games, although with different impacts. The precision indicator was not an essential variable. The procedure also highlighted the negative roles played by offsides and by yellow and red cards.
{"title":"Multivariate approaches to investigate the home and away behavior of football teams playing football matches","authors":"Antonello D’Ambra, Pietro Amenta, Antonio Lucadamo","doi":"10.1007/s00180-024-01553-7","DOIUrl":"https://doi.org/10.1007/s00180-024-01553-7","url":null,"abstract":"<p>Compared to other European competitions, participation in the Uefa Champions League is a real “bargain” for football clubs due to the hefty bonuses awarded based on performance during the group qualification phase. To perform successfully in football depends on several multidimensional factors, and analyzing the main ones remains challenging. In the performance study, little attention has been paid to teams’ behavior when playing at home and away. Our study combines statistical techniques to develop a procedure to examine teams’ performance. Several considerations make the 2022–2023 Serie A league season particularly interesting to analyze with our approach. Except for Napoli, all the teams showed different home-and-away behaviors concerning the results obtained at the season’s end. Ball possession and corners have positively influenced scored points in both home and away games with a different impact. The precision indicator was not an essential variable. The procedure highlighted the negative roles played by offside, as well as yellow and red cards.</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"2 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142262402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kendall correlations and radar charts to include goals for and goals against in soccer rankings
Pub Date : 2024-09-17 DOI: 10.1007/s00180-024-01542-w
Roy Cerqueti, Raffaele Mattera, Valerio Ficcadenti
This paper addresses the challenging question of how sporting teams and athletes are ranked in sports competitions. Starting from the paradigmatic case of soccer, we advance a new method for ranking teams in official national championships using computational statistics methods based on Kendall correlations and radar charts. In particular, we consider the goals scored for and against each team in individual matches as a further source of score assignment beyond the usual win-tie-lose trichotomy. Our approach overcomes some biases in the scoring rules currently employed. The methodological proposal is tested on the Italian “Serie A” championships played from 1930 to 2023.
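As a rough illustration of the ingredients named above, and not the authors' full scoring procedure, the R sketch below compares a classical win-tie-lose points total with a simple goals-based score via Kendall's tau; the league table is a made-up placeholder.

```r
## Illustrative standings: points from the win-tie-lose trichotomy, plus goals
## for (gf) and goals against (ga) as an additional source of information.
standings <- data.frame(
  team   = c("A", "B", "C", "D", "E"),
  points = c(86, 80, 70, 64, 52),
  gf     = c(77, 66, 60, 58, 43),
  ga     = c(31, 43, 48, 50, 59)
)
goal_score <- standings$gf - standings$ga     # a simple goals-based score

## Kendall's tau between the two orderings of the teams
cor(standings$points, goal_score, method = "kendall")

## A radar-chart comparison per team could then be drawn from the normalised
## columns (e.g. with fmsb::radarchart), which is not shown here.
```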
{"title":"Kendall correlations and radar charts to include goals for and goals against in soccer rankings","authors":"Roy Cerqueti, Raffaele Mattera, Valerio Ficcadenti","doi":"10.1007/s00180-024-01542-w","DOIUrl":"https://doi.org/10.1007/s00180-024-01542-w","url":null,"abstract":"<p>This paper deals with the challenging themes of the way sporting teams and athletes are ranked in sports competitions. Starting from the paradigmatic case of soccer, we advance a new method for ranking teams in the official national championships through computational statistics methods based on Kendall correlations and radar charts. In detail, we consider the goals for and against the teams in the individual matches as a further source of score assignment beyond the usual win-tie-lose trichotomy. Our approach overcomes some biases in the scoring rules that are currently employed. The methodological proposal is tested over the relevant case of the Italian “Serie A” championships played during 1930–2023.</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"35 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142269782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian adaptive lasso quantile regression with non-ignorable missing responses
Pub Date : 2024-09-16 DOI: 10.1007/s00180-024-01546-6
Ranran Chen, Mai Dao, Keying Ye, Min Wang
In this paper, we develop a fully Bayesian adaptive lasso quantile regression model to analyze data with non-ignorable missing responses, which frequently occur in many fields of study. Specifically, we employ a logistic regression model to handle the non-ignorable missingness mechanism. By using the asymmetric Laplace working likelihood for the data and specifying Laplace priors for the regression coefficients, our proposed method extends the Bayesian lasso framework by imposing a specific penalization parameter on each regression coefficient, enhancing estimation and variable selection. Furthermore, we exploit the normal-exponential mixture representation of the asymmetric Laplace distribution and the Student-t approximation of the logistic regression model to develop a simple and efficient Gibbs sampling algorithm for generating posterior samples and making statistical inferences. The finite-sample performance of the proposed algorithm is investigated through various simulation studies and a real-data example.
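The Gibbs sampler rests on the normal-exponential mixture representation of the asymmetric Laplace distribution (ALD). The R sketch below illustrates that standard representation with the scale fixed at 1; it is a generic illustration, not code from the paper.

```r
## Normal-exponential mixture for the ALD at quantile level p (scale = 1):
##   y = mu + theta*z + tau*sqrt(z)*u,  z ~ Exp(1),  u ~ N(0, 1),
## with theta = (1 - 2p)/(p(1 - p)) and tau^2 = 2/(p(1 - p)).
rald_mixture <- function(n, mu = 0, p = 0.5) {
  theta <- (1 - 2 * p) / (p * (1 - p))
  tau   <- sqrt(2 / (p * (1 - p)))
  z <- rexp(n, rate = 1)
  u <- rnorm(n)
  mu + theta * z + tau * sqrt(z) * u
}

## The p-th sample quantile of the draws should be close to mu, which is what
## makes this working likelihood convenient for quantile regression.
set.seed(2)
y <- rald_mixture(1e5, mu = 1.5, p = 0.25)
quantile(y, probs = 0.25)
```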
{"title":"Bayesian adaptive lasso quantile regression with non-ignorable missing responses","authors":"Ranran Chen, Mai Dao, Keying Ye, Min Wang","doi":"10.1007/s00180-024-01546-6","DOIUrl":"https://doi.org/10.1007/s00180-024-01546-6","url":null,"abstract":"<p>In this paper, we develop a fully Bayesian adaptive lasso quantile regression model to analyze data with non-ignorable missing responses, which frequently occur in various fields of study. Specifically, we employ a logistic regression model to deal with missing data of non-ignorable mechanism. By using the asymmetric Laplace working likelihood for the data and specifying Laplace priors for the regression coefficients, our proposed method extends the Bayesian lasso framework by imposing specific penalization parameters on each regression coefficient, enhancing our estimation and variable selection capability. Furthermore, we embrace the normal-exponential mixture representation of the asymmetric Laplace distribution and the Student-<i>t</i> approximation of the logistic regression model to develop a simple and efficient Gibbs sampling algorithm for generating posterior samples and making statistical inferences. The finite-sample performance of the proposed algorithm is investigated through various simulation studies and a real-data example.</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"94 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142262401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Statistical visualisation of tidy and geospatial data in R via kernel smoothing methods in the eks package
Pub Date : 2024-09-14 DOI: 10.1007/s00180-024-01543-9
Tarn Duong
Kernel smoothers are essential tools for data analysis due to their ability to convey complex statistical information in concise graphical visualisations. Their inclusion in the base distribution and in many user-contributed add-on packages of the R statistical analysis environment serves many practitioners well. However, some important gaps remain for specialised data, most notably tidy and geospatial data. The proposed eks package fills these gaps. In addition to kernel density estimation, the package also covers more complex data analysis situations, such as density derivative estimation, density-based classification (supervised learning) and mean shift clustering (unsupervised learning). We illustrate with experimental data how to obtain and interpret the statistical visualisations produced by these kernel smoothing methods.
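A rough usage sketch follows. It assumes the eks interface as we recall it from the package documentation; tidy_kde() and geom_contour_filled_ks() are assumed names and may differ from the current API.

```r
## Tidy kernel density estimation with eks (which builds on the ks package),
## plotted with the usual ggplot2 grammar. Data are simulated placeholders.
library(ggplot2)
library(eks)

xy <- data.frame(x = rnorm(500), y = rnorm(500))
fhat <- tidy_kde(xy)                 # assumed eks function for tidy KDE

ggplot(fhat, aes(x = x, y = y)) +
  geom_contour_filled_ks()           # assumed eks geom for probability contours
```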
{"title":"Statistical visualisation of tidy and geospatial data in R via kernel smoothing methods in the eks package","authors":"Tarn Duong","doi":"10.1007/s00180-024-01543-9","DOIUrl":"https://doi.org/10.1007/s00180-024-01543-9","url":null,"abstract":"<p>Kernel smoothers are essential tools for data analysis due to their ability to convey complex statistical information with concise graphical visualisations. Their inclusion in the base distribution and in the many user-contributed add-on packages of the <span>R</span> statistical analysis environment caters well to many practitioners. Though there remain some important gaps for specialised data, most notably for tidy and geospatial data. The proposed <span>eks</span> package fills in these gaps. In addition to kernel density estimation, this package also caters for more complex data analysis situations, such as density derivative estimation, density-based classification (supervised learning) and mean shift clustering (unsupervised learning). We illustrate with experimental data how to obtain and to interpret the statistical visualisations for these kernel smoothing methods.</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"119 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142269783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using the Krylov subspace formulation to improve regularisation and interpretation in partial least squares regression
Pub Date : 2024-09-12 DOI: 10.1007/s00180-024-01545-7
Tommy Löfstedt
Partial least squares regression (PLS-R) has been an important regression method in the life sciences and many other fields for decades. However, PLS-R is typically solved using an opaque algorithmic approach rather than through an explicit optimisation formulation and procedure. There is a clear optimisation formulation of the PLS-R problem based on a Krylov subspace, but it is only rarely considered. The popularity of PLS-R is attributed to the ability to interpret the data through the model components, but these components are not available when the PLS-R problem is solved through the Krylov subspace formulation. We therefore highlight a simple reformulation of the PLS-R problem using the Krylov subspace formulation as a promising modelling framework for PLS-R, and illustrate one of its main benefits: it allows arbitrary penalties on the regression coefficients of the PLS-R model. Further, we propose an approach for estimating the PLS-R model components corresponding to the solution found through the Krylov subspace formulation; these are the components we would have obtained had we used the common algorithms for estimating the PLS-R model. We illustrate the utility of the proposed method on simulated and real data.
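To make the Krylov subspace view concrete, the R sketch below computes a PLS-style regression vector as the least-squares solution constrained to the Krylov subspace spanned by X'y, (X'X)X'y, and so on. It is a bare-bones illustration of the formulation, without the orthogonalisation or penalties a practical implementation would use, and it is not the authors' code.

```r
## PLS-R via the Krylov subspace: with A components, the regression vector is
## the least-squares solution restricted to span{X'y, (X'X)X'y, ..., (X'X)^(A-1)X'y}.
krylov_pls <- function(X, y, A) {
  X <- scale(X, center = TRUE, scale = FALSE)
  y <- y - mean(y)
  s <- crossprod(X, y)                 # X'y
  K <- matrix(0, ncol(X), A)
  v <- s
  for (a in seq_len(A)) {
    K[, a] <- v
    v <- crossprod(X, X %*% v)         # (X'X) v; in practice orthogonalise K
  }
  gamma <- qr.solve(X %*% K, y)        # least squares for y ~ X K
  drop(K %*% gamma)                    # map back to original coefficients
}

## Illustrative use on simulated data; coefficients approach those of ordinary
## least squares as A grows towards the rank of X.
set.seed(3)
X <- matrix(rnorm(100 * 8), 100, 8)
y <- X[, 1] - 2 * X[, 3] + rnorm(100)
krylov_pls(X, y, A = 3)
```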
{"title":"Using the Krylov subspace formulation to improve regularisation and interpretation in partial least squares regression","authors":"Tommy Löfstedt","doi":"10.1007/s00180-024-01545-7","DOIUrl":"https://doi.org/10.1007/s00180-024-01545-7","url":null,"abstract":"<p>Partial least squares regression (PLS-R) has been an important regression method in the life sciences and many other fields for decades. However, PLS-R is typically solved using an opaque algorithmic approach, rather than through an optimisation formulation and procedure. There is a clear optimisation formulation of the PLS-R problem based on a Krylov subspace formulation, but it is only rarely considered. The popularity of PLS-R is attributed to the ability to interpret the data through the model components, but the model components are not available when solving the PLS-R problem using the Krylov subspace formulation. We therefore highlight a simple reformulation of the PLS-R problem using the Krylov subspace formulation as a promising modelling framework for PLS-R, and illustrate one of the main benefits of this reformulation—that it allows arbitrary penalties of the regression coefficients in the PLS-R model. Further, we propose an approach to estimate the PLS-R model components for the solution found through the Krylov subspace formulation, that are those we would have obtained had we been able to use the common algorithms for estimating the PLS-R model. We illustrate the utility of the proposed method on simulated and real data.</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"25 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142186113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust matrix factor analysis method with adaptive parameter adjustment using Cauchy weighting
Pub Date : 2024-09-12 DOI: 10.1007/s00180-024-01548-4
Junchen Li
In recent years, high-dimensional matrix factor models have been widely applied in various fields, yet few methods handle heavy-tailed data effectively. To address this problem, we introduce a smooth Cauchy loss function and establish an optimization objective through norm minimization, deriving a Cauchy version of the weighted iterative estimation method. Unlike the Huber-loss weighted estimation method, the weights here are computed from a smooth function rather than a piecewise one. The method also updates the parameters of the Cauchy loss function at each iteration of the estimation. We thus propose a weighted estimation method with adaptive parameter adjustment. We then analyze the theoretical properties of the method and prove that it has a fast convergence rate. In simulation studies, the method demonstrates clear advantages, making it a strong alternative to existing estimation methods. Finally, we analyze a dataset of population movements between cities and show that the proposed method yields estimates with excellent interpretability compared to other methods.
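The smooth-versus-piecewise distinction drawn above can be illustrated with standard robust-statistics weight functions. The R sketch below contrasts the piecewise Huber weight with a smooth Cauchy-type weight; the exact loss and tuning constants used in the paper may differ, so this only illustrates the shape of the two weighting schemes.

```r
## Huber weights are piecewise (constant, then decaying), whereas Cauchy-type
## weights decay smoothly in the residual r; c is the usual tuning constant.
huber_weight  <- function(r, c = 1.345) ifelse(abs(r) <= c, 1, c / abs(r))
cauchy_weight <- function(r, c = 2.385) 1 / (1 + (r / c)^2)

r <- seq(-6, 6, length.out = 401)
plot(r, huber_weight(r), type = "l", lty = 2, ylab = "weight",
     main = "Huber (dashed, piecewise) vs Cauchy (solid, smooth) weights")
lines(r, cauchy_weight(r))
```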
{"title":"Robust matrix factor analysis method with adaptive parameter adjustment using Cauchy weighting","authors":"Junchen Li","doi":"10.1007/s00180-024-01548-4","DOIUrl":"https://doi.org/10.1007/s00180-024-01548-4","url":null,"abstract":"<p>In recent years, high-dimensional matrix factor models have been widely applied in various fields. However, there are few methods that effectively handle heavy-tailed data. To address this problem, we introduced a smooth Cauchy loss function and established an optimization objective through norm minimization, deriving a Cauchy version of the weighted iterative estimation method. Unlike the Huber loss weighted estimation method, the weight calculation in this method is a smooth function rather than a piecewise function. It also considers the need to update parameters in the Cauchy loss function with each iteration during estimation. Ultimately, we propose a weighted estimation method with adaptive parameter adjustment. Subsequently, this paper analyzes the theoretical properties of the method, proving that it has a fast convergence rate. Through data simulation, our method demonstrates significant advantages. Thus, it can serve as a better alternative to other existing estimation methods. Finally, we analyzed a dataset of regional population movements between cities, demonstrating that our proposed method offers estimations with excellent interpretability compared to other methods.</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"5 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142186112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A precise and efficient exceedance-set algorithm for detecting environmental extremes
Pub Date : 2024-09-06 DOI: 10.1007/s00180-024-01540-y
Thomas Suesse, Alexander Brenning
Inference for predicted exceedance sets is important for various environmental issues, such as detecting environmental anomalies and emergencies with high confidence. A critical step is constructing inner and outer predicted exceedance sets using an algorithm that samples from the predictive distribution. The simple sampling procedure currently in use can lead to misleading conclusions at some locations because proportions estimated from independent observations have relatively large standard errors. Instead, we propose an algorithm that computes the required probabilities numerically using the Genz–Bretz algorithm, which is based on quasi-random numbers and yields more accurate inner and outer sets, as illustrated on rainfall data from the state of Paraná, Brazil.
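The Genz–Bretz algorithm is available in R through the mvtnorm package. The sketch below computes a joint exceedance probability for an assumed small predictive distribution; the threshold, mean and covariance are illustrative placeholders, not values from the paper.

```r
## Numerical multivariate normal exceedance probability via the Genz-Bretz
## quasi-Monte Carlo algorithm, rather than by counting independent draws.
library(mvtnorm)

mu    <- c(40, 55, 60)                 # predictive means at three locations
Sigma <- matrix(c(25, 10,  5,
                  10, 30, 12,
                   5, 12, 20), 3, 3)   # predictive covariance (positive definite)
u <- 50                                # exceedance threshold

## P(all three locations exceed u)
pmvnorm(lower = rep(u, 3), upper = rep(Inf, 3),
        mean = mu, sigma = Sigma, algorithm = GenzBretz(abseps = 1e-4))
```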
{"title":"A precise and efficient exceedance-set algorithm for detecting environmental extremes","authors":"Thomas Suesse, Alexander Brenning","doi":"10.1007/s00180-024-01540-y","DOIUrl":"https://doi.org/10.1007/s00180-024-01540-y","url":null,"abstract":"<p>Inference for predicted exceedance sets is important for various environmental issues such as detecting environmental anomalies and emergencies with high confidence. A critical part is to construct inner and outer predicted exceedance sets using an algorithm that samples from the predictive distribution. The simple currently used sampling procedure can lead to misleading conclusions for some locations due to relatively large standard errors when proportions are estimated from independent observations. Instead we propose an algorithm that calculates probabilities numerically using the Genz–Bretz algorithm, which is based on quasi-random numbers leading to more accurate inner and outer sets, as illustrated on rainfall data in the state of Paraná, Brazil.</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"60 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142224382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Change point estimation for Gaussian time series data with copula-based Markov chain models
Li-Hsien Sun, Yu-Kai Wang, Lien-Hsi Liu, Takeshi Emura, Chi-Yang Chiu
Pub Date : 2024-09-05 DOI: 10.1007/s00180-024-01541-x
This paper proposes a method for change-point estimation, focusing on detecting structural shifts within time series data. Traditional maximum likelihood estimation (MLE) methods assume either independence or linear dependence via auto-regressive models. To address this limitation, the paper introduces copula-based Markov chain models, offering more flexible dependence modeling. These models treat a Gaussian time series as a Markov chain and utilize copula functions to handle serial dependence. The profile MLE procedure is then employed to estimate the change-point and other model parameters, with the Newton–Raphson algorithm facilitating numerical calculations for the estimators. The proposed approach is evaluated through simulations and real stock return data, considering two distinct periods: the 2008 financial crisis and the COVID-19 pandemic in 2020.
INet for network integration
Valeria Policastro, Matteo Magnani, Claudia Angelini, Annamaria Carissimo
Pub Date : 2024-09-04 DOI: 10.1007/s00180-024-01536-8
When several data sets and heterogeneous data types are collected on a given phenomenon of interest, analyzing each data set individually provides only a partial view of that phenomenon. Integrating all the data can instead widen and deepen the results, offering a better view of the entire system. In the context of network integration, we propose the INet algorithm. INet assumes a similar network structure, representing latent variables, across the different network layers of the same system. By combining individual edge weights and topological network structures, INet first constructs a Consensus Network that represents the information shared across the different layers, providing a global view of the entities that play a fundamental role in the phenomenon of interest. It then derives a Case Specific Network for each layer, containing information peculiar to that data type and not present in the others. We demonstrate good performance on simulated data and uncover new insights by analyzing biological and sociological datasets.