We suggest novel correlation coefficients which equal the maximum correlation for a class of bivariate Lancaster distributions while being only slightly smaller than the maximum correlation for a variety of further bivariate distributions. In contrast to maximum correlation, however, our correlation coefficients allow for rank- and moment-based estimators which are simple to compute and have tractable asymptotic distributions. Confidence intervals resulting from these asymptotic approximations and the covariance bootstrap show good finite-sample coverage. In a simulation study, the power of asymptotic as well as permutation tests for independence based on our correlation measures compares favorably with competing methods based on distance correlation or rank coefficients for functional dependence, among others. Moreover, for the bivariate normal distribution, our correlation coefficients equal the absolute value of the Pearson correlation, an attractive feature for practitioners which is not shared by various competitors. We illustrate the practical usefulness of our methods in applications to two real data sets.
Hajo Holzmann, Bernhard Klar. "Lancaster correlation: A new dependence measure linked to maximum correlation." Scandinavian Journal of Statistics, published 2024-07-03. doi:10.1111/sjos.12733
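The estimators above are rank- and moment-based, and the tests include permutation tests for independence. As a generic illustration of that workflow (using Spearman's rank correlation as a stand-in, not the authors' Lancaster coefficient), a permutation test for independence can be sketched as:

```python
import numpy as np

def rank_corr(x, y):
    # Spearman's rank correlation: Pearson correlation of the ranks
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

def perm_pvalue(x, y, stat=rank_corr, n_perm=999, seed=0):
    # two-sided permutation p-value: permute y to break any dependence
    rng = np.random.default_rng(seed)
    obs = abs(stat(x, y))
    count = sum(abs(stat(x, rng.permutation(y))) >= obs for _ in range(n_perm))
    return (1 + count) / (1 + n_perm)

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.6 * x + rng.normal(size=200)   # a dependent pair
p = perm_pvalue(x, y)
```

Replacing `rank_corr` with any other correlation statistic reuses the same permutation machinery.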
Claudio Heinrich‐Mertsching, Thordis L. Thorarinsdottir, Peter Guttorp, Max Schneider
We introduce a class of proper scoring rules for evaluating spatial point process forecasts based on summary statistics. These scoring rules rely on Monte Carlo approximations of expectations and can therefore easily be evaluated for any point process model that can be simulated. In this regard, they are more flexible than the commonly used logarithmic score and other existing proper scores for point process predictions. The scoring rules allow for evaluating the calibration of a model to specific aspects of a point process, such as its spatial distribution or tendency toward clustering. Using simulations, we analyze the sensitivity of our scoring rules to different aspects of the forecasts and compare it to the logarithmic score. Applications to earthquake occurrences in northern California, United States, and the spatial distribution of Pacific silver firs in Findley Lake Reserve in Washington highlight the usefulness of our scores for scientific model selection.
Claudio Heinrich-Mertsching, Thordis L. Thorarinsdottir, Peter Guttorp, Max Schneider. "Validation of point process predictions with proper scoring rules." Scandinavian Journal of Statistics, published 2024-07-02. doi:10.1111/sjos.12736
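The Monte Carlo idea is: simulate from the forecast model, approximate the predictive distribution of a summary statistic, and score the observation against it. A minimal sketch in that spirit, using the point count as the summary statistic and a Dawid–Sebastiani-type score (the paper's own scoring rules may differ):

```python
import numpy as np

def dss_score(t_obs, t_sims):
    # Dawid-Sebastiani-type score for a summary statistic T: the predictive
    # mean and variance are approximated from model simulations (Monte Carlo)
    mu, var = t_sims.mean(), t_sims.var(ddof=1)
    return (t_obs - mu) ** 2 / var + np.log(var)

rng = np.random.default_rng(0)
t_obs = 52                              # observed point count in the window
good = rng.poisson(50, size=2000)       # counts simulated from a well-calibrated model
bad = rng.poisson(80, size=2000)        # counts simulated from a mis-calibrated model
s_good, s_bad = dss_score(t_obs, good), dss_score(t_obs, bad)
```

Lower is better, so the well-calibrated model receives the smaller score; other summary statistics (e.g., a clustering index) slot into the same template.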
The multivariate coefficient of variation (MCV) is an attractive and easy-to-interpret effect size for the dispersion in multivariate data. Recently, the first inference methods for the MCV in general factorial designs were proposed. However, these methods were primarily derived for one special MCV variant, although several reasonable variants exist. Moreover, when a global null hypothesis is rejected, a more in-depth analysis is of interest to find the significant contrasts of the MCV. This paper extends the nonparametric permutation procedure to the other MCV variants and introduces a max-type test for post hoc analysis. To improve the small-sample performance of the latter, we suggest a novel bootstrap strategy and prove its asymptotic validity. The performance of all proposed tests is compared in an extensive simulation study and illustrated by a real data analysis. All methods are implemented in the R package GFDmcv, available on CRAN.
Marc Ditzhaus, Łukasz Smaga. "Inference for all variants of the multivariate coefficient of variation in factorial designs." Scandinavian Journal of Statistics, published 2024-06-26. doi:10.1111/sjos.12740
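For concreteness, the MCV variants usually considered in this line of work (commonly attributed to Reyment, Van Valen, Voinov–Nikulin, and Albert–Zhang) can all be computed from a sample mean and covariance. The formulas below are our reading of those four variants, not code taken from GFDmcv:

```python
import numpy as np

def mcv_variants(X):
    # Four classical multivariate CV variants for an (n x d) data matrix X,
    # each reducing to s/|mu| in the univariate case
    m = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    return {
        "reyment":        np.sqrt(np.linalg.det(S) ** (1 / X.shape[1])) / np.sqrt(m @ m),
        "van_valen":      np.sqrt(np.trace(S)) / np.sqrt(m @ m),
        "voinov_nikulin": 1 / np.sqrt(m @ np.linalg.inv(S) @ m),
        "albert_zhang":   np.sqrt(m @ S @ m) / (m @ m),
    }

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=1.0, size=(500, 3))   # well-separated mean, unit spread
cvs = mcv_variants(X)
```

For i.i.d. coordinates with mean 5 and unit variance all four variants are small and of similar magnitude; they diverge from one another as the covariance structure becomes less spherical, which is precisely why variant-specific inference matters.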
Professor (now emeritus) Nils Lid Hjort has through more than four decades been one of the most original and productive statisticians in Norway, contributing to a wide range of topics such as survival analysis, Bayesian nonparametrics, empirical likelihood, density estimation, focused inference, model selection, and confidence distributions. This conversation, which took place at the University of Oslo in December 2023, sheds light on how Nils Hjort's curious and open mind, coupled with a deep understanding, has enabled him to seamlessly navigate between different fields of statistics and its applications. Our aim is to encourage the statistics community to always be on the lookout for unexpected connections in statistical science and to embrace unexpected encounters with fellow statisticians from around the world.
Ørnulf Borgan, Ingrid K. Glad. "A conversation with Nils Lid Hjort." Scandinavian Journal of Statistics, published 2024-06-20. doi:10.1111/sjos.12732
There is a lack of point process models on linear networks. For an arbitrary linear network, we consider new models for a Cox process with an isotropic pair correlation function, obtained in various ways by transforming an isotropic Gaussian process that drives the random intensity function of the Cox process. In particular, we introduce three model classes given by log Gaussian, interrupted, and permanental Cox processes on linear networks, and consider for the first time statistical procedures and applications for parametric families of such models. Moreover, we construct new simulation algorithms for Gaussian processes on linear networks and discuss whether the geodesic metric or the resistance metric should be used for the kind of Cox processes studied in this paper.
Jesper Møller, Jakob G. Rasmussen. "Cox processes driven by transformed Gaussian processes on linear networks—A review and new contributions." Scandinavian Journal of Statistics, published 2024-05-14. doi:10.1111/sjos.12720
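On a single edge of a network the geodesic metric reduces to distance along the edge, so a log Gaussian Cox process can be simulated by discretizing the edge, drawing a Gaussian process, and sampling Poisson counts from the exponentiated field. A minimal single-edge sketch (not the authors' algorithms, which handle whole networks and further model classes):

```python
import numpy as np

rng = np.random.default_rng(0)
L, n = 10.0, 200                          # one edge of length L, discretized at n points
u = np.linspace(0, L, n)
# exponential covariance in the (geodesic) distance along the edge
C = np.exp(-np.abs(u[:, None] - u[None, :]) / 1.5)
g = np.linalg.cholesky(C + 1e-8 * np.eye(n)) @ rng.normal(size=n)
lam = np.exp(g)                           # random intensity of the log Gaussian Cox process
du = L / (n - 1)
counts = rng.poisson(lam * du)            # Poisson counts per grid cell, given the intensity
points = np.repeat(u, counts)             # approximate point pattern on the edge
```

On a full network the covariance would instead be evaluated in the geodesic or resistance metric between points on different edges, which is exactly the modeling choice the paper discusses.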
General geostatistical models are powerful tools for analyzing spatial datasets. A two-step estimation based on the likelihood function is widely used by researchers, but several theoretical and computational challenges remain. First, it is unclear whether the log-likelihood function has a unique global maximizer, a seemingly simple but theoretically challenging question. The second challenge concerns the convexity of the log-likelihood function. Besides these two challenges, we also study the theoretical properties of the two-step estimation. Unlike many previous works, our results apply to covariance functions that are not twice differentiable. In simulation studies, three optimization algorithms are evaluated in terms of maximizing the log-likelihood function.
Tingjin Chu. "On maximizing the likelihood function of general geostatistical models." Scandinavian Journal of Statistics, published 2024-05-07. doi:10.1111/sjos.12722
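As a small illustration of the computational side, the Gaussian log-likelihood of a geostatistical model with, say, an exponential covariance can be maximized numerically over variance and range parameters. The log-parametrization and the optimizer below are illustrative choices, not the paper's:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
s = np.sort(rng.uniform(0, 10, size=60))          # 1-D spatial locations
D = np.abs(s[:, None] - s[None, :])
true_cov = 2.0 * np.exp(-D / 1.0)                 # sigma2 = 2, range = 1
y = np.linalg.cholesky(true_cov + 1e-8 * np.eye(60)) @ rng.normal(size=60)

def neg_loglik(theta):
    # theta = (log sigma2, log range); exponential covariance model
    sigma2, rho = np.exp(theta)
    K = sigma2 * np.exp(-D / rho) + 1e-8 * np.eye(len(s))
    _, logdet = np.linalg.slogdet(K)
    return 0.5 * (logdet + y @ np.linalg.solve(K, y))

res = minimize(neg_loglik, x0=np.log([1.0, 2.0]), method="Nelder-Mead")
sigma2_hat, rho_hat = np.exp(res.x)
```

Note that the exponential covariance used here is not twice differentiable at the origin, the regime the paper's theory is designed to cover; the non-convexity of `neg_loglik` in the range parameter is one of the two challenges highlighted above.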
In the past decade, various exact balancing-based weighting methods have been introduced to the causal inference literature. They eliminate covariate imbalance by imposing balancing constraints in a certain optimization problem, which can nevertheless be infeasible when there is poor overlap between the covariate distributions in the treated and control groups or when the covariates are high dimensional. Recently, approximate balancing was proposed as an alternative framework; it resolves the feasibility issue by using inequality moment constraints instead. However, it can be difficult to select the threshold parameters, and moment constraints may not fully capture the discrepancy between covariate distributions. In this paper, we propose Mahalanobis balancing to approximately balance covariate distributions from a multivariate perspective. We use a quadratic constraint to control overall imbalance with a single threshold parameter, which can be tuned by a simple selection procedure. We show that the dual problem of Mahalanobis balancing is a norm-based regularized regression problem, and establish an interesting connection to propensity score models.
Yimin Dai, Ying Yan. "Mahalanobis balancing: A multivariate perspective on approximate covariate balancing." Scandinavian Journal of Statistics, published 2024-04-26. doi:10.1111/sjos.12721
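The imbalance measure at the heart of such a method is a Mahalanobis-type quadratic form in the difference of (weighted) group means. The sketch below only computes that measure and shows that reweighting the controls can shrink it; the actual method solves a constrained optimization over the weights, which is not reproduced here:

```python
import numpy as np

def mahalanobis_imbalance(X_t, X_c, w=None):
    # Mahalanobis-type imbalance between the treated-group mean and the
    # weighted control mean, standardized by the pooled sample covariance
    w = np.full(len(X_c), 1.0 / len(X_c)) if w is None else w / w.sum()
    diff = X_t.mean(axis=0) - w @ X_c
    S = np.cov(np.vstack([X_t, X_c]), rowvar=False)
    return float(diff @ np.linalg.solve(S, diff))

rng = np.random.default_rng(0)
X_c = rng.normal(0.0, 1.0, size=(2000, 2))   # controls, mean 0
X_t = rng.normal(1.0, 1.0, size=(400, 2))    # treated, mean shifted to 1
imb_uniform = mahalanobis_imbalance(X_t, X_c)
w = np.exp(X_c.sum(axis=1))                  # crude exponential tilt toward the treated mean
imb_tilted = mahalanobis_imbalance(X_t, X_c, w)
```

The exponential tilt is a hand-picked weight choice for this Gaussian toy example; a balancing method would instead minimize weight dispersion subject to this quadratic form staying below a single threshold parameter.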
In observational studies with time-to-event outcomes, the g-formula can be used to estimate a treatment effect in the presence of confounding factors. However, the asymptotic distribution of the corresponding stochastic process is complicated and thus not suitable for deriving confidence intervals or time-simultaneous confidence bands for the average treatment effect. A common remedy is resampling-based approximation, with Efron's nonparametric bootstrap being the standard tool in practice. We investigate the large-sample properties of three different resampling approaches and prove their asymptotic validity in a setting with time-to-event data subject to competing risks. The use of these approaches is demonstrated by an analysis of the effect of physical activity on the risk of knee replacement among patients with advanced knee osteoarthritis.
Jasmin Rühl, Sarah Friedrich. "Asymptotic properties of resampling-based processes for the average treatment effect in observational studies with competing risks." Scandinavian Journal of Statistics, published 2024-04-25. doi:10.1111/sjos.12714
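Efron's nonparametric bootstrap, the standard tool mentioned above, resamples subjects with replacement and reads off percentiles of the replicated statistic. A generic sketch for a scalar statistic (the paper's setting involves stochastic processes and competing risks, which this toy version does not capture):

```python
import numpy as np

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    # Efron's nonparametric bootstrap: resample observations with replacement
    # and take percentiles of the replicated statistic
    rng = np.random.default_rng(seed)
    reps = np.array([stat(data[rng.integers(0, len(data), len(data))])
                     for _ in range(n_boot)])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=300)   # skewed toy "event times"
lo, hi = bootstrap_ci(x, np.mean)          # 95% percentile interval for the mean
```

For time-simultaneous confidence bands one would bootstrap the whole estimated process rather than a scalar, which is where the asymptotic-validity questions studied in the paper arise.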
Ryad Belhakem, Franck Picard, Vincent Rivoirard, Angelina Roche
Functional Principal Component Analysis is a reference method for dimension reduction of curve data. Its theoretical properties are now well understood in the simplified case where the sample curves are fully observed without noise. However, functional data are noisy and necessarily observed on a finite discretization grid. Common practice is to smooth the data and then compute the functional estimates, but the impact of this denoising step on the procedure's statistical performance is rarely considered. Here we prove new convergence rates for functional principal component estimators. We introduce a double asymptotic framework: one asymptotic corresponds to the sample size and the other to the size of the grid. We prove that estimates based on projection onto histograms achieve optimal rates in a minimax sense. Theoretical results are illustrated on simulated data, and the method is applied to the visualization of genomic data.
Ryad Belhakem, Franck Picard, Vincent Rivoirard, Angelina Roche. "Minimax estimation of functional principal components from noisy discretized functional data." Scandinavian Journal of Statistics, published 2024-04-24. doi:10.1111/sjos.12719
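Projection onto a histogram basis amounts to averaging each discretized curve within bins before computing principal components. A small simulation in that spirit (grid size, bin count, and the rank-2 truth are illustrative choices, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, n_bins = 150, 240, 24                  # curves, grid points, histogram bins
t = np.linspace(0, 1, p)
# rank-2 truth: random scores on two smooth orthonormal components, plus noise
phi1 = np.sqrt(2) * np.sin(2 * np.pi * t)
phi2 = np.sqrt(2) * np.cos(2 * np.pi * t)
scores = rng.normal(size=(n, 2)) * np.array([2.0, 1.0])
X = scores @ np.vstack([phi1, phi2]) + 0.5 * rng.normal(size=(n, p))
# histogram projection: average each curve within each bin (denoising step)
Xh = X.reshape(n, n_bins, p // n_bins).mean(axis=2)
# principal components of the binned data
cov = np.cov(Xh, rowvar=False)
eigval = np.linalg.eigh(cov)[0][::-1]        # eigenvalues, descending
```

Bin averaging shrinks the observation-noise variance by the bin width while barely distorting the smooth components, so the two true components dominate the spectrum; the paper's double asymptotics (in `n` and in `p`) quantify this trade-off.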
In survival analysis, cure models have been developed to account for the presence of cured subjects, who will never experience the event of interest. Mixture cure models with a parametric model for the incidence and a semiparametric model for the survival of the susceptibles are particularly common in practice. Because of the latent cure status, maximum likelihood estimation is performed via the iterative EM algorithm. Here, we focus on the cure probabilities and propose a two-step procedure to improve upon the maximum likelihood estimator when the sample size is not large. The new method is based on presmoothing: we first construct a nonparametric estimator and then project it onto the desired parametric class. We investigate the theoretical properties of the resulting estimator and show through an extensive simulation study for the logistic-Cox model that it outperforms the existing method. Practical use of the method is illustrated through two melanoma datasets.
Eni Musta, Valentin Patilea, Ingrid Van Keilegom. "A two-step estimation procedure for semiparametric mixture cure models." Scandinavian Journal of Statistics, published 2024-04-19. doi:10.1111/sjos.12713
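The two-step idea, presmooth nonparametrically and then project onto the parametric class, can be sketched in a toy version where the incidence indicator is treated as observed (in the real censored-data setting, step 1 would use a suitable survival-based estimator of the cure fraction instead):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=400)
p_true = expit(0.5 + 1.2 * x)        # true logistic incidence curve
y = rng.binomial(1, p_true)          # incidence indicator (observed in this toy version)

# step 1: nonparametric presmoothing with a Nadaraya-Watson estimator
def nw(x0, h=0.4):
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return w @ y / w.sum()

p_hat = np.array([nw(x0) for x0 in x])

# step 2: project the presmoothed curve onto the logistic family
loss = lambda th: np.mean((p_hat - expit(th[0] + th[1] * x)) ** 2)
res = minimize(loss, x0=[0.0, 1.0], method="Nelder-Mead")
a_hat, b_hat = res.x
```

The projection step stabilizes the parametric fit because it targets a smooth curve rather than noisy binary outcomes, which is the intuition behind the small-sample gains reported above; the bandwidth `h = 0.4` is an illustrative choice.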