In a placebo-controlled clinical study, one may calculate the average treatment effect to convey the effect of the active treatment on some outcome. However, if it is speculated that the treatment only has an effect if the patient responds to the treatment, as defined by a certain biomarker response, then it is arguably more relevant to estimate the treatment effect among such responders. We present such a causal parameter that is based on principal stratification and is identified under the exclusion of a treatment effect among the non-responders. We focus on time-to-event outcomes allowing for right censoring, and construct a doubly robust and efficient estimator based on the associated efficient influence function. The properties of the estimator are showcased in a simulation study and the methodology is applied to the Leader trial investigating the effect of liraglutide on the occurrence of cardiovascular events.
{"title":"Estimation of treatment effect among treatment responders with a time-to-event endpoint","authors":"Andreas Nordland, Torben Martinussen","doi":"10.1111/sjos.12706","DOIUrl":"https://doi.org/10.1111/sjos.12706","url":null,"abstract":"In a placebo-controlled clinical study one may calculate the average treatment effect to convey the effect of the active treatment on some outcome. However, if it is speculated that the treatment only has an effect if the patient responds to the treatment defined by a certain biomarker response, then it is arguably more relevant to estimate the treatment effect among such responders. We present such a causal parameter that is based on principal stratification and is identified under the exclusion of a treatment effect among the non-responders. We focus on time-;to-event outcomes allowing for right censoring, and construct a doubly robust and efficient estimator based on the associated efficient influence function. The properties of the estimator are showcased in a simulation study and the methodology is applied to the Leader trial investigating the effect of liraglutide on the occurrence of cardiovascular events.","PeriodicalId":49567,"journal":{"name":"Scandinavian Journal of Statistics","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2024-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139501684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Christophe Denis, Charlotte Dion-Blanc, Eddy Ella-Mintsa, Viet Chi Tran
We study the multiclass classification problem where the features come from a mixture of time-homogeneous diffusions. Specifically, the classes are discriminated by their drift functions, while the diffusion coefficient is common to all classes and unknown. In this framework, we build a plug-in classifier which relies on nonparametric estimators of the drift and diffusion functions. We first establish the consistency of our classification procedure under mild assumptions and then provide rates of convergence under different sets of assumptions. Finally, a numerical study supports our theoretical findings.
{"title":"Nonparametric plug-in classifier for multiclass classification of S.D.E. paths","authors":"Christophe Denis, Charlotte Dion-Blanc, Eddy Ella-Mintsa, Viet Chi Tran","doi":"10.1111/sjos.12702","DOIUrl":"https://doi.org/10.1111/sjos.12702","url":null,"abstract":"We study the multiclass classification problem where the features come from a mixture of time-homogeneous diffusion.Specifically, the classes are discriminated by their drift functions while the diffusion coefficient is common to all classes and unknown.In this framework, we build a plug-in classifier which relies on nonparamateric estimators of the drift and diffusion functions.We first establish the consistency of our classification procedure under mild assumptions and then provide rates of convergence under different setof assumptions. Finally, a numerical study supports our theoretical findings.","PeriodicalId":49567,"journal":{"name":"Scandinavian Journal of Statistics","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2024-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139475214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a detailed discussion of the theoretical properties of quadratic inference function estimators of the parameters in marginal linear regression models. We consider the effect of the choice of working correlation on fundamental questions including the existence of quadratic inference function estimators, their relationship with generalized estimating equations estimators, and the robustness and asymptotic relative efficiency of quadratic inference function and generalized estimating equations estimators. We show that the quadratic inference function estimators do not always exist and propose a way to handle this. We then show that they have unbounded influence functions and can be more or less asymptotically efficient than generalized estimating equations estimators. We also present empirical evidence to demonstrate these results. We conclude that the choice of working correlation can have surprisingly large effects.
{"title":"The effect of the working correlation on fitting models to longitudinal data","authors":"Samuel Muller, Suojin Wang, A. H. Welsh","doi":"10.1111/sjos.12704","DOIUrl":"https://doi.org/10.1111/sjos.12704","url":null,"abstract":"We present a detailed discussion of the theoretical properties of quadratic inference function estimators of the parameters in marginal linear regression models. We consider the effect of the choice of working correlation on fundamental questions including the existence of quadratic inference function estimators, their relationship with generalized estimating equations estimators, and the robustness and asymptotic relative efficiency of quadratic inference function and generalized estimating equations estimators. We show that the quadratic inference function estimators do not always exist and propose a way to handle this. We then show that they have unbounded influence functions and can be more or less asymptotically efficient than generalized estimating equations estimators. We also present empirical evidence to demonstrate these results. We conclude that the choice of working correlation can have surprisingly large effects.","PeriodicalId":49567,"journal":{"name":"Scandinavian Journal of Statistics","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2024-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139077716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A metric tensor for Riemann manifold Monte Carlo particularly suited for non-linear Bayesian hierarchical models is proposed. The metric tensor is built from symmetric positive semidefinite log-density gradient covariance (LGC) matrices, which are also proposed and further explored here. The LGCs generalize the Fisher information matrix by measuring the joint information content and dependence structure of both a random variable and the parameters of said variable. Consequently, positive definite Fisher/LGC-based metric tensors may be constructed not only from the observation likelihoods as is current practice, but also from arbitrarily complicated non-linear prior/latent variable structures, provided the LGC may be derived for each conditional distribution used to construct said structures. The proposed methodology is highly automatic and allows for exploitation of any sparsity associated with the model in question. When implemented in conjunction with a Riemann manifold variant of the recently proposed numerical generalized randomized Hamiltonian Monte Carlo processes, the proposed methodology is highly competitive, in particular for the more challenging target distributions associated with Bayesian hierarchical models.
{"title":"Log-density gradient covariance and automatic metric tensors for Riemann manifold Monte Carlo methods†","authors":"Tore Selland Kleppe","doi":"10.1111/sjos.12705","DOIUrl":"https://doi.org/10.1111/sjos.12705","url":null,"abstract":"A metric tensor for Riemann manifold Monte Carlo particularly suited for non-linear Bayesian hierarchical models is proposed. The metric tensor is built from symmetric positive semidefinite log-density gradient covariance (LGC) matrices, which are also proposed and further explored here. The LGCs generalize the Fisher information matrix by measuring the joint information content and dependence structure of both a random variable and the parameters of said variable. Consequently, positive definite Fisher/LGC-based metric tensors may be constructed not only from the observation likelihoods as is current practice, but also from arbitrarily complicated non-linear prior/latent variable structures, provided the LGC may be derived for each conditional distribution used to construct said structures. The proposed methodology is highly automatic and allows for exploitation of any sparsity associated with the model in question. When implemented in conjunction with a Riemann manifold variant of the recently proposed numerical generalized randomized Hamiltonian Monte Carlo processes, the proposed methodology is highly competitive, in particular for the more challenging target distributions associated with Bayesian hierarchical models.","PeriodicalId":49567,"journal":{"name":"Scandinavian Journal of Statistics","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139053743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we study two important representations for extreme value distributions and their max-domains of attraction (MDA), namely von Mises representation (vMR) and variation representation (VR), which are convenient ways to gain limit results. Both VR and vMR are defined via so-called auxiliary functions ψ. Up to now, however, the set of valid auxiliary functions for vMR has neither been characterized completely nor separated from those for VR. We contribute to the current literature by introducing “universal” auxiliary functions which are valid for both VR and vMR representations for the entire MDA distribution families. Then we identify exactly the sets of valid auxiliary functions for both VR and vMR. Moreover, we propose a method for finding appropriate auxiliary functions with analytically simple structure and provide them for several important distributions.
{"title":"Characterization of valid auxiliary functions for representations of extreme value distributions and their max-domains of attraction","authors":"Miriam Isabel Seifert","doi":"10.1111/sjos.12701","DOIUrl":"https://doi.org/10.1111/sjos.12701","url":null,"abstract":"In this paper we study two important representations for extreme value distributions and their max-domains of attraction (MDA), namely von Mises representation (vMR) and variation representation (VR), which are convenient ways to gain limit results. Both VR and vMR are defined via so-called auxiliary functions <i>ψ</i>. Up to now, however, the set of valid auxiliary functions for vMR has neither been characterized completely nor separated from those for VR. We contribute to the current literature by introducing “universal” auxiliary functions which are valid for both VR and vMR representations for the entire MDA distribution families. Then we identify exactly the sets of valid auxiliary functions for both VR and vMR. Moreover, we propose a method for finding appropriate auxiliary functions with analytically simple structure and provide them for several important distributions.","PeriodicalId":49567,"journal":{"name":"Scandinavian Journal of Statistics","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139053691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zixuan Zhao, Yanglei Song, Wenyu Jiang, Dongsheng Tu
Pocock and Simon's minimization method is a popular approach for covariate-adaptive randomization in clinical trials. Valid statistical inference with data collected under the minimization method requires knowledge of the limiting covariance matrix of within-stratum imbalances, whose existence has only recently been established. In this work, we propose a bootstrap-based estimator for this limit and establish its consistency, in particular by Le Cam's third lemma. As an application, we consider in simulation studies adjustments, based on the proposed estimator, to existing robust tests for treatment effects with survival data. The simulations show that the adjusted tests achieve a size close to the nominal level and that, unlike under other designs, the robust tests without adjustment may suffer from asymptotic size inflation under the minimization method.
{"title":"Consistent covariances estimation for stratum imbalances under minimization method for covariate-adaptive randomization","authors":"Zixuan Zhao, Yanglei Song, Wenyu Jiang, Dongsheng Tu","doi":"10.1111/sjos.12703","DOIUrl":"https://doi.org/10.1111/sjos.12703","url":null,"abstract":"Pocock and Simon's minimization method is a popular approach for covariate-adaptive randomization in clinical trials. Valid statistical inference with data collected under the minimization method requires the knowledge of the limiting covariance matrix of within-stratum imbalances, whose existence is only recently established. In this work, we propose a bootstrap-based estimator for this limit and establish its consistency, in particular, by Le Cam's third lemma. As an application, we consider in simulation studies adjustments to existing robust tests for treatment effects with survival data by the proposed estimator. It shows that the adjusted tests achieve a size close to the nominal level, and unlike other designs, the robust tests without adjustment may have an asymptotic size inflation issue under the minimization method.","PeriodicalId":49567,"journal":{"name":"Scandinavian Journal of Statistics","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139053744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider the construction of confidence bands for survival curves under outcome-dependent stratified sampling. A main challenge of this design is that the data form a biased, dependent sample due to stratification and sampling without replacement. Most of the regression literature approximates this design by Bernoulli sampling, but the variance is then generally overestimated. Even with this approximation, the limiting distribution of the inverse probability weighted Kaplan-Meier estimator involves a general Gaussian process, and hence quantiles of its supremum are not analytically available. In this paper, we provide a rigorous asymptotic theory for the weighted Kaplan-Meier estimator accounting for dependence in the sample. We propose a novel hybrid method that both simulates and bootstraps parts of the limiting process to compute confidence bands with asymptotically correct coverage probability. A simulation study indicates that the proposed bands are appropriate for practical use. A Wilms tumor example is presented.
{"title":"Confidence Bands for Survival Curves from Outcome-Dependent Stratified Samples","authors":"Takumi Saegusa, Peter Nandori","doi":"10.1111/sjos.12700","DOIUrl":"https://doi.org/10.1111/sjos.12700","url":null,"abstract":"We consider the construction of confidence bands for survival curves under the outcome-dependent stratified sampling. A main challenge of this design is that data are a biased dependent sample due to stratification and sampling without replacement. Most literature on regression approximates this design by Bernoulli sampling but variance is generally overestimated. Even with this approximation, the limiting distribution of the inverse probability weighted Kaplan-Meier estimator involves a general Gaussian process, and hence quantiles of its supremum is not analytically available. In this paper, we provide a rigorous asymptotic theory for the weighted Kaplan-Meier estimator accounting for dependence in the sample. We propose the novel hybrid method to both simulate and bootstrap parts of the limiting process to compute confidence bands with asymptotically correct coverage probability. Simulation study indicates that the proposed bands are appropriate for practical use. A Wilms tumor example is presented.","PeriodicalId":49567,"journal":{"name":"Scandinavian Journal of Statistics","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138825587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Subhadra Dasgupta, Siuli Mukhopadhyay, Jonathan Keith
This work is focused on finding G-optimal designs theoretically for kriging models with two-dimensional inputs and separable exponential covariance structures. For design comparison, the notion of evenness of two-dimensional grid designs is developed. The mathematical relationship between the design and the supremum of the mean squared prediction error (SMSPE) function is studied, and optimal designs are then explored for both prospective and retrospective design scenarios. In the case of prospective designs, the new design is developed before the experiment is conducted, and the regularly spaced grid is shown to be the G-optimal design. Retrospective designs are constructed by adding or deleting points from an already existing design. Deterministic algorithms are developed to find the best possible retrospective designs (which minimize the SMSPE). It is found that a more evenly spread design under the G-optimality criterion leads to the best possible retrospective design. For all cases of finding optimal prospective designs and the best possible retrospective designs, both frequentist and Bayesian frameworks are considered. The proposed methodology for finding retrospective designs is illustrated with a spatio-temporal river water quality monitoring experiment.
{"title":"G-optimal grid designs for kriging models","authors":"Subhadra Dasgupta, Siuli Mukhopadhyay, Jonathan Keith","doi":"10.1111/sjos.12699","DOIUrl":"https://doi.org/10.1111/sjos.12699","url":null,"abstract":"This work is focused on finding G -optimal designs theoretically for kriging models with two -dimensional inputs and separable exponential covariance structures. For design comparison, the notion of evenness of two-dimensional grid designs is developed. The mathematical relationship between the design and the supremum of the mean squared prediction error (<i>SMSPE</i>) function is studied and then optimal designs are explored for both prospective and retrospective design scenarios. In the case of prospective designs, the new design is developed before the experiment is conducted and the regularly spaced grid is shown to be the G -optimal design. Retrospective designs are constructed by adding or deleting points from an already existing design. Deterministic algorithms are developed to find the best possible retrospective designs (which minimizes the <i>SMSPE</i>). It is found that a more evenly spread design under the G -optimality criterion leads to the best possible retrospective design. For all the cases of finding the optimal prospective designs and the best possible retrospective designs, both frequentist and Bayesian frameworks have been considered. The proposed methodology for finding retrospective designs is illustrated with a spatio-temporal river water quality monitoring experiment.","PeriodicalId":49567,"journal":{"name":"Scandinavian Journal of Statistics","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138566680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yiming Liu, Guangming Pan, Guangren Yang, Wang Zhou
We propose a new test to investigate the conditional mean dependence between a response variable and the corresponding covariates in high-dimensional regimes. The test statistic is of extreme type and is built on a nonparametric method. The limiting null distribution of the proposed extreme-type statistic under a mild mixing condition is established. Moreover, to make the test more powerful under general structures, we propose a more general test statistic and develop its asymptotic properties. The power analysis of both methods is also considered. For real data analysis, we also propose a new way to conduct feature screening based on our results. To evaluate the performance of our estimators and other methods, extensive simulations are conducted.
{"title":"Nonparametric conditional mean testing via an extreme-type statistic in high dimension","authors":"Yiming Liu, Guangming Pan, Guangren Yang, Wang Zhou","doi":"10.1111/sjos.12697","DOIUrl":"https://doi.org/10.1111/sjos.12697","url":null,"abstract":"We propose a new test to investigate the conditional mean dependence between a response variable and the corresponding covariates in the high dimensional regimes. The test statistic is an extreme-type one built on the nonparametric method. The limiting null distribution of the proposed extreme type statistic under a mild mixing condition is established. Moreover, to make the test more powerful in general structures we propose a more general test statistic and develop its asymptotic properties. The power analysis of both methods is also considered. In real data analysis, we also propose a new way to conduct the feature screening based on our results. To evaluate the performance of our estimators and other methods, extensive simulations are conducted.","PeriodicalId":49567,"journal":{"name":"Scandinavian Journal of Statistics","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138526384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multivariate extreme value distributions are a common choice for modelling multivariate extremes. In high dimensions, however, the construction of flexible and parsimonious models is challenging. We propose to combine bivariate max-stable distributions into a Markov random field with respect to a tree. Although in general not max-stable itself, this Markov tree is attracted by a multivariate max-stable distribution. The latter serves as a tree-based approximation to an unknown max-stable distribution with the given bivariate distributions as margins. Given data, we learn an appropriate tree structure by Prim's algorithm with estimated pairwise upper tail dependence coefficients as edge weights. The distributions of pairs of connected variables can be fitted in various ways. The resulting tree-structured max-stable distribution allows for inference on rare event probabilities, as illustrated on river discharge data from the upper Danube basin.
{"title":"Modelling multivariate extreme value distributions via Markov trees*","authors":"Shuang Hu, Zuoxiang Peng, Johan Segers","doi":"10.1111/sjos.12698","DOIUrl":"https://doi.org/10.1111/sjos.12698","url":null,"abstract":"Multivariate extreme value distributions are a common choice for modelling multivariate extremes. In high dimensions, however, the construction of flexible and parsimonious models is challenging. We propose to combine bivariate max-stable distributions into a Markov random field with respect to a tree. Although in general not max-stable itself, this Markov tree is attracted by a multivariate max-stable distribution. The latter serves as a tree-based approximation to an unknown max-stable distribution with the given bivariate distributions as margins. Given data, we learn an appropriate tree structure by Prim's algorithm with estimated pairwise upper tail dependence coefficients as edge weights. The distributions of pairs of connected variables can be fitted in various ways. The resulting tree-structured max-stable distribution allows for inference on rare event probabilities, as illustrated on river discharge data from the upper Danube basin.","PeriodicalId":49567,"journal":{"name":"Scandinavian Journal of Statistics","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138526422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}