{"title":"Robustness by Reweighting for Kernel Estimators: An Overview","authors":"K. De Brabanter, Joseph De Brabanter","doi":"10.1214/20-sts816","DOIUrl":"https://doi.org/10.1214/20-sts816","url":null,"abstract":"Practitioners of least squares techniques are aware of the dangers posed by outliers in the data. In general, outliers may totally spoil an ordinary least squares analysis. To cope with this problem, statistical techniques have been developed that are not so easily affected by outliers. These methods are called robust or resistant. In this overview paper we illustrate that robust solutions can be acquired by solving a reweighted least squares problem even though the initial solution is not robust. We relate classical robustness results to the most recent advances in robust least squares kernel-based regression, with an emphasis on theoretical results as well as practical examples. Software for iterative reweighting is also made freely available to the user.","PeriodicalId":51172,"journal":{"name":"Statistical Science","volume":" ","pages":""},"PeriodicalIF":5.7,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46726047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
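The reweighting idea in the abstract can be shown with a minimal sketch, assuming a simple straight-line model rather than the paper's kernel estimators; the Huber constant c = 1.345 and the MAD-based scale are standard textbook choices, not taken from the paper:

```python
# Minimal sketch of iteratively reweighted least squares (IRLS) for a
# straight-line fit y = a + b*x with Huber-type weights. Illustrative
# only -- the paper treats the kernel-based (nonparametric) analogue.

def wls(x, y, w):
    """Closed-form weighted least squares for y = a + b*x."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    b = sxy / sxx
    return my - b * mx, b

def irls(x, y, c=1.345, iters=25):
    w = [1.0] * len(x)                 # start from ordinary least squares
    for _ in range(iters):
        a, b = wls(x, y, w)
        r = [yi - (a + b * xi) for xi, yi in zip(x, y)]
        # robust residual scale via the median absolute deviation
        s = max(sorted(abs(ri) for ri in r)[len(r) // 2] / 0.6745, 1e-12)
        # Huber weights: full weight for small residuals, down-weighted tails
        w = [min(1.0, c * s / max(abs(ri), 1e-12)) for ri in r]
    return wls(x, y, w)

x = list(range(10))
y = [2.0 * xi + 1.0 for xi in x]       # true line: intercept 1, slope 2
y[9] = 100.0                           # one gross outlier
a_ols, b_ols = wls(x, y, [1.0] * len(x))
a_rob, b_rob = irls(x, y)
```

The unweighted fit is pulled far from the true slope by the single outlier, while the reweighted fit recovers it even though the starting point (ordinary least squares) is not robust; the same mechanism underlies the kernel-based reweighting surveyed in the paper.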
{"title":"A Conversation with Don Dawson","authors":"Bouchra R. Nasri, B. Rémillard, B. Szyszkowicz, Jean Vaillancourt","doi":"10.1214/21-sts821","DOIUrl":"https://doi.org/10.1214/21-sts821","url":null,"abstract":"Donald Andrew Dawson (Don Dawson) was born in 1937. He received a bachelor’s degree in 1958 and a master’s degree in 1959 from McGill University and a Ph.D. in 1963 from M.I.T. under the supervision of Henry P. McKean, Jr. Following a seven-year appointment as professor at McGill University, he joined Carleton University in 1970, where he remained for the rest of his career. Among his many contributions to the theory of stochastic processes, his work leading to the creation of the Dawson–Watanabe superprocess, and his analysis of its remarkable properties in describing the evolution of populations in space and time, stand out as milestones of modern probability theory. His numerous papers span the whole gamut of contemporary hot areas, notably the study of stochastic evolution equations, measure-valued processes, McKean–Vlasov limits, hierarchical structures, super-Brownian motion, as well as branching, catalytic and historical processes. He has over 200 refereed publications and 8 monographs, with an impressive citation count of more than 7000. He is an elected Fellow of the Royal Society and of the Royal Society of Canada, a Gold Medalist of the Statistical Society of Canada, and an elected Fellow of the Institute of Mathematical Statistics. We conducted this interview to celebrate the outstanding contribution of Don Dawson to 50 years of Stochastics at Carleton University.","PeriodicalId":51172,"journal":{"name":"Statistical Science","volume":" ","pages":""},"PeriodicalIF":5.7,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41927993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Symmetrical and Non-symmetrical Variants of Three-Way Correspondence Analysis for Ordered Variables","authors":"Rosaria Lombardo, Eric J. Beh, P. Kroonenberg","doi":"10.1214/20-sts814","DOIUrl":"https://doi.org/10.1214/20-sts814","url":null,"abstract":"In the framework of multi-way data analysis, this paper presents symmetrical and non-symmetrical variants of three-way correspondence analysis that are suitable when a three-way contingency table is constructed from ordinal variables. In particular, such variables may be modelled using general recurrence formulae to generate orthogonal polynomial vectors instead of singular vectors coming from one of the possible three-way extensions of the singular value decomposition. As we shall see, these polynomials, which until now have been used to decompose two-way contingency tables with ordered variables, also constitute an alternative orthogonal basis for modelling symmetrical and non-symmetrical associations and predictabilities in three-way contingency tables. Consequences with respect to modelling and graphing will be highlighted.","PeriodicalId":51172,"journal":{"name":"Statistical Science","volume":" ","pages":""},"PeriodicalIF":5.7,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48886926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
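As a hedged illustration of the kind of basis this abstract describes (not the authors' exact recurrence formulae), one can build polynomial vectors over ordered categories that are orthonormal with respect to the marginal weights by weighted Gram–Schmidt on the monomials; the scores and weights below are made-up example values:

```python
# Illustrative sketch: polynomial vectors over ordered categories,
# orthonormal in the inner product weighted by the marginal proportions.
# The paper generates the same kind of basis via recurrence formulae.
import numpy as np

def orthonormal_polynomials(scores, weights):
    scores = np.asarray(scores, float)
    w = np.asarray(weights, float)
    w = w / w.sum()
    basis = []
    for deg in range(len(scores)):
        v = scores ** deg
        for b in basis:                       # subtract projections (modified Gram-Schmidt)
            v = v - (w * b * v).sum() * b
        v = v / np.sqrt((w * v * v).sum())    # normalize in the weighted inner product
        basis.append(v)
    return np.column_stack(basis)             # column j holds the degree-j polynomial

scores = [1, 2, 3, 4, 5]                      # ordinal category scores (assumed)
weights = [0.1, 0.2, 0.4, 0.2, 0.1]           # marginal proportions (assumed)
B = orthonormal_polynomials(scores, weights)
```

The degree-0 column is constant, and B' diag(w) B is the identity, which is the orthogonality property exploited when decomposing contingency tables with ordered margins.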
{"title":"Variational Inference for Cutting Feedback in Misspecified Models","authors":"Xue Yu, D. Nott, M. Smith","doi":"10.1214/23-sts886","DOIUrl":"https://doi.org/10.1214/23-sts886","url":null,"abstract":"Bayesian analyses combine information represented by different terms in a joint Bayesian model. When one or more of the terms is misspecified, it can be helpful to restrict the use of information from suspect model components to modify posterior inference. This is called \"cutting feedback\", and both the specification and computation of the posterior for such \"cut models\" are challenging. In this paper, we define cut posterior distributions as solutions to constrained optimization problems, and propose optimization-based variational methods for their computation. These methods are faster than existing Markov chain Monte Carlo (MCMC) approaches for computing cut posterior distributions by an order of magnitude. It is also shown that variational methods allow for the evaluation of computationally intensive conflict checks that can be used to decide whether or not feedback should be cut. Our methods are illustrated in a number of simulated and real examples, including an application where recent methodological advances that combine variational inference and MCMC within the variational optimization are used.","PeriodicalId":51172,"journal":{"name":"Statistical Science","volume":" ","pages":""},"PeriodicalIF":5.7,"publicationDate":"2021-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43472951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Seven Principles for Rapid-Response Data Science: Lessons Learned from Covid-19 Forecasting","authors":"Bin Yu, Chandan Singh","doi":"10.1214/22-sts855","DOIUrl":"https://doi.org/10.1214/22-sts855","url":null,"abstract":"In this article, we take a step back to distill seven principles out of our experience in the spring of 2020, when our 12-person rapid-response team used skills of data science and beyond to help distribute Covid PPE. This process included tapping into domain knowledge of epidemiology and medical logistics chains, curating a relevant data repository, developing models for short-term county-level death forecasting in the US, and building a website for sharing visualization (an automated AI machine). The principles are described in the context of working with Response4Life, a then-new nonprofit organization, to illustrate their necessity. Many of these principles overlap with those in standard data-science teams, but an emphasis is put on dealing with problems that require rapid response, often resembling agile software development.","PeriodicalId":51172,"journal":{"name":"Statistical Science","volume":" ","pages":""},"PeriodicalIF":5.7,"publicationDate":"2021-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47846235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Random Matrix Theory and Its Applications","authors":"A. Izenman","doi":"10.1142/9789814273121","DOIUrl":"https://doi.org/10.1142/9789814273121","url":null,"abstract":"This article reviews the important ideas behind random matrix theory (RMT), which has become a major tool in a variety of disciplines, including mathematical physics, number theory, combinatorics and multivariate statistical analysis. Much of the theory involves ensembles of random matrices that are governed by some probability distribution. Examples include Gaussian ensembles and Wishart–Laguerre ensembles. Interest has centered on studying the spectrum of random matrices, especially the extreme eigenvalues, suitably normalized, for a single Wishart matrix and for two Wishart matrices, for finite and infinite sample sizes in the real and complex cases. The Tracy–Widom Laws for the probability distribution of a normalized largest eigenvalue of a random matrix have become very prominent in RMT. Limiting probability distributions of eigenvalues of a certain random matrix lead to Wigner’s Semicircle Law and Marčenko–Pastur’s Quarter-Circle Law. Several applications of these results in RMT are described in this article.","PeriodicalId":51172,"journal":{"name":"Statistical Science","volume":"1 1","pages":""},"PeriodicalIF":5.7,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41736171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
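The Marčenko–Pastur support mentioned in this abstract is easy to check numerically; this sketch uses assumed simulation settings (n = 400, p = 100) rather than anything from the article:

```python
# Empirical check of the Marchenko-Pastur support: for an n x p matrix X
# of i.i.d. standard normals, the eigenvalues of S = X^T X / n concentrate
# on [(1 - sqrt(g))^2, (1 + sqrt(g))^2], where g = p/n.
import numpy as np

rng = np.random.default_rng(0)
n, p = 400, 100                               # aspect ratio g = 0.25
X = rng.standard_normal((n, p))
eigs = np.linalg.eigvalsh(X.T @ X / n)        # all p sample eigenvalues

g = p / n
lower = (1 - g ** 0.5) ** 2                   # 0.25 for g = 0.25
upper = (1 + g ** 0.5) ** 2                   # 2.25 for g = 0.25
```

At finite n the extreme eigenvalues fluctuate around these edges at order n^(-2/3), the Tracy–Widom scale highlighted in the abstract.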
{"title":"Khinchin’s 1929 Paper on Von Mises’ Frequency Theory of Probability","authors":"L. Verburgt","doi":"10.1214/20-sts798","DOIUrl":"https://doi.org/10.1214/20-sts798","url":null,"abstract":"In 1929, a few years prior to his colleague Kolmogorov’s Grundbegriffe, the leading Russian probabilist Khinchin published a paper in which he commented on the foundational ambitions of von Mises’ frequency theory of probability. This brief introduction provides background and context for the English translation of Khinchin’s historically revealing paper, published as an online supplement.","PeriodicalId":51172,"journal":{"name":"Statistical Science","volume":" ","pages":""},"PeriodicalIF":5.7,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49222524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Statistical Modeling for Practical Pooled Testing During the COVID-19 Pandemic","authors":"S. Comess, H. Wang, S. Holmes, Claire Donnat","doi":"10.1214/22-sts857","DOIUrl":"https://doi.org/10.1214/22-sts857","url":null,"abstract":"Pooled testing offers an efficient solution to the unprecedented testing demands of the COVID-19 pandemic, although with potentially lower sensitivity and increased implementation costs in some settings. Assessments of this trade-off typically assume pooled specimens are independent and identically distributed. Yet, in the context of COVID-19, these assumptions are often violated: testing done on networks (housemates, spouses, co-workers) captures correlated individuals, while infection risk varies substantially across time, place and individuals. Neglecting dependencies and heterogeneity may bias established optimality grids and induce a sub-optimal implementation of the procedure. As a lesson learned from this pandemic, this paper highlights the necessity of integrating field sampling information with statistical modeling to efficiently optimize pooled testing. Using real data, we show that (a) greater gains can be achieved at low logistical cost by exploiting natural correlations (non-independence) between samples, allowing improvements in sensitivity and efficiency of up to 30% and 90%, respectively; and (b) these gains are robust despite substantial heterogeneity across pools (non-identical). Our modeling results complement and extend the observations of Barak et al. (2021), who report an empirical sensitivity well beyond expectations. Finally, we provide an interactive tool for selecting an optimal pool size using contextual information.","PeriodicalId":51172,"journal":{"name":"Statistical Science","volume":" ","pages":""},"PeriodicalIF":5.7,"publicationDate":"2021-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46125680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
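For context, the classical i.i.d. baseline that this paper refines is Dorfman two-stage pooling, whose expected number of tests per person has a textbook closed form; this sketch is background material, not the paper's model:

```python
# Dorfman two-stage pooling under the i.i.d. assumption the paper argues
# real testing networks violate: a pool of k specimens is tested once, and
# only members of positive pools are retested individually. At prevalence
# q, the expected number of tests per person is 1/k + 1 - (1-q)**k.

def expected_tests_per_person(q, k):
    return 1.0 / k + 1.0 - (1.0 - q) ** k

def optimal_pool_size(q, k_max=50):
    # smallest expected cost over candidate pool sizes 2..k_max
    return min(range(2, k_max + 1), key=lambda k: expected_tests_per_person(q, k))

q = 0.01                       # 1% prevalence (assumed for illustration)
k_star = optimal_pool_size(q)
saving = 1.0 - expected_tests_per_person(q, k_star)
```

At 1% prevalence the optimum is a pool of about 11, saving roughly 80% of tests; correlated samples and heterogeneous risk, the paper's focus, shift these classical numbers.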
{"title":"Randomization-Based Test for Censored Outcomes: A New Look at the Logrank Test","authors":"Xinran Li, Dylan S. Small","doi":"10.1214/22-sts851","DOIUrl":"https://doi.org/10.1214/22-sts851","url":null,"abstract":"Two-sample tests with censored outcomes are a classical topic in statistics with wide use even in cutting-edge applications. There are at least two modes of inference used to justify two-sample tests. One is usual superpopulation inference assuming that units are independent and identically distributed (i.i.d.) samples from some superpopulation; the other is finite population inference that relies on the random assignments of units into different groups. When randomization is actually implemented, the latter has the advantage of avoiding distributional assumptions on the outcomes. In this paper, we focus on finite population inference for censored outcomes, which has been less explored in the literature. Moreover, we allow the censoring time to depend on treatment assignment, under which exact permutation inference is unachievable. We find that, surprisingly, the usual logrank test can also be justified by randomization. Specifically, under a Bernoulli randomized experiment with non-informative i.i.d. censoring, the logrank test is asymptotically valid for testing Fisher’s null hypothesis of no treatment effect on any unit. The asymptotic validity of the logrank test does not require any distributional assumption on the potential event times. We further extend the theory to the stratified logrank test, which is useful for randomized block designs and when censoring mechanisms vary across strata. In sum, the developed theory for the logrank test from finite population inference supplements its classical theory from usual superpopulation inference, and helps provide a broader justification for the logrank test.","PeriodicalId":51172,"journal":{"name":"Statistical Science","volume":" ","pages":""},"PeriodicalIF":5.7,"publicationDate":"2021-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47475631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
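The logrank statistic discussed in this abstract is straightforward to compute from risk sets; here is a self-contained sketch of the standard (O - E) / sqrt(V) form on made-up, uncensored data (a simplification of the general censored case):

```python
# Standardized logrank statistic for two groups. For simplicity every
# observation here is an event (no censoring); the paper's setting is
# the general censored case.
from math import sqrt

def logrank_z(times1, times2):
    events = sorted(set(times1) | set(times2))
    o1 = e1 = v = 0.0
    for t in events:
        n1 = sum(1 for s in times1 if s >= t)   # at risk in group 1
        n2 = sum(1 for s in times2 if s >= t)   # at risk in group 2
        d1 = times1.count(t)                    # events in group 1 at t
        d2 = times2.count(t)
        n, d = n1 + n2, d1 + d2
        o1 += d1                                # observed events, group 1
        e1 += d * n1 / n                        # expected under the null
        if n > 1:                               # hypergeometric variance term
            v += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    return (o1 - e1) / sqrt(v)

same = list(range(1, 11))
z_same = logrank_z(same, list(same))    # identical groups: statistic is 0
early = list(range(1, 11))              # all group-1 events by time 10
late = list(range(11, 21))              # all group-2 events after time 10
z_diff = logrank_z(early, late)         # clearly separated survival
```

With identical groups the observed and expected counts cancel at every event time, while fully separated event times give a large standardized statistic, the behavior the randomization-based theory above justifies without distributional assumptions.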
{"title":"Stein’s Method Meets Computational Statistics: A Review of Some Recent Developments","authors":"Andreas Anastasiou, A. Barp, F. Briol, B. Ebner, Robert E. Gaunt, Fatemeh Ghaderinezhad, Jackson Gorham, A. Gretton, Christophe Ley, Qiang Liu, Lester W. Mackey, C. Oates, G. Reinert, Yvik Swan","doi":"10.1214/22-sts863","DOIUrl":"https://doi.org/10.1214/22-sts863","url":null,"abstract":"Stein's method compares probability distributions through the study of a class of linear operators called Stein operators. While mainly studied in probability and used to underpin theoretical statistics, Stein's method has led to significant advances in computational statistics in recent years. The goal of this survey is to bring together some of these recent developments and, in doing so, to stimulate further research into the successful field of Stein's method and statistics. The topics we discuss include tools to benchmark and compare sampling methods such as approximate Markov chain Monte Carlo, deterministic alternatives to sampling methods, control variate techniques, parameter estimation and goodness-of-fit testing.","PeriodicalId":51172,"journal":{"name":"Statistical Science","volume":" ","pages":""},"PeriodicalIF":5.7,"publicationDate":"2021-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43029634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
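The identity at the core of Stein's method for the standard normal, E[f'(X) - X f(X)] = 0, can be checked by Monte Carlo; the test function f(x) = sin x and all simulation settings below are illustrative assumptions, not taken from the survey:

```python
# Monte Carlo check of Stein's identity for the standard normal:
# E[f'(X) - X f(X)] = 0 when X ~ N(0,1). A non-normal sample with the
# same mean and variance gives a visibly nonzero value, which is the
# mechanism Stein discrepancies exploit for goodness-of-fit testing.
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

def stein_stat(x):
    # test function f(x) = sin(x), so f'(x) = cos(x)
    return np.mean(np.cos(x) - x * np.sin(x))

gauss = rng.standard_normal(N)
unif = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), N)   # mean 0, variance 1

val_gauss = stein_stat(gauss)    # approximately 0
val_unif = stein_stat(unif)      # about cos(sqrt(3)), roughly -0.16
```

Averaging the Stein operator over many test functions, rather than one fixed f, is what turns this check into the kernelized Stein discrepancies used to benchmark approximate MCMC samplers in the survey.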