Pub Date : 2025-03-11DOI: 10.1016/j.jspi.2025.106289
Hui Ding , Mei Yao , Riquan Zhang , Zhenglong Zhang , Hanbing Zhu
In this paper we propose varying-coefficient single-index quantile regression models, which include most existing quantile regression models as special cases. We adopt a B-spline basis approximation to estimate the nonparametric components and use the “delete-one-component” method to construct the check loss function. Under mild conditions, we establish the asymptotic theory of the proposed estimators for both the parametric and nonparametric components. Moreover, we propose a rank-score-based test to examine whether the varying-coefficient functions are constant. The finite-sample performance of the proposed estimation method is illustrated by simulation studies and an empirical analysis of two real datasets.
{"title":"Estimation and testing for varying-coefficient single-index quantile regression models","authors":"Hui Ding , Mei Yao , Riquan Zhang , Zhenglong Zhang , Hanbing Zhu","doi":"10.1016/j.jspi.2025.106289","DOIUrl":"10.1016/j.jspi.2025.106289","url":null,"abstract":"<div><div>In this paper we propose varying-coefficient single-index quantile regression models, which includes most existing quantile regression models. We adopt B-spline basis approximation for the estimation of nonparametric components and use the “delete-one-component” method to construct check loss function. Under some mild conditions, we establish asymptotic theory of the proposed estimators for both the parametric and nonparametric components. Moreover, we propose a rank score based test to examine whether the varying-coefficient functions are constant. The finite sample performance of the proposed estimation method is illustrated by simulation studies and an empirical analysis of two real datasets.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"239 ","pages":"Article 106289"},"PeriodicalIF":0.8,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143629064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
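The check-loss minimisation behind spline-based quantile regression can be sketched as a linear programme. The snippet below is an illustrative single-covariate version, not the paper's varying-coefficient single-index model; knot placement, sample sizes, and function names are our choices:

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import linprog

def bspline_design(x, knots, degree=3):
    """Design matrix with one column per B-spline basis function."""
    t = np.r_[[knots[0]] * degree, knots, [knots[-1]] * degree]  # clamped knot vector
    n_basis = len(t) - degree - 1
    return np.column_stack([BSpline(t, np.eye(n_basis)[j], degree)(x)
                            for j in range(n_basis)])

def quantile_spline_fit(x, y, tau, knots, degree=3):
    """Minimise the check (pinball) loss over spline coefficients via the
    standard LP formulation: y = B @ beta + u_plus - u_minus, u_plus, u_minus >= 0."""
    B = bspline_design(x, knots, degree)
    n, p = B.shape
    cost = np.r_[np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)]
    A_eq = np.hstack([B, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(cost, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return B @ res.x[:p]

def check_loss(resid, tau):
    """Average pinball loss of a residual vector."""
    return np.mean(np.maximum(tau * resid, (tau - 1) * resid))

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, 200)
fitted = quantile_spline_fit(x, y, tau=0.5, knots=np.linspace(0.0, 1.0, 6))
```

The fitted median curve should beat a constant median fit in check loss, and roughly half of the observations should lie below it.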
Pub Date : 2025-03-04DOI: 10.1016/j.jspi.2025.106286
Gecheng Chen, Rui Tuo
This work focuses on the design of multi-fidelity computer experiments. We consider the autoregressive Gaussian process model proposed by Kennedy and O’Hagan (2000) and seek the optimal nested design that maximizes prediction accuracy subject to a budget constraint. An approximate solution is identified through the idea of multi-level approximation and recent error bounds for Gaussian process regression. The proposed (approximately) optimal designs admit a simple analytical form. We prove that, to achieve the same prediction accuracy, the proposed optimal multi-fidelity design requires much lower computational cost than any single-fidelity design in the asymptotic sense. Numerical studies confirm this theoretical assertion.
{"title":"Fixed-budget optimal designs for multi-fidelity computer experiments","authors":"Gecheng Chen, Rui Tuo","doi":"10.1016/j.jspi.2025.106286","DOIUrl":"10.1016/j.jspi.2025.106286","url":null,"abstract":"<div><div>This work focuses on the design of experiments of multi-fidelity computer experiments. We consider the autoregressive Gaussian process model proposed by Kennedy and O’Hagan (2000) and the optimal nested design that maximizes the prediction accuracy subject to a budget constraint. An approximate solution is identified through the idea of multi-level approximation and recent error bounds of Gaussian process regression. The proposed (approximately) optimal designs admit a simple analytical form. We prove that, to achieve the same prediction accuracy, the proposed optimal multi-fidelity design requires much lower computational cost than any single-fidelity design in the asymptotic sense. Numerical studies confirm this theoretical assertion.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"239 ","pages":"Article 106286"},"PeriodicalIF":0.8,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143579798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
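The structural requirement behind such designs, that the high-fidelity points be nested inside the low-fidelity design, is easy to illustrate in one dimension. This toy sketch only constructs a nested pair of given sizes; the paper's budget-optimal allocation of run sizes is not reproduced here:

```python
import numpy as np

def nested_designs(n_low, n_high, domain=(0.0, 1.0)):
    """One-dimensional nested designs: the high-fidelity points form a subset
    of the low-fidelity grid, as the autoregressive multi-fidelity model
    requires paired runs at the expensive code's inputs."""
    assert n_high <= n_low
    low = np.linspace(domain[0], domain[1], n_low)
    # take an (approximately) evenly spread subset for the expensive code
    idx = np.round(np.linspace(0, n_low - 1, n_high)).astype(int)
    return low, low[idx]

low, high = nested_designs(33, 9)
```

Nestedness means every high-fidelity input also appears in the low-fidelity design.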
Pub Date : 2025-03-01DOI: 10.1016/j.jspi.2025.106278
Tian Jiang
We consider nonparametric regression with predictors missing at random (MAR), a univariate regression component of interest, and a scale function that depends on both the predictor and auxiliary covariates. The asymptotic theory shows that both heteroscedasticity and the MAR mechanism affect the sharp constant of the minimax mean integrated squared error (MISE) convergence. We propose a data-driven procedure adaptive to the missing mechanism and to the unknown smoothness of the estimated regression function. The estimator preserves the optimal convergence rate and can achieve sharp minimaxity when predictors are missing completely at random (MCAR).
{"title":"Nonparametric regression with predictors missing at random and the scale depending on auxiliary covariates","authors":"Tian Jiang","doi":"10.1016/j.jspi.2025.106278","DOIUrl":"10.1016/j.jspi.2025.106278","url":null,"abstract":"<div><div>Nonparametric regression with missing at random (MAR) predictors, univariate regression component of interest, and the scale function depending on both the predictor and auxiliary covariates, is considered. The asymptotic theory suggests that both heteroscedasticity and MAR mechanism affect the sharp constant of the minimax mean integrated squared error (MISE) convergence. We propose a data-driven procedure adaptive to the missing mechanism and unknown smoothness of the estimated regression function. The estimator preserves the optimal convergence rate and can achieve sharp minimaxity when predictors are missing completely at random (MCAR).</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"239 ","pages":"Article 106278"},"PeriodicalIF":0.8,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143552811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
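Under MCAR, a complete-case kernel estimator is the natural baseline against which such adaptive procedures are measured. A minimal sketch (Gaussian kernel and a hand-picked bandwidth, both our choices, not the paper's data-driven procedure):

```python
import numpy as np

def nw_complete_case(x0, X, Y, h):
    """Nadaraya-Watson estimate at x0 using only the complete cases,
    i.e. observations whose predictor is not missing."""
    obs = ~np.isnan(X)
    w = np.exp(-0.5 * ((X[obs] - x0) / h) ** 2)   # Gaussian kernel weights
    return np.sum(w * Y[obs]) / np.sum(w)

rng = np.random.default_rng(1)
n = 2000
X = rng.uniform(-1.0, 1.0, n)
Y = np.sin(np.pi * X) + rng.normal(0.0, 0.3, n)
X[rng.random(n) < 0.3] = np.nan               # MCAR: 30% of predictors lost
est = nw_complete_case(0.5, X, Y, h=0.1)      # true regression value is sin(pi/2) = 1
```

Because the missingness is independent of everything else, dropping incomplete cases leaves the estimator consistent, at the cost of a smaller effective sample.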
Pub Date : 2025-02-21DOI: 10.1016/j.jspi.2025.106274
Shanchao Yang , Qi Lan , Xueyan Xu , Zhu Liang
The diffusion process is widely used in finance, and many scholars have paid close attention to the statistical estimation of diffusion processes. The literature has discussed nonparametric kernel estimation of the drift and diffusion functions and proved consistency and asymptotic normality of the estimators, but the convergence rate of the asymptotic normality has not yet been established. In this paper, we derive the convergence rate of the uniformly asymptotic normality of the drift function estimator by using the method of large and small blocks for stationary ρ-mixing diffusion processes. Under the optimal bandwidth, the rate of uniformly asymptotic normality reaches n^{-2/15}. To prove the results, we put forward some inequalities for mixing processes with variable sampling intervals, which play a key role in the study of the limit theory.
{"title":"Uniformly asymptotic normality of estimation of the drift function for diffusion processes","authors":"Shanchao Yang , Qi Lan , Xueyan Xu , Zhu Liang","doi":"10.1016/j.jspi.2025.106274","DOIUrl":"10.1016/j.jspi.2025.106274","url":null,"abstract":"<div><div>The diffusion process is widely used in finance, and many scholars pay close attention to the statistical estimation of diffusion processes. Some literature has discussed the non parametric kernel estimation of drift and diffusion functions, and proved the consistency and asymptotic normality of the estimators, but the convergence rate of asymptotic normality has not been discussed yet. In this paper, we derive the convergence rate of uniformly asymptotic normality of the drift function estimator by using the method of large and small blocks for stationary and <span><math><mi>ρ</mi></math></span>-mixing diffusion process. In the case of optimal bandwidth, the rate of uniformly asymptotic normality reaches <span><math><msup><mrow><mi>n</mi></mrow><mrow><mo>−</mo><mn>2</mn><mo>/</mo><mn>15</mn></mrow></msup></math></span>. In order to prove the results, we put forward some inequalities for mixing processes with variable sampling interval, which play a key role in the study of limit theory.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"239 ","pages":"Article 106274"},"PeriodicalIF":0.8,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143579797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
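In its simplest form, the drift estimator in question is a kernel average of scaled increments. A toy version with a uniform sampling interval (the paper's theory covers variable intervals; process parameters and bandwidth here are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler simulation of an Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW
theta, sigma, dt, n = 1.0, 0.5, 0.01, 100_000
X = np.empty(n)
X[0] = 0.0
noise = rng.normal(0.0, np.sqrt(dt), n - 1)
for i in range(n - 1):
    X[i + 1] = X[i] - theta * X[i] * dt + sigma * noise[i]

def drift_nw(x0, X, dt, h):
    """Kernel (Nadaraya-Watson) drift estimate: a local average of the
    scaled increments (X_{i+1} - X_i)/dt around the point x0."""
    incr = (X[1:] - X[:-1]) / dt
    w = np.exp(-0.5 * ((X[:-1] - x0) / h) ** 2)
    return np.sum(w * incr) / np.sum(w)

est = drift_nw(0.4, X, dt, h=0.05)   # true drift at 0.4 is -theta*0.4 = -0.4
```

The noisy increments average out near the target point, recovering the drift up to kernel bias and sampling error.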
Pub Date : 2025-02-08DOI: 10.1016/j.jspi.2025.106276
Daniel Gaigall , Julian Gerstenberg
Conditional excess distribution modelling is a widely used technique, for instance in financial and insurance mathematics or in survival analysis. Classical theory treats the thresholds as fixed values. In contrast, using empirical quantiles as thresholds offers advantages for the design of the statistical experiment. Either way, the modeller is in a non-standard situation and runs the risk of applying statistical procedures improperly. From the points of view of both statistical planning and inference, a detailed discussion is therefore needed. For this purpose, we treat both methods and demonstrate the necessity of taking the characteristics of the two approaches into account in practice. In detail, we derive general statements for empirical processes related to the conditional excess distribution in both situations. As examples, estimation of the mean excess and of the conditional Value-at-Risk are given. We apply our findings to goodness-of-fit and homogeneity testing problems for the conditional excess distribution and obtain new results of independent interest.
{"title":"Fixed values versus empirical quantiles as thresholds in excess distribution modelling","authors":"Daniel Gaigall , Julian Gerstenberg","doi":"10.1016/j.jspi.2025.106276","DOIUrl":"10.1016/j.jspi.2025.106276","url":null,"abstract":"<div><div>Conditional excess distribution modelling is a widely used technique, in financial and insurance mathematics or survival analysis, for instance. Classical theory considers the thresholds as fixed values. In contrast, the use of empirical quantiles as thresholds offers advantages with respect to the design of the statistical experiment. Either way, the modeller is in a non-standard situation and runs in the risk of improper usage of statistical procedures. From both points of view, statistical planning and inference, a detailed discussion is requested. For this purpose, we treat both methods and demonstrate the necessity taking into account the characteristics of the approaches in practice. In detail, we derive general statements for empirical processes related to the conditional excess distribution in both situations. As examples, estimating the mean excess and the conditional Value-at-Risk are given. We apply our findings for the testing problems of goodness-of-fit and homogeneity for the conditional excess distribution and obtain new results of outstanding interest.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"238 ","pages":"Article 106276"},"PeriodicalIF":0.8,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143379110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
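The difference between the two threshold conventions is easy to state in code: a fixed value versus an empirical quantile, which is itself random. For the exponential distribution the mean excess is constant in the threshold (memorylessness), so both versions estimate the same quantity, which makes a convenient sanity check. All parameter choices below are illustrative:

```python
import numpy as np

def mean_excess(sample, threshold):
    """Average exceedance over the threshold, using only the exceeding points."""
    exc = sample[sample > threshold] - threshold
    return exc.mean()

rng = np.random.default_rng(3)
x = rng.exponential(scale=2.0, size=100_000)

fixed_u = 3.0                     # classical convention: a fixed threshold
emp_u = np.quantile(x, 0.9)       # alternative: an empirical quantile (random!)
me_fixed = mean_excess(x, fixed_u)
me_emp = mean_excess(x, emp_u)    # both should be close to the scale, 2.0
```

The estimates agree here, but, as the paper stresses, the sampling distributions of statistics built on the two conventions differ and must be analysed separately.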
Pub Date : 2025-02-06DOI: 10.1016/j.jspi.2025.106275
Jingying Zhou , Hui Jiang , Weigang Wang
In this paper, under discrete observations, we study the asymptotic consistency, asymptotic normality and Cramér-type moderate deviations of Yule’s nonsense correlation statistic for two Ornstein–Uhlenbeck processes. As applications, the global and local powers of the test for independence between two Ornstein–Uhlenbeck processes are shown to approach one at exponential rates. Simulation experiments confirm the theoretical results, and empirical applications illustrate the usefulness of the statistic and the asymptotic theory. The main tools are deviation inequalities and Cramér-type moderate deviations for multiple Wiener–Itô integrals, together with asymptotic analysis techniques.
{"title":"Asymptotic normality and Cramér-type moderate deviations of Yule’s nonsense correlation statistic for Ornstein–Uhlenbeck processes","authors":"Jingying Zhou , Hui Jiang , Weigang Wang","doi":"10.1016/j.jspi.2025.106275","DOIUrl":"10.1016/j.jspi.2025.106275","url":null,"abstract":"<div><div>In this paper, under discrete observations, we study the asymptotic consistency, asymptotic normality and Cramér-type moderate deviations of Yule’s nonsense correlation statistic for two Ornstein–Uhlenbeck processes. As applications, the global and local powers of the hypothesis testing for the independence between two Ornstein–Uhlenbeck processes are shown to approach one at exponential rates. Simulation experiments are conducted to confirm the theoretical results. Moreover, empirical applications illustrate the usefulness of the above mentioned statistic and the asymptotic theory. The main methods consist of the deviation inequalities and Cramér-type moderate deviations for multiple Wiener–Itô integrals and asymptotic analysis techniques.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"238 ","pages":"Article 106275"},"PeriodicalIF":0.8,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143349639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
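Yule's nonsense correlation is simply the empirical Pearson correlation of two discretely observed processes; even under independence it does not concentrate at zero at the usual parametric rate, which is what makes its limit theory delicate. A quick simulation (Euler scheme; parameters are our choices):

```python
import numpy as np

def simulate_ou(theta, sigma, dt, n, rng):
    """Euler discretisation of dX = -theta*X dt + sigma dW, started at 0."""
    x = np.empty(n)
    x[0] = 0.0
    z = rng.normal(0.0, np.sqrt(dt), n - 1)
    for i in range(n - 1):
        x[i + 1] = x[i] * (1.0 - theta * dt) + sigma * z[i]
    return x

rng = np.random.default_rng(4)
a = simulate_ou(0.5, 1.0, 0.01, 5000, rng)
b = simulate_ou(0.5, 1.0, 0.01, 5000, rng)   # independent of a
# Yule's "nonsense" correlation of the two discretely observed paths
r = np.corrcoef(a, b)[0, 1]
```

A single draw of r is a legitimate correlation coefficient but typically non-negligible in magnitude despite the independence of the two paths.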
Pub Date : 2025-02-06DOI: 10.1016/j.jspi.2025.106273
Ansgar Steland
Gumbel-type extreme value theory for arrays of discrete Gaussian random fields is studied and applied to some classes of discretely sampled, approximately locally self-similar Gaussian processes, especially micro-noise models. Non-Gaussian discrete random fields are handled by taking the maximum of local averages of the raw data or residuals. Based on novel weak approximations, with rates, for (weighted) partial sums of spatial linear processes, including results under a class of local alternatives, we establish sufficient conditions for Gumbel-type asymptotics of maximum-type detection rules that detect peaks and suspicious areas in image data and, more generally, random field data. The results are examined by simulations and illustrated by analyzing CT brain image data.
{"title":"Detection of suspicious areas in non-stationary Gaussian fields and locally averaged non-Gaussian linear fields","authors":"Ansgar Steland","doi":"10.1016/j.jspi.2025.106273","DOIUrl":"10.1016/j.jspi.2025.106273","url":null,"abstract":"<div><div>Gumbel-type extreme value theory for arrays of discrete Gaussian random fields is studied and applied to some classes of discretely sampled approximately locally self-similar Gaussian processes, especially micro-noise models. Non-Gaussian discrete random fields are handled by considering the maximum of local averages of raw data or residuals. Based on some novel weak approximations with rate for (weighted) partial sums for spatial linear processes including results under a class of local alternatives, sufficient conditions for Gumbel-type asymptotics of maximum-type detection rules to detect peaks and suspicious areas in image data and, more generally, random field data, are established. The results are examined by simulations and illustrated by analyzing CT brain image data.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"238 ","pages":"Article 106273"},"PeriodicalIF":0.8,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143349644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
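The local-averaging step for non-Gaussian fields amounts to taking block means before the maximum, so that the maximum statistic is driven by approximately Gaussian averages. A toy 2-D detection example with a planted mean shift (block size, field size and shift magnitude are our choices):

```python
import numpy as np

def block_max_mean(field, b):
    """Maximum over all non-overlapping b-by-b block averages of a 2-D field,
    returning the statistic and the block index where it is attained."""
    n0, n1 = (field.shape[0] // b) * b, (field.shape[1] // b) * b
    blocks = field[:n0, :n1].reshape(n0 // b, b, n1 // b, b).mean(axis=(1, 3))
    return blocks.max(), np.unravel_index(blocks.argmax(), blocks.shape)

rng = np.random.default_rng(5)
img = rng.standard_normal((120, 120))
img[40:48, 80:88] += 3.0                  # a planted "suspicious area"
stat, loc = block_max_mean(img, 8)        # planted block sits at index (5, 10)
```

Averaging over 64 pixels shrinks the noise standard deviation to 1/8, so the shifted block dominates the maximum and is located correctly.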
Pub Date : 2025-01-28DOI: 10.1016/j.jspi.2025.106272
Riddhiman Saha , Priyam Das , Nilanjana Laha
In this paper, we consider the two-sample location shift model, a classic semiparametric model introduced by Stein (1956). This model is known for its adaptive nature, enabling nonparametric estimation with full parametric efficiency. Existing nonparametric estimators of the location shift often depend on external tuning parameters, which restricts their practical applicability (Van et al., 1998). We demonstrate that an additional assumption of log-concavity on the underlying density can remove the need for tuning parameters. We propose a one-step estimator of the location shift that uses log-concave density estimation to obtain a tuning-free estimate of the efficient influence function. While we use a truncated version of the one-step estimator to establish adaptivity theoretically, our simulations indicate that the one-step estimator performs best with zero truncation, eliminating the need for tuning in practice. Notably, the efficiency of the truncated one-step estimators increases steadily as the truncation level decreases, and estimators with low truncation levels perform almost identically to the estimator with zero truncation. We apply our method to investigate the location shift in the distribution of Spanish annual household incomes following the 2008 financial crisis.
{"title":"The two-sample location shift model under log-concavity","authors":"Riddhiman Saha , Priyam Das , Nilanjana Laha","doi":"10.1016/j.jspi.2025.106272","DOIUrl":"10.1016/j.jspi.2025.106272","url":null,"abstract":"<div><div>In this paper, we consider the two-sample location shift model, a classic semiparametric model introduced by Stein(1956). This model is known for its adaptive nature, enabling nonparametric estimation with full parametric efficiency. Existing nonparametric estimators of the location shift often depend on external tuning parameters, which restricts their practical applicability Vanet al. (1998). We demonstrate that introducing an additional assumption of log-concavity on the underlying density can alleviate the need for tuning parameters. We propose a one step estimator for location shift estimation, utilizing log-concave density estimation techniques to facilitate tuning-free estimation of the efficient influence function. While we use a truncated version of the one step estimator to theoretically demonstrate adaptivity, our simulations indicate that the one step estimators perform best with zero truncation, eliminating the need for tuning during practical implementation. Notably, the efficiency of the truncated one step estimators steadily increases as the truncation level decreases, and those with low levels of truncation exhibit nearly identical empirical performance to the estimator with zero truncation. We apply our method to investigate the location shift in the distribution of Spanish annual household incomes following the 2008 financial crisis.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"238 ","pages":"Article 106272"},"PeriodicalIF":0.8,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143150096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
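For comparison, a classical tuning-free location-shift estimator is the Hodges–Lehmann median of pairwise differences. It is not the paper's one-step estimator and does not attain full semiparametric efficiency, but it illustrates the tuning-free target; the distributions and shift below are our choices:

```python
import numpy as np

def hodges_lehmann_shift(x, y):
    """Median of all pairwise differences y_j - x_i: a classical,
    tuning-parameter-free estimate of the location shift between two samples."""
    return np.median(np.subtract.outer(y, x))

rng = np.random.default_rng(7)
x = rng.logistic(0.0, 1.0, 400)
y = rng.logistic(1.5, 1.0, 400)          # second sample shifted by 1.5
shift = hodges_lehmann_shift(x, y)
```

No bandwidth or truncation level is needed; the estimate is determined entirely by the two samples.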
Pub Date : 2025-01-25DOI: 10.1016/j.jspi.2025.106271
Jian Zhang , Tong Wang
The skew normal model suffers from well-known inferential drawbacks: the Fisher information is singular when the model is close to symmetry, and the maximum likelihood estimate can diverge. This causes large variation in the conventional maximum likelihood estimate. To address these drawbacks, Azzalini and Arellano-Valle (2013) introduced maximum penalised likelihood estimation (MPLE), which subtracts a penalty function, with a pre-specified penalty coefficient, from the log-likelihood. Here, we propose a cross-validated MPLE to improve performance when the underlying model is close to symmetry. We develop a theory for MPLE in which an asymptotic rate for the cross-validated penalty coefficient is derived, and we further show that the proposed cross-validated MPLE is asymptotically efficient under certain conditions. In simulation studies and a real data application, we demonstrate that the proposed estimator can outperform the conventional MPLE when the model is close to symmetry.
{"title":"On cross-validated estimation of skew normal model","authors":"Jian Zhang , Tong Wang","doi":"10.1016/j.jspi.2025.106271","DOIUrl":"10.1016/j.jspi.2025.106271","url":null,"abstract":"<div><div>Skew normal model suffers from inferential drawbacks, namely singular Fisher information when it is close to symmetry and diverging of maximum likelihood estimation. This causes a large variation of the conventional maximum likelihood estimate. To address the above drawbacks, Azzalini and Arellano-Valle (2013) introduced maximum penalised likelihood estimation (MPLE) by subtracting a penalty function from the log-likelihood function with a pre-specified penalty coefficient. Here, we propose a cross-validated MPLE to improve its performance when the underlying model is close to symmetry. We develop a theory for MPLE, where an asymptotic rate for the cross-validated penalty coefficient is derived. We further show that the proposed cross-validated MPLE is asymptotically efficient under certain conditions. In simulation studies and a real data application, we demonstrate that the proposed estimator can outperform the conventional MPLE when the model is close to symmetry.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"238 ","pages":"Article 106271"},"PeriodicalIF":0.8,"publicationDate":"2025-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143150094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
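The penalisation idea can be sketched with an illustrative ridge-type penalty on the slant parameter. This is not the exact penalty of Azzalini and Arellano-Valle (2013), and the penalty coefficient here is fixed rather than cross-validated; it only shows how penalising the slant keeps the estimate finite on nearly symmetric data:

```python
import numpy as np
from scipy import optimize, stats

def penalised_skewnorm_fit(x, penalty=1.0):
    """Penalised ML fit of a skew normal (slant alpha, location, scale):
    negative log-likelihood plus an illustrative ridge penalty on alpha."""
    def objective(p):
        alpha, loc, log_scale = p
        nll = -np.sum(stats.skewnorm.logpdf(x, alpha, loc, np.exp(log_scale)))
        return nll + penalty * alpha ** 2
    res = optimize.minimize(objective,
                            x0=[0.1, np.mean(x), np.log(np.std(x))],
                            method="Nelder-Mead")
    alpha, loc, log_scale = res.x
    return alpha, loc, np.exp(log_scale)

rng = np.random.default_rng(6)
x = rng.normal(0.0, 1.0, 500)            # symmetric data: true slant is 0
alpha_hat, loc_hat, scale_hat = penalised_skewnorm_fit(x)
```

Without the penalty the slant estimate is unstable near symmetry; with it, the fitted slant stays bounded and the location and scale are recovered sensibly.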
Pub Date : 2025-01-06DOI: 10.1016/j.jspi.2024.106260
Xiaoguang Wang , Rong Hu , Mengyu Li
Predicting patients’ survival outcomes is a fundamental task in clinical research. As an extension of the Cox proportional hazards model, the time-dependent coefficient Cox model is typically used for time-to-event data with time-dependent effects. When the number of covariates is large, the curse of dimensionality affects most existing methods. To overcome this limitation and improve predictive performance, a semiparametric model averaging approach is proposed for the time-dependent coefficient Cox model. We introduce a novel criterion to estimate the model weights and establish its theoretical properties. Extensive simulation studies compare the proposed technique with existing competitive methods, and a real clinical data set is analyzed to illustrate the advantages of our approach.
{"title":"Model averaging prediction for survival data with time-dependent effects","authors":"Xiaoguang Wang , Rong Hu , Mengyu Li","doi":"10.1016/j.jspi.2024.106260","DOIUrl":"10.1016/j.jspi.2024.106260","url":null,"abstract":"<div><div>It is a fundamental task to predict patients’ survival outcomes in clinical research. As an extension of the Cox proportional hazards model, the time-dependent coefficient Cox model is typically utilized for time-to-event data with time-dependent effects. When the number of covariates is large, the curse of dimensionality emerges for most existing methods. To overcome the limitation and improve predictive performance, a semiparametric model averaging approach is proposed for the time-dependent coefficient Cox model. We introduce a novel criterion to estimate model weights and demonstrate its theoretical properties. Extensive simulation studies are conducted to compare the proposed technique with existing competitive methods. A real clinical data set is also analyzed to illustrate the advantages of our approach.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"238 ","pages":"Article 106260"},"PeriodicalIF":0.8,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143150095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
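Generic model averaging estimates simplex weights for the candidate models by minimising a prediction criterion. As a stand-in for the paper's criterion (which is tailored to censored survival data and not reproduced here), the sketch below uses non-negative least squares on uncensored responses, with the weights renormalised to sum to one:

```python
import numpy as np
from scipy.optimize import nnls

def averaging_weights(pred_matrix, target):
    """Non-negative least-squares weights for combining candidate model
    predictions (columns of pred_matrix), renormalised to the simplex."""
    w, _ = nnls(pred_matrix, target)
    if w.sum() == 0.0:
        return np.full(pred_matrix.shape[1], 1.0 / pred_matrix.shape[1])
    return w / w.sum()

rng = np.random.default_rng(8)
n = 300
truth = rng.normal(0.0, 1.0, n)
preds = np.column_stack([truth + rng.normal(0.0, 0.2, n),   # accurate candidate
                         truth + rng.normal(0.0, 2.0, n)])  # noisy candidate
w = averaging_weights(preds, truth)
```

The criterion automatically shifts weight toward the more accurate candidate model.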