Confidence bounds for the true discovery proportion based on the exact distribution of the number of rejections
Friederike Preusse, Anna Vesely, Thorsten Dickhaus
Pub Date: 2024-12-13 | DOI: 10.1007/s10463-024-00920-x | Annals of the Institute of Statistical Mathematics, 77(2), 191–216

In multiple hypothesis testing it has become widely popular to make inference on the true discovery proportion (TDP) of a set \(\mathscr{M}\) of null hypotheses. This approach is useful in several application fields, such as neuroimaging and genomics. Several procedures to compute simultaneous lower confidence bounds for the TDP have been suggested in prior literature. Simultaneity allows for post-hoc selection of \(\mathscr{M}\). If the sets of interest are specified a priori, it is possible to gain power by removing the simultaneity requirement. We present an approach to compute lower confidence bounds for the TDP when the set of null hypotheses is defined a priori. The proposed method determines the bounds using the exact distribution of the number of rejections of a step-up multiple testing procedure under independence assumptions. We assess robustness properties of our procedure and apply it to real data from the field of functional magnetic resonance imaging.
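The bounds above are driven by the number of rejections of a step-up procedure. As background only — the abstract does not name a specific procedure — here is a minimal sketch of counting rejections under the Benjamini–Hochberg step-up rule; the example p-values are invented:

```python
import numpy as np

def step_up_rejections(pvals, alpha=0.05):
    """Rejection count of the Benjamini-Hochberg step-up procedure:
    reject the R smallest p-values, where R is the largest k such
    that p_(k) <= alpha * k / m."""
    p = np.sort(np.asarray(pvals, dtype=float))
    m = len(p)
    below = np.nonzero(p <= alpha * np.arange(1, m + 1) / m)[0]
    return 0 if below.size == 0 else int(below[-1]) + 1

# Two clearly false nulls mixed with eight uniform (true-null) p-values
rng = np.random.default_rng(0)
pvals = np.concatenate([rng.uniform(size=8), [0.001, 0.002]])
R = step_up_rejections(pvals)
```

The distribution of `R` over repeated sampling is what an exact-distribution approach would characterize under independence.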
Testing overidentifying restrictions on high-dimensional instruments and covariates
Hongwei Shi, Xinyu Zhang, Xu Guo, Baihua He, Chenyang Wang
Pub Date: 2024-12-05 | DOI: 10.1007/s10463-024-00918-5 | Annals of the Institute of Statistical Mathematics, 77(2), 331–352

The validity of instruments plays a crucial role in addressing endogenous treatment effects, and instruments that violate the exclusion restriction are invalid. This paper concerns the overidentifying restrictions test for evaluating the validity of instruments in the high-dimensional instrumental variable model. We confront the challenge of high dimensionality by introducing a new testing procedure based on a U-statistic. Our procedure allows the number of instruments and covariates to grow exponentially with the sample size. Under some mild conditions, we establish the asymptotic normality of the proposed test statistic under the null and local alternative hypotheses. The effectiveness of the proposed method is clearly supported by simulations and by an application to a real dataset on trade and economic growth.
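The high-dimensional U-statistic test itself is beyond a short sketch, but the classical low-dimensional Sargan test illustrates what "testing overidentifying restrictions" means: with more instruments than endogenous regressors, the 2SLS residuals should be uncorrelated with the instruments if all instruments are valid. A textbook sketch, not the authors' procedure (the uncentered-R² form below assumes mean-zero data; the simulation constants are invented):

```python
import numpy as np
from scipy import stats

def sargan_test(y, X, Z):
    """Classical (low-dimensional) Sargan test: 2SLS residuals are
    regressed on all instruments; under instrument validity,
    J = n * R^2 is asymptotically chi-square with q - k degrees of
    freedom, where q = #instruments and k = #regressors."""
    n, k = X.shape
    q = Z.shape[1]
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)              # projection onto instruments
    beta = np.linalg.solve(X.T @ Pz @ X, X.T @ Pz @ y)  # 2SLS coefficients
    u = y - X @ beta                                    # 2SLS residuals
    J = n * (u @ Pz @ u) / (u @ u)                      # n * (uncentered) R^2
    return J, stats.chi2.sf(J, df=q - k)

# Toy simulation: two valid instruments for one regressor (df = 1)
rng = np.random.default_rng(1)
n = 500
z = rng.normal(size=(n, 2))
x = z @ np.array([1.0, 1.0]) + rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
J, pval = sargan_test(y, x[:, None], z)
```

The paper's contribution is precisely that this classical recipe breaks down when q grows exponentially with n.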
Comparison and equality of generalized \(\psi\)-estimators
Mátyás Barczy, Zsolt Páles
Pub Date: 2024-12-04 | DOI: 10.1007/s10463-024-00916-7 | Annals of the Institute of Statistical Mathematics, 77(2), 217–250

We solve the comparison problem for generalized \(\psi\)-estimators introduced by Barczy and Páles (arXiv:2211.06026, 2022). Namely, we derive several necessary and sufficient conditions under which a generalized \(\psi\)-estimator is less than or equal to another \(\psi\)-estimator for any sample. We also solve the corresponding equality problem for generalized \(\psi\)-estimators. Furthermore, we apply our results to some known statistical estimators, such as empirical expectiles, Mathieu-type estimators, and solutions of likelihood equations for the normal, a Beta-type, Gamma, Lomax (Pareto type II), lognormal, and Laplace distributions.
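For intuition: a \(\psi\)-estimator (Z-estimator) is a root \(\theta\) of the estimating equation \(\sum_i \psi(x_i, \theta) = 0\), and the comparison problem asks when one choice of \(\psi\) always produces a smaller root than another. A toy numeric sketch with generic Z-estimators — not the authors' generalized framework; the sample and the two \(\psi\) functions are invented:

```python
import numpy as np
from scipy.optimize import brentq

def psi_estimate(x, psi, lo=-100.0, hi=100.0):
    """Z-estimator: the root theta of sum_i psi(x_i, theta) = 0."""
    return brentq(lambda t: float(np.sum(psi(x, t))), lo, hi)

x = np.array([1.0, 2.0, 4.0, 9.0])
# psi(v, t) = v - t recovers the sample mean
mean_est = psi_estimate(x, lambda v, t: v - t)
# A bounded (Huber-type) psi downweights the outlying 9.0, so its
# root is smaller for this sample -- a comparison of two estimators
robust_est = psi_estimate(x, lambda v, t: np.clip(v - t, -1.5, 1.5))
```

The paper characterizes when such an ordering holds for every sample, not just one.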
Large-sample properties of multiple imputation estimators for parameters of logistic regression with covariates missing at random separately or simultaneously
Phuoc-Loc Tran, Shen-Ming Lee, Truong-Nhat Le, Chin-Shang Li
Pub Date: 2024-12-02 | DOI: 10.1007/s10463-024-00914-9 | Annals of the Institute of Statistical Mathematics, 77(2), 251–287

We examine the asymptotic properties of two multiple imputation (MI) estimators, given by Lee et al. (Computational Statistics, 38, 899–934, 2023), for the parameters of logistic regression with both sets of discrete or categorical covariates missing at random separately or simultaneously. The proposed estimated asymptotic variances of the two MI estimators address a limitation of Rubin's estimated variances (Rubin, 1987, Statistical Analysis with Missing Data, New York: Wiley), which underestimate the variances of the two MI estimators. Simulation results demonstrate that our two proposed MI methods outperform the complete-case, semiparametric inverse probability weighting, random forest MI using chained equations, and stochastic approximation of expectation-maximization methods. To illustrate the methodology's practical application, we provide a real data example from a survey conducted at the Feng Chia night market in Taichung City, Taiwan.
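For context, Rubin's combining rules pool m completed-data analyses into one point estimate and one variance; it is this total-variance formula, T = W + (1 + 1/m)B, whose underestimation the abstract refers to (the authors' corrected estimators are not reproduced here; the numbers below are invented):

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Rubin's rules: pooled point estimate qbar, and total variance
    T = W + (1 + 1/m) * B, where W is the mean within-imputation
    variance and B the between-imputation variance of the estimates."""
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = len(q)
    qbar = q.mean()
    W = u.mean()                 # within-imputation variance
    B = q.var(ddof=1)            # between-imputation variance
    T = W + (1.0 + 1.0 / m) * B
    return qbar, T

# Three imputed-data analyses of one coefficient
qbar, T = rubin_combine([0.9, 1.1, 1.0], [0.04, 0.05, 0.045])
```

Note that T always exceeds the average within-imputation variance W, reflecting the extra uncertainty from imputation.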
Random mixture Cox point processes
A. C. Micheas
Pub Date: 2024-11-22 | DOI: 10.1007/s10463-024-00915-8 | Annals of the Institute of Statistical Mathematics, 77(2), 289–330

We introduce and study a new class of Cox point processes based on random mixture models of exponential family components for the intensity function of the underlying Poisson process. We investigate theoretical properties of the proposed probability distributions of the point process, and provide procedures for parameter estimation using classical and Bayesian approaches. We illustrate the richness of the new models through examples, simulations, and real data applications.
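A Cox process is a Poisson process whose intensity function is itself random. A minimal simulation sketch on the unit square via Lewis–Shedler thinning, using a toy two-component mixture intensity — the Gaussian-bump form and all constants are illustrative assumptions, not the paper's exponential-family mixtures:

```python
import numpy as np

def simulate_cox_realization(intensity, lam_max, rng):
    """Given one realization of the random intensity, simulate the
    conditional Poisson process on [0, 1]^2 by Lewis-Shedler thinning
    of a homogeneous process with dominating rate lam_max."""
    n = rng.poisson(lam_max)                    # dominating homogeneous process
    pts = rng.uniform(size=(n, 2))
    keep = rng.uniform(size=n) < intensity(pts) / lam_max
    return pts[keep]

rng = np.random.default_rng(42)
# Randomly drawn mixture weight and bump location: this randomness of
# the intensity is what makes the resulting process a Cox process.
w = rng.dirichlet([2.0, 2.0])
c = rng.uniform(size=2)
intensity = lambda p: 200.0 * (w[0] + w[1] * np.exp(-10.0 * ((p - c) ** 2).sum(axis=1)))

pts = simulate_cox_realization(intensity, lam_max=400.0, rng=rng)
```

Thinning is exact as long as `lam_max` dominates the realized intensity everywhere (here the intensity is at most 200).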
Model free feature screening for large scale and ultrahigh dimensional survival data
Yingli Pan, Haoyu Wang, Zhan Liu
Pub Date: 2024-10-19 | DOI: 10.1007/s10463-024-00912-x | Annals of the Institute of Statistical Mathematics, 77(1), 155–190

This paper provides a novel perspective on feature screening in the analysis of high-dimensional right-censored large-p-large-N survival data. The research introduces a distributed feature screening method known as Aggregated Distance Correlation Screening (ADCS). The proposed screening framework expresses the distance correlation measure as a function of multiple component parameters, each of which can be estimated in a distributed manner using a natural U-statistic from data segments. By aggregating the component estimates, a final correlation estimate is obtained, facilitating feature screening. Importantly, this approach does not necessitate any model specification for responses or predictors and is effective with heavy-tailed data. The study establishes the consistency of the proposed aggregated correlation estimator \(\widetilde{\omega}_j\) under mild conditions and demonstrates the sure screening property of the ADCS. Empirical results from both simulated and real datasets confirm the efficacy and practicality of the proposed ADCS approach.
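To convey the divide-and-aggregate idea, the sketch below computes the sample distance correlation on data segments and averages the results. This is a deliberate simplification: ADCS aggregates the component U-statistic estimates before forming the correlation, rather than averaging per-segment correlations as done here.

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation of two 1-d samples, computed from
    double-centered pairwise distance matrices (V-statistic form)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.abs(x[:, None] - x[None, :])
    B = np.abs(y[:, None] - y[None, :])
    A = A - A.mean(0) - A.mean(1)[:, None] + A.mean()   # double centering
    B = B - B.mean(0) - B.mean(1)[:, None] + B.mean()
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

def aggregated_dcor(x, y, n_segments):
    """Naive divide-and-aggregate: average the per-segment estimates."""
    pairs = zip(np.array_split(x, n_segments), np.array_split(y, n_segments))
    return float(np.mean([distance_correlation(a, b) for a, b in pairs]))

x = np.arange(40.0)
full = distance_correlation(x, 2.0 * x + 1.0)   # 1 for perfectly linear dependence
agg = aggregated_dcor(x, 2.0 * x + 1.0, n_segments=4)
```

Each segment only needs its own O(n²) distance matrices, which is the point of screening in the large-N setting.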
Improved confidence intervals for nonlinear mixed-effects and nonparametric regression models
Nan Zheng, Noel Cadigan
Pub Date: 2024-09-24 | DOI: 10.1007/s10463-024-00909-6 | Annals of the Institute of Statistical Mathematics, 77(1), 105–126

Statistical inference for high-dimensional parameters (HDPs) can leverage their intrinsic correlations, as spatially or temporally close parameters tend to have similar values. This is why nonlinear mixed-effects models (NMMs) are commonly used for HDPs. Conversely, in many practical applications, the random effects (REs) in NMMs are correlated HDPs that should remain constant during repeated sampling for frequentist inference. In both scenarios, the inference should be conditional on the REs, instead of marginal inference that integrates out the REs. We summarize recent theory of conditional inference for NMMs, and then propose a bias-corrected RE predictor and confidence interval (CI). We also extend this methodology to accommodate the case where some REs are not associated with data. Simulation studies indicate that our new approach leads to a substantial improvement in the conditional coverage rate of RE CIs, including CIs for smooth functions in generalized additive models, compared to the existing method based on marginal inference.
Information projection approach to smoothed propensity score weighting for handling selection bias under missing at random
Hengfang Wang, Jae Kwang Kim
Pub Date: 2024-09-21 | DOI: 10.1007/s10463-024-00913-w | Annals of the Institute of Statistical Mathematics, 77(1), 127–153

Propensity score weighting is widely used to correct the selection bias in samples with missing data. The propensity score function is often developed using a model for the response probability, which completely ignores the outcome regression model. In this paper, we explore an alternative approach, developing smoothed propensity score weights that provide more efficient estimation by removing unnecessary auxiliary variables from the propensity score model. The smoothed propensity score function is obtained by applying the information projection of the original propensity score function onto the space that satisfies the moment conditions on the balancing scores obtained from the outcome regression model. By including the covariates for the outcome regression models only in the density ratio model, we can achieve an efficiency gain. Penalized regression is used to identify important covariates. Some limited simulation studies are presented to compare the proposed approach with existing methods.
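As a baseline for what propensity score weighting does, here is a minimal sketch of plain inverse-probability weighting under MAR with a logistic response model — the standard estimator the abstract starts from, not its information-projection smoothing; all data-generating constants are invented:

```python
import numpy as np

def fit_logistic(X, r, iters=25):
    """Newton-Raphson fit of a logistic model for the response
    indicator r (1 = observed); X should contain an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        beta += np.linalg.solve(X.T @ ((p * (1 - p))[:, None] * X), X.T @ (r - p))
    return beta

def ipw_mean(y, X, r):
    """Hajek-type inverse-probability-weighted mean of y under MAR."""
    pi = 1.0 / (1.0 + np.exp(-X @ fit_logistic(X, r)))
    return np.sum(r * y / pi) / np.sum(r / pi)

rng = np.random.default_rng(7)
n = 2000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)          # E[y] = 1
pi_true = 1.0 / (1.0 + np.exp(-(0.5 + x)))      # response depends on x only: MAR
r = (rng.uniform(size=n) < pi_true).astype(float)
y_obs = np.where(r == 1.0, y, 0.0)              # missing values are never used
X = np.column_stack([np.ones(n), x])
naive = y[r == 1.0].mean()                      # complete-case mean, biased upward
est = ipw_mean(y_obs, X, r)
```

The complete-case mean over-represents large x (and hence large y); the weighted estimator removes that bias. The paper's smoothing then targets the efficiency of such weights.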
Estimation of value-at-risk by \(L^p\) quantile regression
Peng Sun, Fuming Lin, Haiyang Xu, Kaizhi Yu
Pub Date: 2024-09-19 | DOI: 10.1007/s10463-024-00911-y | Annals of the Institute of Statistical Mathematics, 77(1), 25–59

Exploring more accurate estimates of financial value at risk (VaR) has always been an important issue in applied statistics. To this end, either quantile or expectile regression methods are widely employed at present, but an accumulating body of research indicates that \(L^p\) quantile regression outweighs both quantile and expectile regression in many aspects. In view of this, the paper extends \(L^p\) quantile regression to a general classical nonlinear conditional autoregressive model and proposes a new model called the conditional \(L^p\) quantile nonlinear autoregressive regression model (CAR-\(L^p\)-quantile model for short). Limit theorems for the regression estimators are proved under mild conditions, and algorithms are provided for obtaining the parameter estimates and the optimal value of p. A simulation study of estimation quality is given. Then, a CLVaR method for calculating VaR based on the CAR-\(L^p\)-quantile model is elaborated. Finally, a real data analysis is conducted to illustrate the virtues of our proposed methods.
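The \(L^p\) quantile interpolates between the ordinary quantile (p = 1) and the expectile (p = 2): it minimizes an asymmetric power loss. A minimal sketch of this loss for an i.i.d. sample — the paper's CAR-\(L^p\)-quantile model is autoregressive and far richer than this:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lp_quantile(x, tau, p):
    """L^p quantile of a sample: the minimizer t of the asymmetric
    power loss sum_i |tau - 1{x_i < t}| * |x_i - t|^p.
    p = 1 gives the ordinary tau-quantile, p = 2 the tau-expectile."""
    x = np.asarray(x, dtype=float)
    def loss(t):
        u = x - t
        w = np.where(u < 0.0, 1.0 - tau, tau)
        return np.sum(w * np.abs(u) ** p)
    return minimize_scalar(loss, bounds=(x.min(), x.max()), method="bounded").x

x = np.random.default_rng(3).normal(size=500)
med = lp_quantile(x, 0.5, 1.0)   # ~ the sample median
mu = lp_quantile(x, 0.5, 2.0)    # tau = 0.5, p = 2: squared loss, so the mean
```

The loss is convex in t for any p >= 1, so a bounded one-dimensional minimizer suffices here; choosing p between 1 and 2 trades the robustness of quantiles against the efficiency of expectiles.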
Simplified quasi-likelihood analysis for a locally asymptotically quadratic random field
Nakahiro Yoshida
Pub Date: 2024-09-14 | DOI: 10.1007/s10463-024-00907-8 | Annals of the Institute of Statistical Mathematics, 77(1), 1–24

The IHK program is a general framework in asymptotic decision theory, introduced by Ibragimov and Hasminskii and extended to semimartingales by Kutoyants. The quasi-likelihood analysis (QLA) asserts that a polynomial type large deviation inequality is always valid if the quasi-likelihood random field is asymptotically quadratic and if a key index reflecting identifiability is non-degenerate. As a result, following the IHK program, the QLA gives a way to make inference for various nonlinear stochastic processes. This paper provides a reformed and simplified version of the QLA and improves accessibility to the theory. As an example of the advantages of the scheme, the user can obtain asymptotic properties of the quasi-Bayesian estimator by verifying only the non-degeneracy of the key index.