Clustering is a technique used to partition a dataset into groups of similar elements. In addition to traditional clustering methods, clustering for probability density functions (CDF) has been studied to capture data uncertainty. Within CDF, automatic clustering refers to techniques that determine the number of clusters automatically. However, current automatic clustering algorithms update the new probability density function (pdf) f_i(t) based on the weighted mean of all previous pdfs f_j(t − 1), j = 1, 2, …, N, resulting in slow convergence. This paper proposes an efficient automatic clustering algorithm for pdfs. In the proposed approach, the update of f_i(t) is based on the weighted mean of {f_1(t), f_2(t), …, f_{i−1}(t), f_i(t − 1), f_{i+1}(t − 1), …, f_N(t − 1)}, where N is the number of pdfs and i = 1, 2, …, N. This technique incorporates the most recently updated pdfs, leading to faster convergence. The paper also pioneers the application of CDF algorithms to surface image recognition. Numerical examples demonstrate that the proposed method converges rapidly within the first few iterations and outperforms other state-of-the-art automatic clustering methods in terms of the Adjusted Rand Index (ARI) and the Normalized Mutual Information (NMI). The proposed algorithm is also competitive when clustering material images contaminated by noise. These results highlight the applicability of the proposed method to surface image recognition.
{"title":"An efficient automatic clustering algorithm for probability density functions and its applications in surface material classification","authors":"Thao Nguyen-Trang, Tai Vo-Van, Ha Che-Ngoc","doi":"10.1111/stan.12315","DOIUrl":"https://doi.org/10.1111/stan.12315","url":null,"abstract":"Clustering is a technique used to partition a dataset into groups of similar elements. In addition to traditional clustering methods, clustering for probability density functions (CDF) has been studied to capture data uncertainty. In CDF, automatic clustering is a clever technique that can determine the number of clusters automatically. However, current automatic clustering algorithms update the new probability density function (pdf) fi(t) based on the weighted mean of all previous pdfs fj(t − 1), j = 1, 2, …, N, resulting in slow convergence. This paper proposes an efficient automatic clustering algorithm for pdfs. In the proposed approach, the update of fi(t) is based on the weighted mean of {f1(t), f2(t),…, fi − 1(t), fi(t − 1), fi+1(t − 1),…,fN(t − 1)}, where N is the number of pdfs and i = 1,2,…, N. This technique allows for the incorporation of recently updated pdfs, leading to faster convergence. This paper also pioneers the applications of certain CDF algorithms in the field of surface image recognition. The numerical examples demonstrate that the proposed method can result in a rapid convergence at some early iterations. It also outperforms other state‐of‐the‐art automatic clustering methods in terms of the Adjusted Rand Index (ARI) and the Normalized Mutual Information (NMI). Additionally, the proposed algorithm proves to be competitive when clustering material images contaminated by noise. These results highlight the applicability of the proposed method in the problem of surface image recognition.This article is protected by copyright. All rights reserved.","PeriodicalId":51178,"journal":{"name":"Statistica Neerlandica","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2023-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83291722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-08-01. Epub Date: 2023-01-19. DOI: 10.1111/stan.12286.
Pei Wang, Erin L Abner, Changrui Liu, David W Fardo, Frederick A Schmitt, Gregory A Jicha, Linda J Van Eldik, Richard J Kryscio
Finite Markov chains with absorbing states are popular tools for analyzing longitudinal data with categorical responses. The one-step transition probabilities can be defined in terms of fixed and random effects, but these effects are difficult to estimate because of the large number of unknown parameters. In this article we propose a three-step estimation method. In the first step, the fixed effects are estimated using a marginal likelihood function; in the second step, the random effects are estimated by substituting the estimated fixed effects into a joint likelihood function defined as an h-likelihood; and in the third step, the covariance matrix of the vector of random effects is estimated from the Hessian matrix of this likelihood function. An application to longitudinal cognitive data illustrates the method.
{"title":"Estimating random effects in a finite Markov chain with absorbing states: Application to cognitive data.","authors":"Pei Wang, Erin L Abner, Changrui Liu, David W Fardo, Frederick A Schmitt, Gregory A Jicha, Linda J Van Eldik, Richard J Kryscio","doi":"10.1111/stan.12286","DOIUrl":"10.1111/stan.12286","url":null,"abstract":"<p><p>Finite Markov chains with absorbing states are popular tools for analyzing longitudinal data with categorical responses. The one step transition probabilities can be defined in terms of fixed and random effects but it is difficult to estimate these effects due to many unknown parameters. In this article we propose a three-step estimation method. In the first step the fixed effects are estimated by using a marginal likelihood function, in the second step the random effects are estimated after substituting the estimated fixed effects into a joint likelihood function defined as a h-likelihood, and in the third step the covariance matrix for the vector of random effects is estimated using the Hessian matrix for this likelihood function. An application involving an analysis of longitudinal cognitive data is used to illustrate the method.</p>","PeriodicalId":51178,"journal":{"name":"Statistica Neerlandica","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11415262/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87630240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sheng Li, Wen Wang, Menghan Yao, Junyu Wang, Qianqian Du, Xuelin Li, Xinyue Tian, Jing Zeng, Ying Deng, Zhang Tao, F. Yin, Yue Ma
The Poisson ridge estimator (PRE) is a commonly used method to address multicollinearity in Poisson regression (PR). However, PRE shrinks the parameters toward zero, which can contradict the real association; in such cases, PRE is an inadequate remedy for multicollinearity. In this work, we propose a new estimator, the Poisson average maximum likelihood-centered penalized estimator (PAMLPE), which shrinks the parameters toward the weighted average of the maximum likelihood estimators. We conducted a simulation study and a case study to compare PAMLPE with existing estimators in terms of mean squared error (MSE) and predictive mean squared error (PMSE). The results suggest that PAMLPE obtains smaller MSE and PMSE (i.e., more accurate estimates) than the Poisson ridge estimator, the Poisson Liu estimator, and the Poisson K-L estimator when the true β's have the same sign and small variation. We therefore recommend PAMLPE for addressing multicollinearity in PR when the signs of the true β's are known in advance to be identical.
{"title":"Poisson average maximum likelihood‐centred penalized estimator: A new estimator to better address multicollinearity in Poisson regression","authors":"Sheng Li, Wen Wang, Menghan Yao, Junyu Wang, Qianqian Du, Xuelin Li, Xinyue Tian, Jing Zeng, Ying Deng, Zhang Tao, F. Yin, Yue Ma","doi":"10.1111/stan.12313","DOIUrl":"https://doi.org/10.1111/stan.12313","url":null,"abstract":"The Poisson ridge estimator (PRE) is a commonly used parameter estimation method to address multicollinearity in Poisson regression (PR). However, PRE shrinks the parameters toward zero, contradicting the real association. In such cases, PRE tends to become an insufficient solution for multicollinearity. In this work, we proposed a new estimator called the Poisson average maximum likelihood‐centered penalized estimator (PAMLPE), which shrinks the parameters toward the weighted average of the maximum likelihood estimators. We conducted a simulation study and case study to compare PAMLPE with existing estimators in terms of mean squared error (MSE) and predictive mean squared error (PMSE). These results suggest that PAMLPE can obtain smaller MSE and PMSE (i.e., more accurate estimates) than the Poisson ridge estimator, Poisson Liu estimator, and Poisson K‐L estimator when the true β$$ beta $$ s have the same sign and small variation. Therefore, we recommend using PAMLPE to address multicollinearity in PR when the signs of the true β$$ beta $$ s are known to be identical in advance.","PeriodicalId":51178,"journal":{"name":"Statistica Neerlandica","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2023-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79135996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A variety of studies have investigated stock price forecasting, using tools ranging from statistical techniques to quantitative methods. So far, however, very little research has addressed forecasting the stock markets of the Gulf countries: Saudi Arabia, the United Arab Emirates, Oman, Kuwait, Bahrain, and Qatar. Our approach is to predict the market indices of the Gulf countries using Long Short-Term Memory (LSTM) networks. We tuned the hyperparameters of the LSTM model using optimization methods such as Grid Search and Bayesian Optimization with a Gaussian Process, identified the best-suited hyperparameters, and applied the resulting LSTM model to predict the indices using data from the last 20 years.
{"title":"A case study of Gulf Securities Market in the last 20 years: A Long Short‐Term Memory approach","authors":"Abhibasu Sen, Karabi Dutta Choudhury","doi":"10.1111/stan.12309","DOIUrl":"https://doi.org/10.1111/stan.12309","url":null,"abstract":"Various researches have been conducted on forecasting stock prices. Several tools ranging from statistical techniques to quantitative methods have been used by researchers to forecast the market. But so far, very little research has been done on forecasting the stock markets of the Gulf countries such as Saudi Arabia, United Arab Emirates, Oman, Kuwait, Bahrain, and Qatar. Our approach is to predict the market indices of the Gulf countries using Long Short‐Term Memory (LSTM) techniques. Thereafter, we optimized the hyperparameters of the LSTM technique using various optimization methods such as Grid Search and Bayesian Optimization with Gaussian Process and found out the best‐suited hyperparameter for the LSTM model. We tried the LSTM method for predicting the indices using data from the last twenty years.","PeriodicalId":51178,"journal":{"name":"Statistica Neerlandica","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2023-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91252395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Asmussen and Lehtomaa [Distinguishing log-concavity from heavy tails. Risks 5(10), 2017] introduced a function g that can distinguish between log-convex and log-concave tail behaviour of distributions, and proposed a randomized estimator for g. In this paper, we show that g can also be seen as a tool for detecting gamma distributions, or distributions with a gamma tail. We construct a more efficient estimator ĝ_n based on U-statistics, propose several estimators of the (asymptotic) variance of ĝ_n, and study their performance by simulation. Finally, the methods are applied to several datasets of daily precipitation.
{"title":"A gamma tail statistic and its asymptotics","authors":"Toshiya Iwashita, B. Klar","doi":"10.1111/stan.12316","DOIUrl":"https://doi.org/10.1111/stan.12316","url":null,"abstract":"Asmussen and Lehtomaa [Distinguishing log‐concavity from heavy tails. Risks 5(10), 2017] introduced an interesting function g which is able to distinguish between log‐convex and log‐concave tail behaviour of distributions, and proposed a randomized estimator for g. In this paper, we show that g can also be seen as a tool to detect gamma distributions or distributions with gamma tail. We construct a more efficient estimator ĝn based on U‐statistics, propose several estimators of the (asymptotic) variance of ĝn, and study their performance by simulations. Finally, the methods are applied to several data sets of daily precipitation.This article is protected by copyright. All rights reserved.","PeriodicalId":51178,"journal":{"name":"Statistica Neerlandica","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2023-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74111110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we derive the copula-graphic estimator (Zheng and Klein) of marginal survival functions under Archimedean copula models from competing risks data subject to univariate right censoring, and we prove its uniform consistency and asymptotic properties. We then propose a novel parameter estimation method for semi-competing risks data under Archimedean copula models and, based on this estimation strategy, a new model selection procedure. We also describe an easy way to accommodate covariates in the analysis. Simulation studies show that our parameter estimate outperforms the estimator proposed by Lakhal, Rivest, and Abdous for the Hougaard model, and that the model selection procedure works well. We fit a leukemia dataset with our model and close with some discussion.
{"title":"The analysis of semi‐competing risks data using Archimedean copula models","authors":"Antai Wang, Ziyan Guo, Yilong Zhang, Jihua Wu","doi":"10.1111/stan.12311","DOIUrl":"https://doi.org/10.1111/stan.12311","url":null,"abstract":"In this paper, we derive the copula‐graphic estimator (Zheng and Klein) for marginal survival functions using Archimedean copula models based on competing risks data subject to univariate right censoring and prove its uniform consistency and asymptotic properties. We then propose a novel parameter estimation method based on the semi‐competing risks data using Archimedean copula models. Based on our estimation strategy, we propose a new model selection procedure. We also describe an easy way to accommodate possible covariates in data analysis using our strategies. Simulation studies have shown that our parameter estimate outperforms the estimator proposed by Lakhal, Rivest and Abdous for the Hougaard model and the model selection procedure works quite well. We fit a leukemia dataset using our model and end our paper with some discussion.","PeriodicalId":51178,"journal":{"name":"Statistica Neerlandica","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73218378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The ordinary least squares estimator (OLSE) is widely used to estimate parameters in regression analysis. In practice, however, the assumptions of regression analysis are often not met; the most common violations are outliers and multicollinearity, and as a result OLSE loses efficiency. Alternative estimators have therefore been proposed: robust estimators for the outlier problem and biased estimators for the multicollinearity problem. Since these problems do not always occur separately in real-world datasets, robust biased estimators have been proposed to solve them simultaneously. This study proposes a Liu-type generalized M-estimator as an alternative to the robust biased estimators in the literature, aiming for more efficient results. The proposed estimator is effective when outliers and multicollinearity occur in both the dependent and independent variables. It is compared theoretically with other estimators in the literature, and a Monte Carlo simulation and a real-data example are used to compare its performance with that of existing estimators.
{"title":"Robust Liu‐type Estimator based on GM estimator","authors":"Melike Işilar, Y. M. Bulut","doi":"10.1111/stan.12310","DOIUrl":"https://doi.org/10.1111/stan.12310","url":null,"abstract":"Ordinary Least Squares Estimator (OLSE) is widely used to estimate parameters in regression analysis. In practice, the assumptions of regression analysis are often not met. The most common problems that break these assumptions are outliers and multicollinearity problems. As a result of these problems, OLSE loses efficiency. Therefore, alternative estimators to OLSE have been proposed to solve these problems. Robust estimators are often used to solve the outlier problem, and biased estimators are often used to solve the multicollinearity problem. These problems do not always occur individually in the real‐world dataset. Therefore, robust biased estimators are proposed for simultaneous solutions to these problems. The aim of this study is to propose Liu‐type Generalized M Estimator as an alternative to the robust biased estimators available in the literature to obtain more efficient results. This estimator gives effective results in the case of outlier and multicollinearity in both dependent and independent variables. The proposed estimator is theoretically compared with other estimators available in the literature. In addition, Monte Carlo simulation and real dataset example are performed to compare the performance of the estimator with existing estimators.","PeriodicalId":51178,"journal":{"name":"Statistica Neerlandica","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2023-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73854271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we discuss inference for the competing risks model when the failure times follow the Chen distribution. The two causes of failure, which are only partially observed, are assumed to be independent. The existence and uniqueness of the maximum likelihood estimates of the model parameters are established under generalized progressive hybrid censoring. We also discuss classical and Bayesian inference for the model parameters under both restricted and nonrestricted parameters. The performance of classical point and interval estimators is compared with that of Bayesian point and interval estimators in an extensive simulation study. For illustration, a real-life example is discussed, and some concluding remarks on the presented model are made.
{"title":"On partially observed competing risks model for Chen distribution under generalized progressive hybrid censoring","authors":"Kundan Singh, Amulya Kumar Mahto, Y. Tripathi","doi":"10.1111/stan.12308","DOIUrl":"https://doi.org/10.1111/stan.12308","url":null,"abstract":"In this paper, we discuss the inference for the competing risks model when the failure times follow Chen distribution. With assumption of two causes of failures, which are partially observed, are considered as independent. The existence and uniqueness of maximum likelihood estimates for model parameters are obtained under generalized progressive hybrid censoring. Also, we discussed the classical and Bayesian inferences of the model parameters under the assumption of restricted and nonrestricted parameters. Performance of classical point and interval estimators are compared with Bayesian point and interval estimators by conducting extensive simulation study. In addition to that, for illustration purpose, a real life example is discussed. Finally, some concluding remarks, regarding the presented model, are made.","PeriodicalId":51178,"journal":{"name":"Statistica Neerlandica","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2023-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83140976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jumps in the paths of efficient asset prices have important economic implications. Motivated by the problem of testing for jumps with noisy high-frequency data, we develop a novel spot volatility estimator, obtained by minimizing a sum of Huber loss functions, and use it as an ingredient for jump detection. This type of estimator is uniformly consistent for the spot volatilities of the efficient price at numerous time points. We further establish the consistency of the proposed jump test based on the properties of the new spot volatility estimator. Monte Carlo simulations show that, in finite samples, the proposed volatility estimator and test perform favorably compared with some competitors. We also illustrate our methodology with the stock prices of Apple and Microsoft.
{"title":"Testing for jumps with robust spot volatility estimators","authors":"Yucheng Sun","doi":"10.1111/stan.12306","DOIUrl":"https://doi.org/10.1111/stan.12306","url":null,"abstract":"Jumps in the paths of efficient asset prices have important economic implications. Motivated by the issue of testing for jumps based on noisy high‐frequency data, we develop a novel spot volatility estimator, which is obtained by minimizing the sum of some Huber loss functions, and use it as an ingredient for jump detection. This type of estimators is uniformly consistent in estimating the spot volatilities of the efficient price at numerous time points. We further demonstrate the consistency of the proposed jump test based on the property of the novel spot volatility estimator. We show that in finite samples, the proposed volatility estimator and the test perform favorably compared to some competitors through Monte Carlo simulations. We also illustrate our methodology with the stock prices of Apple and Microsoft.","PeriodicalId":51178,"journal":{"name":"Statistica Neerlandica","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2023-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89970698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider designs with t treatments, the ith level of which has n_i observations. Four cases are examined: treatment levels ordered and not, and the design balanced (all n_i equal) and not. A general construction is given that takes observations, typically treatment sums or treatment rank sums, constructs a simple quadratic form, and expresses it as a sum of squares of orthogonal contrasts. For ordered treatment levels, the Kruskal-Wallis, Friedman, and Durbin tests are recovered by this construction. A dataset with a supplemented balanced design, which is unbalanced in our terminology, is analyzed. The construction also applies when treatment levels are not ordered; there we focus on Helmert contrasts.
{"title":"Orthogonal Contrasts for both Balanced and Unbalanced Designs and both Ordered and Unordered Treatments","authors":"J. Rayner, G. Livingston","doi":"10.1111/stan.12305","DOIUrl":"https://doi.org/10.1111/stan.12305","url":null,"abstract":"We consider designs with t treatments, the ith level of which has ni observations. Four cases are examined: treatment levels both ordered and not, and the design balanced, with all ni equal, and not. A general construction is given that takes observations, typically treatment sums or treatment rank sums, constructs a simple quadratic form and expresses it as a sum of squares of orthogonal contrasts. For the case of ordered treatment levels, the Kruskal–Wallis, Friedman and Durbin tests are recovered by this construction. A dataset where the design is the supplemented balanced, which is an unbalanced design in our terminology, is analyzed. When treatment levels are not ordered the construction also applies. We then focus on Helmert contrasts.","PeriodicalId":51178,"journal":{"name":"Statistica Neerlandica","volume":null,"pages":null},"PeriodicalIF":1.5,"publicationDate":"2023-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87060789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}