Deep learning has been at the center of analytics in recent years due to its impressive empirical success in analyzing complex data objects. Despite this success, most existing tools behave like black-box machines, hence the increasing interest in interpretable, reliable, and robust deep learning models applicable to a broad class of applications. Feature-selected deep learning has emerged as a promising tool in this realm. However, recent developments do not accommodate ultrahigh-dimensional and highly correlated features or high noise levels. In this article, we propose a novel screening and cleaning method, aided by deep learning, for a data-adaptive multi-resolutional discovery of highly correlated predictors with a controlled error rate. Extensive empirical evaluations over a wide range of simulated scenarios and several real datasets demonstrate the effectiveness of the proposed method in achieving high power while keeping the false discovery rate at a minimum.
{"title":"Error-controlled feature selection for ultrahigh-dimensional and highly correlated feature space using deep learning","authors":"Arkaprabha Ganguli, Tapabrata Maiti, David Todem","doi":"10.1002/sam.11664","DOIUrl":"https://doi.org/10.1002/sam.11664","url":null,"abstract":"Deep learning has been at the center of analytics in recent years due to its impressive empirical success in analyzing complex data objects. Despite this success, most existing tools behave like black-box machines, thus the increasing interest in interpretable, reliable, and robust deep learning models applicable to a broad class of applications. Feature-selected deep learning has emerged as a promising tool in this realm. However, the recent developments do not accommodate ultrahigh-dimensional and highly correlated features or high noise levels. In this article, we propose a novel screening and cleaning method with the aid of deep learning for a data-adaptive multi-resolutional discovery of highly correlated predictors with a controlled error rate. Extensive empirical evaluations over a wide range of simulated scenarios and several real datasets demonstrate the effectiveness of the proposed method in achieving high power while keeping the false discovery rate at a minimum.","PeriodicalId":48684,"journal":{"name":"Statistical Analysis and Data Mining","volume":"272 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140046612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Informative cluster size (ICS) is a phenomenon in which cluster size is related to the outcome. While multistate models can be applied to characterize the unit-level transition process for clustered interval-censored data, there is a research gap in addressing ICS within this framework. We propose two extensions of the multistate model that account for ICS to make marginal inference: one incorporating within-cluster resampling and another constructing cluster-weighted score functions. We evaluate the performance of the proposed methods through simulation studies and apply them to the Veterans Affairs Dental Longitudinal Study (VADLS) to understand the effect of risk factors on periodontal disease progression. ICS occurs frequently in dental data, particularly in the study of periodontal disease, as people with fewer teeth due to the disease are more susceptible to disease progression. According to the simulation results, the mean estimates of the parameters obtained from the proposed methods are close to the true values, whereas methods that ignore ICS can lead to substantial bias. Our proposed methods for clustered multistate models appropriately take ICS into account when making marginal inference about a typical unit from a randomly sampled cluster.
{"title":"Marginal clustered multistate models for longitudinal progressive processes with informative cluster size","authors":"Sean Xinyang Feng, Aya A. Mitani","doi":"10.1002/sam.11668","DOIUrl":"https://doi.org/10.1002/sam.11668","url":null,"abstract":"Informative cluster size (ICS) is a phenomenon where cluster size is related to the outcome. While multistate models can be applied to characterize the unit‐level transition process for clustered interval‐censored data, there is a research gap addressing ICS within this framework. We propose two extensions of multistate model that account for ICS to make marginal inference: one by incorporating within‐cluster resampling and another by constructing cluster‐weighted score functions. We evaluate the performances of the proposed methods through simulation studies and apply them to the Veterans Affairs Dental Longitudinal Study (VADLS) to understand the effect of risk factors on periodontal disease progression. ICS occurs frequently in dental data, particularly in the study of periodontal disease, as people with fewer teeth due to the disease are more susceptible to disease progression. According to the simulation results, the mean estimates of the parameters obtained from the proposed methods are close to the true values, but methods that ignore ICS can lead to substantial bias. Our proposed methods for clustered multistate model are able to appropriately take ICS into account when making marginal inference of a typical unit from a randomly sampled cluster.","PeriodicalId":48684,"journal":{"name":"Statistical Analysis and Data Mining","volume":"13 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140033736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing catastrophe loss models is a challenging problem in the insurance industry. In the context of Pareto-type distributions, measuring risk at the extreme right tail has become a major focus of academic research. The quantile and expectile of a distribution are useful descriptors of its tail, in the same way that the median and mean describe its central behavior. In this article, a novel two-step extrapolation-insertion method is introduced, and its advantages of lower bias and variance are established theoretically through asymptotic normality; the method modifies the existing far-right-tail numerical model using the risk measures of expectile and Expected Shortfall (ES). In addition, another approach to obtaining the ES is proposed based on the fitted extreme distribution and is shown to have superior unbiasedness properties. Combining these two methods yields numerical upper and lower bounds for the quantile-based ES commonly used in insurance. Numerical simulations and an empirical analysis of Danish reinsurance claim data indicate that these methods offer high prediction accuracy in catastrophe risk management applications.
{"title":"A novel two‐step extrapolation‐insertion risk model based on the Expectile under the Pareto‐type distribution","authors":"Ziwen Geng","doi":"10.1002/sam.11665","DOIUrl":"https://doi.org/10.1002/sam.11665","url":null,"abstract":"The catastrophe loss model developed is a challenging problem in the insurance industry. In the context of Pareto‐type distribution, measuring risk at the extreme right tail has become a major focus for academic research. The quantile and Expectile of distribution are found to be useful descriptors of its tail, in the same way as the median and mean are related to its central behavior. In this article, a novel two‐step extrapolation‐insertion method is introduced and proved its advantages of less bias and variance theoretically through asymptotic normality by modifying the existing far‐right tail numerical model using the risk measures of Expectile and Expected Shortfall (ES). In addition, another solution to obtain the ES is proposed based on the fitted extreme distribution, which is demonstrated to have superior unbiased statistical properties. Uniting these two methods provides the numerical interval upper and lower bounds for capturing the real quantile‐based ES commonly used in insurance. The numerical simulation and the empirical analysis results of Danish reinsurance claim data indicate that these methods offer high prediction accuracy in the applications of catastrophe risk management.","PeriodicalId":48684,"journal":{"name":"Statistical Analysis and Data Mining","volume":"37 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139953467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nonprobability samples, especially web survey data, have become available in many different fields. However, nonprobability samples suffer from selection bias, which yields biased estimates. Moreover, missingness, especially nonignorable missingness, may also be encountered in nonprobability samples. Thus, it is challenging to make inferences from nonprobability samples with nonignorable missingness. In this article, we propose a Bayesian approach to making inferences about the population based on nonprobability samples with nonignorable missingness. In our method, separate logistic regression models are employed to estimate the selection probabilities and the response probabilities, and a superpopulation model is used to explain the relationship between the study variable and covariates. Further, Bayesian and approximate Bayesian methods are proposed to estimate the response model parameters and the superpopulation model parameters, respectively. Specifically, the estimating functions for the response model and superpopulation model parameters are used to derive the approximate posterior distribution in superpopulation model estimation. Simulation studies are conducted to investigate the finite-sample performance of the proposed method. Data from the Pew Research Center and the Behavioral Risk Factor Surveillance System are used to show the better performance of our proposed method over other approaches.
{"title":"Bayesian inference for nonprobability samples with nonignorable missingness","authors":"Zhan Liu, Xuesong Chen, Ruohan Li, Lanbao Hou","doi":"10.1002/sam.11667","DOIUrl":"https://doi.org/10.1002/sam.11667","url":null,"abstract":"Nonprobability samples, especially web survey data, have been available in many different fields. However, nonprobability samples suffer from selection bias, which will yield biased estimates. Moreover, missingness, especially nonignorable missingness, may also be encountered in nonprobability samples. Thus, it is a challenging task to make inference from nonprobability samples with nonignorable missingness. In this article, we propose a Bayesian approach to infer the population based on nonprobability samples with nonignorable missingness. In our method, different Logistic regression models are employed to estimate the selection probabilities and the response probabilities; the superpopulation model is used to explain the relationship between the study variable and covariates. Further, Bayesian and approximate Bayesian methods are proposed to estimate the response model parameters and the superpopulation model parameters, respectively. Specifically, the estimating functions for the response model parameters and superpopulation model parameters are utilized to derive the approximate posterior distribution in superpopulation model estimation. Simulation studies are conducted to investigate the finite sample performance of the proposed method. The data from the Pew Research Center and the Behavioral Risk Factor Surveillance System are used to show better performance of our proposed method over the other approaches.","PeriodicalId":48684,"journal":{"name":"Statistical Analysis and Data Mining","volume":"22 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139953469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data collected today are increasingly complex and cannot be analyzed using regular statistical methods. Matrix variate time series data, in which the observations in the time series are matrices, are one such example. Herein, we introduce a set of three hidden Markov models with skewed matrix variate emission distributions for modeling matrix variate time series data. Compared to the hidden Markov model with matrix variate normal emissions, the proposed models offer greater flexibility and are capable of modeling skewness in time series data. Parameter estimation is performed using an expectation-maximization algorithm. We then consider both simulated data and salary data for public Texas universities.
{"title":"Modeling matrix variate time series via hidden Markov models with skewed emissions","authors":"Michael P. B. Gallaugher, Xuwen Zhu","doi":"10.1002/sam.11666","DOIUrl":"https://doi.org/10.1002/sam.11666","url":null,"abstract":"Data collected today have increasingly become more complex and cannot be analyzed using regular statistical methods. Matrix variate time series data is one such example where the observations in the time series are matrices. Herein, we introduce a set of three hidden Markov models using skewed matrix variate emission distributions for modeling matrix variate time series data. Compared to the hidden Markov model with matrix variate normal emissions, the proposed models present greater flexibility and are capable of modeling skewness in time series data. Parameter estimation is performed using an expectation maximization algorithm. We then look at both simulated data and salary data for public Texas universities.","PeriodicalId":48684,"journal":{"name":"Statistical Analysis and Data Mining","volume":"37 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139956631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Today's ever-increasing generation of streaming data demands novel data mining approaches tailored to dynamic data streams. Data streams are non-static in nature, continuously generated, and endless. They often suffer from class imbalance and undergo temporal drift. To address the classification of consecutive data instances within imbalanced data streams, this research introduces a new ensemble classification algorithm called Rarity Updated Ensemble with Oversampling (RUEO). The RUEO approach is specifically designed to exhibit robustness against class imbalance by incorporating an imbalance-specific criterion to assess the efficacy of the base classifiers and employing an oversampling technique to reduce the imbalance in the training data. The RUEO algorithm was evaluated on a set of 20 data streams and compared against 14 baseline algorithms. On average, the proposed RUEO algorithm achieves an average-accuracy of 0.69 on the real-world data streams, while the chunk-based algorithms AWE, AUE, and KUE achieve average-accuracies of 0.48, 0.65, and 0.66, respectively. The statistical analysis, conducted using the Wilcoxon test, reveals a statistically significant improvement in average-accuracy for the proposed RUEO algorithm when compared to 12 out of the 14 baseline algorithms. The source code and experimental results of this research work will be publicly available at https://github.com/vkiani/RUEO.
{"title":"Rarity updated ensemble with oversampling: An ensemble approach to classification of imbalanced data streams","authors":"Zahra Nouri, Vahid Kiani, Hamid Fadishei","doi":"10.1002/sam.11662","DOIUrl":"https://doi.org/10.1002/sam.11662","url":null,"abstract":"Today's ever-increasing generation of streaming data demands novel data mining approaches tailored to mining dynamic data streams. Data streams are non-static in nature, continuously generated, and endless. They often suffer from class imbalance and undergo temporal drift. To address the classification of consecutive data instances within imbalanced data streams, this research introduces a new ensemble classification algorithm called Rarity Updated Ensemble with Oversampling (RUEO). The RUEO approach is specifically designed to exhibit robustness against class imbalance by incorporating an imbalance-specific criterion to assess the efficacy of the base classifiers and employing an oversampling technique to reduce the imbalance in the training data. The RUEO algorithm was evaluated on a set of 20 data streams and compared against 14 baseline algorithms. On average, the proposed RUEO algorithm achieves an average-accuracy of 0.69 on the real-world data streams, while the chunk-based algorithms AWE, AUE, and KUE achieve average-accuracies of 0.48, 0.65, and 0.66, respectively. The statistical analysis, conducted using the Wilcoxon test, reveals a statistically significant improvement in average-accuracy for the proposed RUEO algorithm when compared to 12 out of the 14 baseline algorithms. The source code and experimental results of this research work will be publicly available at https://github.com/vkiani/RUEO.","PeriodicalId":48684,"journal":{"name":"Statistical Analysis and Data Mining","volume":"247 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139769619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Forensic questioned document examiners still largely rely on visual assessments and expert judgment to determine the provenance of a handwritten document. Here, we propose a novel approach to objectively compare two handwritten documents using a deep learning algorithm. First, we implement a bootstrapping technique to segment document data into smaller units, as a means to enhance the efficiency of the deep learning process. Next, we use a transfer learning algorithm to systematically extract document features. The unique characteristics of the document data are then represented as latent vectors. Finally, the similarity between two handwritten documents is quantified via the cosine similarity between the two latent vectors. We illustrate the use of the proposed method by implementing it on a variety of collections of handwritten documents with different attributes, and show that in most cases, we can accurately classify pairs of documents into same or different author categories.
{"title":"A deep learning approach for the comparison of handwritten documents using latent feature vectors","authors":"Juhyeon Kim, Soyoung Park, Alicia Carriquiry","doi":"10.1002/sam.11660","DOIUrl":"https://doi.org/10.1002/sam.11660","url":null,"abstract":"Forensic questioned document examiners still largely rely on visual assessments and expert judgment to determine the provenance of a handwritten document. Here, we propose a novel approach to objectively compare two handwritten documents using a deep learning algorithm. First, we implement a bootstrapping technique to segment document data into smaller units, as a means to enhance the efficiency of the deep learning process. Next, we use a transfer learning algorithm to systematically extract document features. The unique characteristics of the document data are then represented as latent vectors. Finally, the similarity between two handwritten documents is quantified via the cosine similarity between the two latent vectors. We illustrate the use of the proposed method by implementing it on a variety of collections of handwritten documents with different attributes, and show that in most cases, we can accurately classify pairs of documents into same or different author categories.","PeriodicalId":48684,"journal":{"name":"Statistical Analysis and Data Mining","volume":"136 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139769432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we propose a sparse Bayesian procedure with global and local (GL) shrinkage priors for the problems of variable selection and classification in high-dimensional logistic regression models. In particular, we consider two types of GL shrinkage priors for the regression coefficients, the horseshoe (HS) prior and the normal-gamma (NG) prior, and then specify a correlated prior for the binary inclusion vector to distinguish models of the same size. The GL priors are then combined with mixture representations of the logistic distribution to construct a hierarchical Bayes model that allows efficient implementation of Markov chain Monte Carlo (MCMC) to generate samples from the posterior distribution. We carry out simulations to compare the finite-sample performance of the proposed Bayesian method with existing Bayesian methods in terms of the accuracy of variable selection and prediction. Finally, two real-data applications are provided for illustrative purposes.
{"title":"Sparse Bayesian variable selection in high-dimensional logistic regression models with correlated priors","authors":"Zhuanzhuan Ma, Zifei Han, Souparno Ghosh, Liucang Wu, Min Wang","doi":"10.1002/sam.11663","DOIUrl":"https://doi.org/10.1002/sam.11663","url":null,"abstract":"In this paper, we propose a sparse Bayesian procedure with global and local (GL) shrinkage priors for the problems of variable selection and classification in high-dimensional logistic regression models. In particular, we consider two types of GL shrinkage priors for the regression coefficients, the horseshoe (HS) prior and the normal-gamma (NG) prior, and then specify a correlated prior for the binary vector to distinguish models with the same size. The GL priors are then combined with mixture representations of logistic distribution to construct a hierarchical Bayes model that allows efficient implementation of a Markov chain Monte Carlo (MCMC) to generate samples from posterior distribution. We carry out simulations to compare the finite sample performances of the proposed Bayesian method with the existing Bayesian methods in terms of the accuracy of variable selection and prediction. Finally, two real-data applications are provided for illustrative purposes.","PeriodicalId":48684,"journal":{"name":"Statistical Analysis and Data Mining","volume":"22 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139646214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Agent-based models (ABMs) have been widely used to study infectious disease transmission by simulating the behaviors and interactions of autonomous individuals called agents. In an ABM, agent states, for example infected or susceptible, are assigned according to a set of simple rules, and the complex dynamics of disease transmission are described by the collective states of agents over time. Despite their flexibility in real-world modeling, ABMs have received less attention from statisticians because of their intractable likelihood functions, which make it difficult to estimate parameters and quantify uncertainty around model outputs. To overcome this limitation, a Bayesian framework that treats the entire ABM as a hidden Markov model has been previously proposed. However, the existing approach is limited by computational inefficiency and unidentifiability of parameters. We extend the ABM approach within a Bayesian framework to study infectious disease transmission while addressing these limitations. We estimate the hidden states, represented by individual agents' states over time, and the model parameters by applying an improved particle Markov chain Monte Carlo algorithm that accounts for computational efficiency. We further evaluate the performance of the approach for parameter recovery and prediction, along with its sensitivity to prior assumptions, under various simulation conditions. Finally, we apply the proposed approach to the study of the COVID-19 outbreak on the Diamond Princess cruise ship. We examine differences in transmission by key demographic characteristics, while considering two different networks and the limited COVID-19 testing on the cruise.
{"title":"Considerations in Bayesian agent-based modeling for the analysis of COVID-19 data","authors":"Seungha Um, Samrachana Adhikari","doi":"10.1002/sam.11655","DOIUrl":"https://doi.org/10.1002/sam.11655","url":null,"abstract":"Agent-based model (ABM) has been widely used to study infectious disease transmission by simulating behaviors and interactions of autonomous individuals called agents. In the ABM, agent states, for example infected or susceptible, are assigned according to a set of simple rules, and a complex dynamics of disease transmission is described by the collective states of agents over time. Despite the flexibility in real-world modeling, ABMs have received less attention by statisticians because of the intractable likelihood functions which lead to difficulty in estimating parameters and quantifying uncertainty around model outputs. To overcome this limitation, a Bayesian framework that treats the entire ABM as a Hidden Markov Model has been previously proposed. However, existing approach is limited due to computational inefficiency and unidentifiability of parameters. We extend the ABM approach within Bayesian framework to study infectious disease transmission addressing these limitations. We estimate the hidden states, represented by individual agent's states over time, and the model parameters by applying an improved particle Markov Chain Monte Carlo algorithm, that accounts for computing efficiency. We further evaluate the performance of the approach for parameter recovery and prediction, along with sensitivity to prior assumptions under various simulation conditions. Finally, we apply the proposed approach to the study of COVID-19 outbreak on Diamond Princess cruise ship. We examine the differences in transmission by key demographic characteristics, while considering two different networks and limited COVID-19 testing in the cruise.","PeriodicalId":48684,"journal":{"name":"Statistical Analysis and Data Mining","volume":"4 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139582015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}