"Optimal rate for prediction when predictor and response are functions"
Yang Zhou, Dirong Chen
Analysis and Applications. Pub Date: 2020-06-06. DOI: 10.1142/s0219530520500037
In functional data analysis, linear prediction problems have been widely studied based on the functional linear regression model. However, a restrictive condition is needed to ensure the existence of the coefficient function. In this paper, a general linear prediction model is considered in the framework of reproducing kernel Hilbert spaces, which includes both the functional linear regression model and the point impact model. We show that, from the point of view of prediction, this general model works well even when the coefficient function does not exist. Moreover, under mild conditions, the minimax optimal rate of convergence is established for prediction under the integrated mean squared prediction error. In particular, the rate reduces to the existing result when the coefficient function exists.
"Sparse additive machine with ramp loss"
Hong Chen, Changying Guo, Huijuan Xiong, Yingjie Wang
Analysis and Applications. Pub Date: 2020-05-27. DOI: 10.1142/s0219530520400011
Sparse additive machines (SAMs) have attracted increasing attention in high-dimensional classification due to their representation flexibility and interpretability. However, most existing method...
"Balanced joint maximum mean discrepancy for deep transfer learning"
Chuangji Meng, Cunlu Xu, Qin Lei, W. Su, Jinzhao Wu
Analysis and Applications. Pub Date: 2020-05-27. DOI: 10.1142/s0219530520400035
Recent studies have revealed that deep networks can learn transferable features that generalize well to novel tasks with little or no labeled data for domain adaptation. However, it remains unclear which components of the feature representations can capture the original joint distributions using the joint maximum mean discrepancy (JMMD) within deep architectures. We present a new backpropagation algorithm for JMMD, called the Balanced Joint Maximum Mean Discrepancy (B-JMMD), to further reduce the domain discrepancy. B-JMMD achieves balanced distribution adaptation for deep network architectures and can be treated as an improved version of JMMD's backpropagation algorithm. The proposed method adaptively leverages the importance of the marginal and conditional distributions behind multiple domain-specific layers across domains to obtain a good match for the joint distributions in a second-order reproducing kernel Hilbert space. Learning can be performed by a special form of stochastic gradient descent, in which the gradient is computed by backpropagation with a balanced distribution adaptation strategy. Theoretical analysis shows that the proposed B-JMMD is superior to the JMMD method. Experiments confirm that our method yields state-of-the-art results on standard datasets.
"Emergence of mono-cluster flocking in the thermomechanical Cucker–Smale model under switching topologies"
Jiu‐Gang Dong, Seung‐Yeal Ha, Doheon Kim
Analysis and Applications. Pub Date: 2020-05-27. DOI: 10.1142/s0219530520500025
We study the emergent dynamics of the thermomechanical Cucker–Smale (TCS) model with switching network topologies. The TCS model is a generalized CS model with an extra internal dynamical variable called "temperature"; its isothermal case coincides exactly with the CS model for flocking. In previous studies, the emergent dynamics of the TCS model was mostly restricted to static network topologies such as the complete graph, connected graphs with positive in- and out-degrees at each node, and digraphs with spanning trees. In this paper, we consider switching network topologies with a spanning tree in a sequence of time-blocks, and present two sufficient frameworks, in terms of initial data and system parameters, leading to asymptotic mono-cluster flocking. In the first framework, in which the sizes of the time-blocks are uniformly bounded by a positive constant, we show that the temperature and velocity diameters tend to zero exponentially fast, while the spatial diameter remains uniformly bounded. In the second framework, we admit a situation in which the sizes of the time-blocks may grow mildly, by a logarithmic function. In the latter framework, the temperature and velocity diameters tend to zero at least at an algebraic rate.
"Weighted p-regular kernels for reproducing kernel Hilbert spaces and Mercer Theorem"
L. Agud, J. Calabuig, E. Pérez
Analysis and Applications. Pub Date: 2020-05-01. DOI: 10.1142/s0219530519500179
Let [Formula: see text] be a finite measure space and consider a Banach function space [Formula: see text]. Motivated by some previous papers and current applications, we provide a general framework for representing reproducing kernel Hilbert spaces as subsets of Köthe–Bochner (vector-valued) function spaces. We analyze operator-valued kernels [Formula: see text] that define integration maps [Formula: see text] between Köthe–Bochner spaces of Hilbert-valued functions [Formula: see text]. We present a reduction procedure that allows one to find a factorization of the corresponding kernel operator through weighted Bochner spaces [Formula: see text] and [Formula: see text], where [Formula: see text], under the assumption of [Formula: see text]-concavity of [Formula: see text]. Equivalently, a new kernel obtained by multiplying [Formula: see text] by scalar functions can be given in such a way that the kernel operator is defined from [Formula: see text] to [Formula: see text] in a natural way. As an application, we prove a new version of the Mercer theorem for matrix-valued weighted kernels.
"On the K-functional in learning theory"
Bao-huai Sheng, Jianli Wang
Analysis and Applications. Pub Date: 2020-05-01. DOI: 10.1142/s0219530519500192
[Formula: see text]-functionals are used in the learning theory literature to study approximation errors in kernel-based regularization schemes. In this paper, we study the approximation error and the [Formula: see text]-functionals in [Formula: see text] spaces with [Formula: see text]. To this end, we give a new viewpoint on a reproducing kernel Hilbert space (RKHS) in terms of fractional derivatives, treating powers of the induced integral operator as fractional derivatives of various orders. A generalized translation operator is then defined by means of Fourier multipliers, from which a generalized modulus of smoothness is constructed. Some general strong equivalences between the moduli of smoothness and the [Formula: see text]-functionals are established. As applications, strong equivalences between these two families of quantities on the unit sphere and the unit ball are given explicitly.
"Sufficient ensemble size for random matrix theory-based handling of singular covariance matrices"
A. Kabán
Analysis and Applications. Pub Date: 2020-04-15. DOI: 10.1142/s0219530520400072
Singular covariance matrices are frequently encountered in both machine learning and optimization problems, most commonly due to high-dimensional data and insufficient sample sizes. Among the many methods of regularization, here we focus on a relatively recent random matrix-theoretic approach: the idea is to create well-conditioned approximations of a singular covariance matrix and its inverse by taking the expectation over its random projections. We are interested in the error of a Monte Carlo implementation of this approach, which in practice allows subsequent parallel processing in low dimensions. We find that [Formula: see text] random projections, where [Formula: see text] is the size of the original matrix, are sufficient for the Monte Carlo error to become negligible, in the sense of the expected spectral norm difference, for both covariance and inverse covariance approximation, in the latter case under mild assumptions.
"Operator Valued Positive Definite Kernels and Differentiable Universality"
J. Guella
Analysis and Applications. Pub Date: 2020-03-25. DOI: 10.1142/s0219530521500378
We present a characterization of positive definite operator-valued kernels that are universal or $C_{0}$-universal, and apply these characterizations to a family of operator-valued kernels that are shown to be well behaved. We then obtain a characterization of operator-valued differentiable kernels that are $C^{q}$-universal and $C_{0}^{q}$-universal. In order to obtain such characterizations and examples, we generalize some well-known results concerning the structure of differentiable kernels to the operator-valued context. In the examples, emphasis is placed on radial kernels on Euclidean spaces.
"Wellposedness and regularity of a variable-order space-time fractional diffusion equation"
Xiangcheng Zheng, Hong Wang
Analysis and Applications 18, 615-638. Pub Date: 2020-03-02. DOI: 10.1142/s0219530520500013
We prove wellposedness of a variable-order linear space-time fractional diffusion equation in multiple space dimensions. In addition, we prove that the regularity of its solutions depends on the beh...
"Asymptotics of the Wilson polynomials"
Yutian Li, Xiang-Sheng Wang, R. Wong
Analysis and Applications. Pub Date: 2020-03-01. DOI: 10.1142/S0219530519500076
In this paper, we study the asymptotic behavior of the Wilson polynomials [Formula: see text] as their degree tends to infinity. These polynomials lie at the top level of the Askey scheme of hypergeometric orthogonal polynomials. Infinite asymptotic expansions are derived for these polynomials in various cases, for instance, (i) when the variable [Formula: see text] is fixed and (ii) when the variable is rescaled as [Formula: see text] with [Formula: see text]. Case (ii) has two subcases, namely, (a) the zero-free zone ([Formula: see text]) and (b) the oscillatory region ([Formula: see text]). Corresponding results are also obtained in the cases (iii) when [Formula: see text] lies in a neighborhood of the transition point [Formula: see text], and (iv) when [Formula: see text] is in a neighborhood of the transition point [Formula: see text]. The expansions in the last two cases hold uniformly in [Formula: see text]. Case (iv) is also the only case that remained unsettled in a sequence of works on the asymptotic analysis of linear difference equations.