In this paper, the eigenvalue problem of real symmetric interval matrices is studied. First, in the case of $2 \times 2$ real symmetric interval matrices, all four endpoints of the two eigenvalue intervals are determined. These are not necessarily eigenvalues of vertex matrices, but it is shown that such a real symmetric interval matrix can be constructed from the original one. Then, necessary and sufficient conditions are provided for the disjointness of the eigenvalue intervals. In the general $n \times n$ case, by a result of Hertz, a set of special vertex matrices determines the maximal eigenvalue, and a similar statement holds for the minimal one. In a special case, namely when the right endpoints of the off-diagonal intervals are not smaller than the absolute values of the left ones, he concluded that the vertex matrix of the right endpoints provides the maximal eigenvalue. Generalizing this, it is shown that for real symmetric interval matrices with a special sign pattern, a single vertex matrix determines one of the extremal bounds.
{"title":"On eigenvalues of real symmetric interval matrices: Sharp bounds and disjointness","authors":"Gábor Zoltan Faragó, Róbert Vajda","doi":"10.13001/ela.2022.7317","DOIUrl":"https://doi.org/10.13001/ela.2022.7317","url":null,"abstract":"In this paper, the eigenvalue problem of real symmetric interval matrices is studied. First, in the case of $2 times 2$ real symmetric interval matrices, all the four endpoints of the two eigenvalue intervals are determined. These are not necessarily eigenvalues of vertex matrices, but it is shown that such a real symmetric interval matrix can be constructed from the original one. Then, necessary and sufficient conditions are provided for the disjointness of eigenvalue intervals. In the general $ntimes n$ case, due to Hertz, a set of special vertex matrices determines the maximal eigenvalue and a similar statement holds for the minimal one. In a special case, namely if the right endpoints of the off-diagonal intervals are not smaller than the absolute value of the left ones, he concluded the vertex matrix of the right endpoints provides the maximal eigenvalue. Generalizing it, it is shown that in the case of real symmetric interval matrices with special sign pattern, a single vertex matrix determines one of the extremal bounds.","PeriodicalId":50540,"journal":{"name":"Electronic Journal of Linear Algebra","volume":"1 1","pages":""},"PeriodicalIF":0.7,"publicationDate":"2023-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"66372396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The eccentricity matrix of a connected graph $G$, denoted by $\mathcal{E}(G)$, is obtained from the distance matrix of $G$ by keeping the largest nonzero entries in each row and each column and leaving zeros in the remaining ones. The $\mathcal{E}$-eigenvalues of $G$ are the eigenvalues of $\mathcal{E}(G)$. The largest modulus of an $\mathcal{E}$-eigenvalue is the $\mathcal{E}$-spectral radius of $G$. The $\mathcal{E}$-energy of $G$ is the sum of the absolute values of all $\mathcal{E}$-eigenvalues of $G$. In this article, we study some of the extremal problems for eccentricity matrices of complements of trees and characterize the extremal graphs. First, we determine the unique tree whose complement has minimum (respectively, maximum) $\mathcal{E}$-spectral radius among the complements of trees. Then, we prove that the $\mathcal{E}$-eigenvalues of the complement of a tree are symmetric about the origin. As a consequence of these results, we characterize the trees whose complement has minimum (respectively, maximum) least $\mathcal{E}$-eigenvalue among the complements of trees. Finally, we discuss the extremal problems for the second largest $\mathcal{E}$-eigenvalue and the $\mathcal{E}$-energy of complements of trees and characterize the extremal graphs. As an application, we obtain Nordhaus-Gaddum-type lower bounds for the second largest $\mathcal{E}$-eigenvalue and $\mathcal{E}$-energy of a tree and its complement.
{"title":"Extremal problems for the eccentricity matrices of complements of trees","authors":"Iswar Mahato, M. Kannan","doi":"10.13001/ela.2023.7781","DOIUrl":"https://doi.org/10.13001/ela.2023.7781","url":null,"abstract":"The eccentricity matrix of a connected graph $G$, denoted by $mathcal{E}(G)$, is obtained from the distance matrix of $G$ by keeping the largest nonzero entries in each row and each column and leaving zeros in the remaining ones. The $mathcal{E}$-eigenvalues of $G$ are the eigenvalues of $mathcal{E}(G)$. The largest modulus of an eigenvalue is the $mathcal{E}$-spectral radius of $G$. The $mathcal{E}$-energy of $G$ is the sum of the absolute values of all $mathcal{E}$-eigenvalues of $G$. In this article, we study some of the extremal problems for eccentricity matrices of complements of trees and characterize the extremal graphs. First, we determine the unique tree whose complement has minimum (respectively, maximum) $mathcal{E}$-spectral radius among the complements of trees. Then, we prove that the $mathcal{E}$-eigenvalues of the complement of a tree are symmetric about the origin. As a consequence of these results, we characterize the trees whose complement has minimum (respectively, maximum) least $mathcal{E}$-eigenvalues among the complements of trees. Finally, we discuss the extremal problems for the second largest $mathcal{E}$-eigenvalue and the $mathcal{E}$-energy of complements of trees and characterize the extremal graphs. As an application, we obtain a Nordhaus-Gaddum-type lower bounds for the second largest $mathcal{E}$-eigenvalue and $mathcal{E}$-energy of a tree and its complement.","PeriodicalId":50540,"journal":{"name":"Electronic Journal of Linear Algebra","volume":" ","pages":""},"PeriodicalIF":0.7,"publicationDate":"2023-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42122440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new method of using the numerical range of a matrix to bound the optimal value of certain optimization problems over real tensor product vectors is presented. This bound is stronger than the trivial bounds based on eigenvalues and can be computed significantly faster than bounds provided by semidefinite programming relaxations. Numerous applications to other hard linear algebra problems are discussed, such as showing that a real subspace of matrices contains no rank-one matrix, and showing that a linear map acting on matrices is positive.
{"title":"Bounding real tensor optimizations via the numerical range","authors":"N. Johnston, Logan Pipes","doi":"10.13001/ela.2023.7635","DOIUrl":"https://doi.org/10.13001/ela.2023.7635","url":null,"abstract":"A new method of using the numerical range of a matrix to bound the optimal value of certain optimization problems over real tensor product vectors is presented. This bound is stronger than the trivial bounds based on eigenvalues and can be computed significantly faster than bounds provided by semidefinite programming relaxations. Numerous applications to other hard linear algebra problems are discussed, such as showing that a real subspace of matrices contains no rank-one matrix, and showing that a linear map acting on matrices is positive.","PeriodicalId":50540,"journal":{"name":"Electronic Journal of Linear Algebra","volume":" ","pages":""},"PeriodicalIF":0.7,"publicationDate":"2022-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45270692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A block matrix $\left[ \begin{smallmatrix} A & X \\ X^{*} & B \end{smallmatrix} \right]$ is positive partial transpose (PPT) if both $\left[ \begin{smallmatrix} A & X \\ X^{*} & B \end{smallmatrix} \right]$ and $\left[ \begin{smallmatrix} A & X^{*} \\ X & B \end{smallmatrix} \right]$ are positive semi-definite. This class is significant in studying the separability criterion for density matrices. The current paper presents new relations for such matrices, including some equivalent forms and new related inequalities that extend results from the literature. At the end of the paper, we present related results for positive semi-definite block matrices, of a form similar to those presented for PPT matrices, with applications that include a significant improvement of numerical radius inequalities.
{"title":"On positive and positive partial transpose matrices","authors":"I. Gumus, H. Moradi, M. Sababheh","doi":"10.13001/ela.2022.7333","DOIUrl":"https://doi.org/10.13001/ela.2022.7333","url":null,"abstract":"A block matrix $left[ begin{smallmatrix}A & X {{X}^{*}} & B end{smallmatrix} right]$ is positive partial transpose (PPT) if both $left[ begin{smallmatrix}A & X {{X}^{*}} & B end{smallmatrix} right]$ and $left[ begin{smallmatrix}A & {{X}^{*}} X & B end{smallmatrix} right]$ are positive semi-definite. This class is significant in studying the separability criterion for density matrices. The current paper presents new relations for such matrices. This includes some equivalent forms and new related inequalities that extend some results from the literature. In the end of the paper, we present some related results for positive semi-definite block matrices, which have similar forms as those presented for PPT matrices, with applications that include significant improvement of numerical radius inequalities.","PeriodicalId":50540,"journal":{"name":"Electronic Journal of Linear Algebra","volume":" ","pages":""},"PeriodicalIF":0.7,"publicationDate":"2022-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43882686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For some families of classical orthogonal polynomials defined on appropriate intervals, it is shown that the corresponding Jacobi matrices are totally positive and their bidiagonal factorizations can be accurately computed. By exploiting these facts, an algorithm to compute with high relative accuracy the eigenvalues of those Jacobi matrices, and consequently the nodes of Gaussian quadrature formulae for those families of orthogonal polynomials, is presented. An algorithm is also presented for the computation of the eigenvectors of these Jacobi matrices, and hence the weights of Gaussian quadrature formulae. Although in this case high relative accuracy is not theoretically guaranteed, the numerical experiments with our algorithm provide very accurate results.
{"title":"Accurate computations with totally positive matrices applied to the computation of Gaussian quadrature formulae","authors":"A. Marco, José‐Javier Martínez, Raquel Viaña","doi":"10.13001/ela.2022.7185","DOIUrl":"https://doi.org/10.13001/ela.2022.7185","url":null,"abstract":"For some families of classical orthogonal polynomials defined on appropriate intervals, it is shown that the corresponding Jacobi matrices are totally positive and their bidiagonal factorizations can be accurately computed. By exploiting these facts, an algorithm to compute with high relative accuracy the eigenvalues of those Jacobi matrices, and consequently the nodes of Gaussian quadrature formulae for those families of orthogonal polynomials, is presented. An algorithm is also presented for the computation of the eigenvectors of these Jacobi matrices, and hence the weights of Gaussian quadrature formulae. Although in this case high relative accuracy is not theoretically guaranteed, the numerical experiments with our algorithm provide very accurate results.","PeriodicalId":50540,"journal":{"name":"Electronic Journal of Linear Algebra","volume":" ","pages":""},"PeriodicalIF":0.7,"publicationDate":"2022-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44904058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A sign pattern is an array with entries in $\{+,-,0\}$. A real matrix $Q$ is row orthogonal if $QQ^T = I$. The Strong Inner Product Property (SIPP), introduced in [B.A. Curtis and B.L. Shader, Sign patterns of orthogonal matrices and the strong inner product property, Linear Algebra Appl. 592: 228-259, 2020], is an important tool when determining whether a sign pattern allows row orthogonality because it guarantees there is a nearby matrix with the same property, allowing zero entries to be perturbed to nonzero entries, while preserving the sign of every nonzero entry. This paper uses the SIPP to initiate the study of conditions under which random sign patterns allow row orthogonality with high probability. Building on prior work, $5\times n$ nowhere zero sign patterns that minimally allow orthogonality are determined. Conditions on zero entries in a sign pattern are established that guarantee any row orthogonal matrix with such a sign pattern has the SIPP.
{"title":"Orthogonal realizations of random sign patterns and other applications of the SIPP","authors":"Zachary Brennan, Christopher Cox, Bryan A. Curtis, Enrique Gomez-Leos, Kimberly P. Hadaway, L. Hogben, Conor Thompson","doi":"10.13001/ela.2023.7579","DOIUrl":"https://doi.org/10.13001/ela.2023.7579","url":null,"abstract":"A sign pattern is an array with entries in ${+,-,0}$. A real matrix $Q$ is row orthogonal if $QQ^T = I$. The Strong Inner Product Property (SIPP), introduced in [B.A. Curtis and B.L. Shader, Sign patterns of orthogonal matrices and the strong inner product property, Linear Algebra Appl. 592: 228-259, 2020], is an important tool when determining whether a sign pattern allows row orthogonality because it guarantees there is a nearby matrix with the same property, allowing zero entries to be perturbed to nonzero entries, while preserving the sign of every nonzero entry. This paper uses the SIPP to initiate the study of conditions under which random sign patterns allow row orthogonality with high probability. Building on prior work, $5times n$ nowhere zero sign patterns that minimally allow orthogonality are determined. Conditions on zero entries in a sign pattern are established that guarantee any row orthogonal matrix with such a sign pattern has the SIPP.","PeriodicalId":50540,"journal":{"name":"Electronic Journal of Linear Algebra","volume":" ","pages":""},"PeriodicalIF":0.7,"publicationDate":"2022-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48563133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The construction of matrices with prescribed eigenvalues is a kind of inverse eigenvalue problem. The authors proposed an algorithm for constructing band oscillatory matrices with prescribed eigenvalues based on the extended discrete hungry Toda equation (Numer. Algor. 75:1079--1101, 2017). In this paper, we develop a new algorithm for constructing band matrices with prescribed eigenvalues based on a generalization of the extended discrete hungry Toda equation. The new algorithm improves on the previous one in that it can produce more generic band matrices, in a certain sense. We compare the new algorithm with the previous one on numerical examples. In particular, we show an example of band oscillatory matrices that the new algorithm can produce but the previous one cannot.
{"title":"An improved algorithm for solving an inverse eigenvalue problem for band matrices","authors":"Kanae Akaiwa, Akira Yoshida, Koichi Kondo","doi":"10.13001/ela.2022.7475","DOIUrl":"https://doi.org/10.13001/ela.2022.7475","url":null,"abstract":"The construction of matrices with prescribed eigenvalues is a kind of inverse eigenvalue problems. The authors proposed an algorithm for constructing band oscillatory matrices with prescribed eigenvalues based on the extended discrete hungry Toda equation (Numer. Algor. 75:1079--1101, 2017). In this paper, we develop a new algorithm for constructing band matrices with prescribed eigenvalues based on a generalization of the extended discrete hungry Toda equation. The new algorithm improves the previous algorithm so that the new one can produce more generic band matrices than the previous one in a certain sense. We compare the new algorithm with the previous one by numerical examples. Especially, we show an example of band oscillatory matrices which the new algorithm can produce but the previous one cannot.","PeriodicalId":50540,"journal":{"name":"Electronic Journal of Linear Algebra","volume":" ","pages":""},"PeriodicalIF":0.7,"publicationDate":"2022-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48922608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this article, some new results on $M$-matrices, $H$-matrices and their inverse classes are proved. Specifically, we study when a singular $Z$-matrix is an $M$-matrix, as well as convex combinations of $H$-matrices, almost monotone $H$-matrices, and Cholesky factorizations of $H$-matrices.
{"title":"New results on $M$-matrices, $H$-matrices and their inverse classes","authors":"S. Mondal, K. Sivakumar, M. Tsatsomeros","doi":"10.13001/ela.2022.7177","DOIUrl":"https://doi.org/10.13001/ela.2022.7177","url":null,"abstract":"In this article, some new results on $M$-matrices, $H$-matrices and their inverse classes are proved. Specifically, we study when a singular $Z$-matrix is an $M$-matrix, convex combinations of $H$-matrices, almost monotone $H$-matrices and Cholesky factorizations of $H$-matrices.","PeriodicalId":50540,"journal":{"name":"Electronic Journal of Linear Algebra","volume":" ","pages":""},"PeriodicalIF":0.7,"publicationDate":"2022-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48466615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present an alternative algorithm and implementation for the Hessenberg-triangular reduction, an essential step in the QZ algorithm for solving generalized eigenvalue problems. The reduction step has a cubic computational complexity, and hence, high-performance implementations are compulsory for keeping the computing time under control. Our algorithm is of simple mathematical nature and relies on the connection between generalized and classical eigenvalue problems. Via system solving and the classical reduction of a single matrix to Hessenberg form, we are able to get a theoretically equivalent reduction to Hessenberg-triangular form. As a result, we can perform most of the computational work by relying on existing, highly efficient implementations, which make extensive use of blocking. The accompanying error analysis shows that preprocessing and iterative refinement can be necessary to achieve accurate results. Numerical results show competitiveness with existing implementations.
{"title":"A novel, blocked algorithm for the reduction to Hessenberg-triangular form","authors":"Thijs Steel, R. Vandebril","doi":"10.13001/ela.2022.6483","DOIUrl":"https://doi.org/10.13001/ela.2022.6483","url":null,"abstract":"We present an alternative algorithm and implementation for theHessenberg-triangular reduction, an essential step in the QZalgorithm for solving generalized eigenvalue problems. Thereduction step has a cubic computational complexity, and hence,high-performance implementations are compulsory for keeping thecomputing time under control. Our algorithm is of simplemathematical nature and relies on the connection betweengeneralized and classical eigenvalue problems. Via system solving andthe classical reduction of a single matrix to Hessenberg form, we areable to get a theoretically equivalent reduction toHessenberg-triangular form. As a result, we can perform most of thecomputational work by relying on existing, highly efficient implementations,which make extensive use of blocking. The accompanying error analysisshows that preprocessing and iterative refinement can benecessary to achieve accurate results. Numerical results showcompetitiveness with existing implementations.","PeriodicalId":50540,"journal":{"name":"Electronic Journal of Linear Algebra","volume":" ","pages":""},"PeriodicalIF":0.7,"publicationDate":"2022-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47157692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The adjugate matrix of $G$, denoted by $\operatorname{adj}(G)$, is the adjugate of the matrix $x\mathbf{I}-\mathbf{A}$, where $\mathbf{A}$ is the adjacency matrix of $G$. The polynomial reconstruction problem (PRP) asks if the characteristic polynomial of a graph $G$ can always be recovered from the multiset $\mathcal{PD}(G)$ containing the $n$ characteristic polynomials of the vertex-deleted subgraphs of $G$. Noting that the $n$ diagonal entries of $\operatorname{adj}(G)$ are precisely the elements of $\mathcal{PD}(G)$, we investigate variants of the PRP in which multisets containing entries from $\operatorname{adj}(G)$ successfully reconstruct the characteristic polynomial of $G$. Furthermore, we interpret the entries off the diagonal of $\operatorname{adj}(G)$ in terms of characteristic polynomials of graphs, allowing us to solve versions of the PRP that utilize alternative multisets to $\mathcal{PD}(G)$ containing polynomials related to characteristic polynomials of graphs, rather than entries from $\operatorname{adj}(G)$.
{"title":"Recovering the characteristic polynomial of a graph from entries of the adjugate matrix","authors":"Alexander Farrugia","doi":"10.13001/ela.2022.7231","DOIUrl":"https://doi.org/10.13001/ela.2022.7231","url":null,"abstract":"The adjugate matrix of $G$, denoted by $operatorname{adj}(G)$, is the adjugate of the matrix $xmathbf{I}-mathbf{A}$, where $mathbf{A}$ is the adjacency matrix of $G$. The polynomial reconstruction problem (PRP) asks if the characteristic polynomial of a graph $G$ can always be recovered from the multiset $operatorname{mathcal{PD}}(G)$ containing the $n$ characteristic polynomials of the vertex-deleted subgraphs of $G$. Noting that the $n$ diagonal entries of $operatorname{adj}(G)$ are precisely the elements of $operatorname{mathcal{PD}}(G)$, we investigate variants of the PRP in which multisets containing entries from $operatorname{adj}(G)$ successfully reconstruct the characteristic polynomial of $G$. Furthermore, we interpret the entries off the diagonal of $operatorname{adj}(G)$ in terms of characteristic polynomials of graphs, allowing us to solve versions of the PRP that utilize alternative multisets to $operatorname{mathcal{PD}}(G)$ containing polynomials related to characteristic polynomials of graphs, rather than entries from $operatorname{adj}(G)$.","PeriodicalId":50540,"journal":{"name":"Electronic Journal of Linear Algebra","volume":" ","pages":""},"PeriodicalIF":0.7,"publicationDate":"2022-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47603506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}