The optimization problems involving local unitary and local contraction matrices and some Hermitian structures are considered in this paper. We establish a set of explicit formulas for calculating the maximal and minimal values of the ranks and inertias of the matrices $X_{1}X_{1}^{\ast}-P_{1}$, $X_{2}X_{2}^{\ast}-P_{1}$, $X_{3}X_{3}^{\ast}-P_{2}$ and $X_{4}X_{4}^{\ast}-P_{2}$ with respect to $X_{1}$, $X_{2}$, $X_{3}$ and $X_{4}$, respectively, where $P_{1}\in\mathbb{C}^{n_{1}\times n_{1}}$ and $P_{2}\in\mathbb{C}^{n_{2}\times n_{2}}$ are given, and $X_{1}$, $X_{2}$, $X_{3}$ and $X_{4}$ are submatrices of a general common solution $X$ to the pair of matrix equations $AX = C$, $XB = D$.
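For orientation, the display below is a hedged sketch of the kind of problem the paper treats; the block partition of $X$ is only one plausible reading of the abstract (row blocks matched to the sizes of $P_{1}$ and $P_{2}$) and is not taken from the paper itself.

% Illustrative sketch, assuming X is partitioned into a 2x2 block form whose
% row blocks have sizes n_1 and n_2, matching P_1 and P_2.
\[
X = \begin{pmatrix} X_{1} & X_{2}\\ X_{3} & X_{4} \end{pmatrix},
\qquad AX = C, \quad XB = D,
\]
\[
\max_{X_{1}}/\min_{X_{1}}\ \operatorname{rank}\bigl(X_{1}X_{1}^{\ast}-P_{1}\bigr),
\qquad
\max_{X_{1}}/\min_{X_{1}}\ i_{\pm}\bigl(X_{1}X_{1}^{\ast}-P_{1}\bigr),
\]
% where i_+ and i_- denote the numbers of positive and negative eigenvalues
% (the inertia) of a Hermitian matrix; analogous problems are posed for
% X_2, X_3 and X_4 with P_1 or P_2 as appropriate.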
{"title":"Some structures of submatrices in solution to the paire of matrix equations $ AX = C $, $ XB = D $","authors":"Radja Belkhiri, Sihem Guerarra","doi":"10.3934/mfc.2022023","DOIUrl":"https://doi.org/10.3934/mfc.2022023","url":null,"abstract":"<p style='text-indent:20px;'>The optimization problems involving local unitary and local contraction matrices and some Hermitian structures have been concedered in this paper. We establish a set of explicit formulas for calculating the maximal and minimal values of the ranks and inertias of the matrices <inline-formula><tex-math id=\"M3\">begin{document}$ X_{1}X_{1}^{ast}-P_{1} $end{document}</tex-math></inline-formula>, <inline-formula><tex-math id=\"M4\">begin{document}$ X_{2}X_{2}^{ast}-P_{1} $end{document}</tex-math></inline-formula>, <inline-formula><tex-math id=\"M5\">begin{document}$ X_{3}X_{3}^{ast}-P_{2} $end{document}</tex-math></inline-formula> and <inline-formula><tex-math id=\"M6\">begin{document}$ X_{4}X_{4}^{ast }-P_{2} $end{document}</tex-math></inline-formula>, with respect to <inline-formula><tex-math id=\"M7\">begin{document}$ X_{1} $end{document}</tex-math></inline-formula>, <inline-formula><tex-math id=\"M8\">begin{document}$ X_{2} $end{document}</tex-math></inline-formula>, <inline-formula><tex-math id=\"M9\">begin{document}$ X_{3} $end{document}</tex-math></inline-formula> and <inline-formula><tex-math id=\"M10\">begin{document}$ X_{4} $end{document}</tex-math></inline-formula> respectively, where <inline-formula><tex-math id=\"M11\">begin{document}$ P_{1}in mathbb{C} ^{n_{1}times n_{1}} $end{document}</tex-math></inline-formula>, <inline-formula><tex-math id=\"M12\">begin{document}$ P_{2}in mathbb{C} ^{n_{2}times n_{2}} $end{document}</tex-math></inline-formula> are given, <inline-formula><tex-math id=\"M13\">begin{document}$ X_{1} $end{document}</tex-math></inline-formula>, <inline-formula><tex-math id=\"M14\">begin{document}$ X_{2} $end{document}</tex-math></inline-formula>, <inline-formula><tex-math id=\"M15\">begin{document}$ X_{3} $end{document}</tex-math></inline-formula> and <inline-formula><tex-math id=\"M16\">begin{document}$ X_{4} $end{document}</tex-math></inline-formula> are submatrices in a general common solution <inline-formula><tex-math id=\"M17\">begin{document}$ X $end{document}</tex-math></inline-formula> to the paire of matrix equations <inline-formula><tex-math id=\"M18\">begin{document}$ AX = C $end{document}</tex-math></inline-formula>, <inline-formula><tex-math id=\"M19\">begin{document}$ XB = D. $end{document}</tex-math></inline-formula></p>","PeriodicalId":93334,"journal":{"name":"Mathematical foundations of computing","volume":"89 1","pages":"231-252"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88200490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The aim of this research article is to introduce a sequence of Szász-Schurer-Beta bivariate operators in terms of generalized exponential functions and to study their approximation properties. First, preliminary results and definitions are presented. We then establish convergence with the aid of the Korovkin theorem, and the order of approximation is obtained via the usual modulus of continuity, Peetre's K-functional and the Lipschitz maximal function. Lastly, the approximation properties of these sequences of operators are studied in Bögel space via the mixed modulus of continuity.
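For context, the classical operator on which such constructions build is recalled below; this is standard material, and the specific bivariate Schurer-Beta form introduced in the paper is not reproduced here.

% Classical Szász-Mirakjan operator; Dunkl-type analogues replace the
% exponential e^{nx} by a generalized (Dunkl) exponential, and the Schurer,
% Beta and bivariate modifications build further on this template.
\[
S_{n}(f;x) = e^{-nx}\sum_{k=0}^{\infty}\frac{(nx)^{k}}{k!}\,
f\!\left(\frac{k}{n}\right), \qquad x \ge 0 .
\]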
{"title":"Dunkl analouge of Sz$ acute{a} $sz Schurer Beta bivariate operators","authors":"V. Mishra, Mohd Raiz, N. Rao","doi":"10.3934/mfc.2022037","DOIUrl":"https://doi.org/10.3934/mfc.2022037","url":null,"abstract":"<p style='text-indent:20px;'>The motive of this research article is to introduce a sequence of Sz<inline-formula><tex-math id=\"M2\">begin{document}$ acute{a}sz $end{document}</tex-math></inline-formula> Schurer Beta bivariate operators in terms of generalization exponential functions and their approximation properties. Further, preliminaries results and definitions are presented. Moreover, we study existence of convergence with the aid of Korovkin theorem and order of approximation via usual modulus of continuity, Peetre's K-functional, Lipschitz maximal functional. Lastly, approximation properties of these sequences of operators are studied in B<inline-formula><tex-math id=\"M3\">begin{document}$ ddot{o} $end{document}</tex-math></inline-formula>gel space via mixed modulus of continuity.</p>","PeriodicalId":93334,"journal":{"name":"Mathematical foundations of computing","volume":"4 1","pages":"651-669"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78307449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Expression recognition is an important research direction in the field of psychology, with applications in traffic, medicine, security, and criminal investigation, since human feelings are conveyed through the muscles around the mouth, the eyes, and the rest of the face. Most existing work uses convolutional neural networks (CNNs) to recognize face images and thus classify expressions, which achieves good results, but CNNs are limited in their ability to extract global features. The Transformer has advantages for global feature extraction, but it is more computationally intensive and requires a large amount of training data. In this paper we therefore use a hierarchical Transformer, namely the Swin Transformer, for the expression recognition task, which greatly reduces the computational cost. At the same time, it is fused with a CNN model to propose a network architecture that combines the Transformer and the CNN; to the best of our knowledge, we are the first to combine the Swin Transformer with a CNN for an expression recognition task. We evaluate the proposed method on several publicly available expression datasets and obtain competitive results.
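The abstract gives no implementation details, so the sketch below shows only one generic way to fuse a CNN branch with a Transformer branch at the feature level; the layer sizes, the plain TransformerEncoder standing in for a real Swin Transformer, and the concatenation-based fusion are assumptions made for this example, not the architecture proposed in the paper.

import torch
import torch.nn as nn

class CnnTransformerFusion(nn.Module):
    """Illustrative CNN + Transformer feature-fusion classifier (not the paper's model)."""

    def __init__(self, num_classes: int = 7, embed_dim: int = 96):
        super().__init__()
        # CNN branch: local texture features, pooled to a single vector.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Transformer branch: patch embedding + encoder as a stand-in for Swin.
        self.patch_embed = nn.Conv2d(3, embed_dim, kernel_size=16, stride=16)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Fusion: concatenate the two global feature vectors, then classify.
        self.classifier = nn.Linear(64 + embed_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local_feat = self.cnn(x).flatten(1)                       # (B, 64)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)   # (B, N, D)
        global_feat = self.encoder(tokens).mean(dim=1)            # (B, D)
        return self.classifier(torch.cat([local_feat, global_feat], dim=1))

if __name__ == "__main__":
    model = CnnTransformerFusion()
    logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 7])

Fusing at the feature level, rather than stacking the two models sequentially, keeps either branch replaceable, for example by a pretrained Swin backbone.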
{"title":"Expression recognition method combining convolutional features and Transformer","authors":"Xiaoning Zhu, Zhongyi Li, Jian Sun","doi":"10.3934/mfc.2022018","DOIUrl":"https://doi.org/10.3934/mfc.2022018","url":null,"abstract":"Expression recognition has been an important research direction in the field of psychology, which can be used in traffic, medical, security, and criminal investigation by expressing human feelings through the muscles in the corners of the mouth, eyes, and face. Most of the existing research work uses convolutional neural networks (CNN) to recognize face images and thus classify expressions, which does achieve good results, but CNN do not have enough ability to extract global features. The Transformer has advantages for global feature extraction, but the Transformer is more computationally intensive and requires a large amount of training data. So, in this paper, we use the hierarchical Transformer, namely Swin Transformer, for the expression recognition task, and its computational power will be greatly reduced. At the same time, it is fused with a CNN model to propose a network architecture that combines the Transformer and CNN, and to the best of our knowledge, we are the first to combine the Swin Transformer with CNN and use it in an expression recognition task. We then evaluate the proposed method on some publicly available expression datasets and can obtain competitive results.","PeriodicalId":93334,"journal":{"name":"Mathematical foundations of computing","volume":"284 1","pages":"203-217"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86744988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the present article, we study a generalization of Szász operators based on Gould-Hopper polynomials. First, we estimate the rate of convergence of these operators in terms of the first- and second-order moduli of continuity. Then, we derive a Voronovskaya-type theorem for these operators. Lastly, we establish a Grüss-Voronovskaya type approximation theorem and a Grüss-Voronovskaya type asymptotic result in quantitative form.
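For reference, the Gould-Hopper polynomials entering such constructions are the ones defined by the standard generating function below; the specific Szász-Durrmeyer modification studied in the paper is not reproduced here.

% Standard generating function of the Gould-Hopper polynomials g_k^{d+1}(x, h);
% Szász-type operators built on them replace the classical weights (nx)^k / k!
% by terms involving g_k^{d+1}.
\[
e^{xt + h t^{\,d+1}} = \sum_{k=0}^{\infty} g_{k}^{d+1}(x,h)\,\frac{t^{k}}{k!} .
\]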
{"title":"On Szász-Durrmeyer type modification using Gould Hopper polynomials","authors":"Karunesh Singh, P. Agrawal","doi":"10.3934/mfc.2022011","DOIUrl":"https://doi.org/10.3934/mfc.2022011","url":null,"abstract":"In the present article, we study a generalization of Szász operators by Gould-Hopper polynomials. First, we obtain an estimate of error of the rate of convergence by these operators in terms of first order and second order moduli of continuity. Then, we derive a Voronovkaya-type theorem for these operators. Lastly, we derive Grüss-Voronovskaya type approximation theorem and Grüss-Voronovskaya type asymptotic result in quantitative form.","PeriodicalId":93334,"journal":{"name":"Mathematical foundations of computing","volume":"87 1","pages":"123-135"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84420470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rates of weighted statistical convergence for a generalization of positive linear operators","authors":"Reyhan Canatan Ilbey, O. Dogru","doi":"10.3934/mfc.2022059","DOIUrl":"https://doi.org/10.3934/mfc.2022059","url":null,"abstract":"","PeriodicalId":93334,"journal":{"name":"Mathematical foundations of computing","volume":"100 1","pages":"427-438"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76257438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On approximation of unbounded functions by certain modified Bernstein operators","authors":"R. Păltănea","doi":"10.3934/mfc.2023014","DOIUrl":"https://doi.org/10.3934/mfc.2023014","url":null,"abstract":"","PeriodicalId":93334,"journal":{"name":"Mathematical foundations of computing","volume":"5 1","pages":"512-519"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79628158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we study the learning performance of regularized large-margin unified machines (LUMs) for classification problems. The hypothesis space is taken to be a reproducing kernel Hilbert space ${\mathcal H}_K$, and the penalty term is the norm of the function in ${\mathcal H}_K$. Since the LUM loss functions are differentiable and convex, the data-piling phenomenon can be avoided when dealing with high-dimension, low-sample-size data. The error analysis of this classification learning machine mainly relies on the comparison theorem [3], which ensures that the excess classification error can be bounded by the excess generalization error. Under a mild source condition, which states that the minimizer $f_V$ of the generalization error can be approximated by the hypothesis space ${\mathcal H}_K$, and by a leave-one-out variant technique proposed in [13], a satisfactory error bound and a learning rate for the mean of the excess classification error are deduced.
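For orientation, the regularization scheme described here has the standard kernel-based form sketched below; this is the generic Tikhonov scheme in an RKHS, and the precise LUM loss family $V$ used in the paper is not written out.

% Hedged sketch: given a sample z = {(x_i, y_i)}_{i=1}^m with y_i in {-1, +1},
% a convex margin loss V and a regularization parameter lambda > 0, the
% estimator is
\[
f_{z,\lambda} = \arg\min_{f \in \mathcal{H}_K}
\left\{ \frac{1}{m}\sum_{i=1}^{m} V\bigl(y_i f(x_i)\bigr)
 + \lambda\,\|f\|_{K}^{2} \right\},
\]
% and the induced classifier is sgn(f_{z,\lambda}); the comparison theorem
% cited in the abstract bounds its excess classification error by the excess
% generalization error of f_{z,\lambda}.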
{"title":"Error analysis of classification learning algorithms based on LUMs loss","authors":"Xuqing He, Hongwei Sun","doi":"10.3934/mfc.2022028","DOIUrl":"https://doi.org/10.3934/mfc.2022028","url":null,"abstract":"<p style='text-indent:20px;'>In this paper, we study the learning performance of regularized large-margin unified machines (LUMs) for classification problem. The hypothesis space is taken to be a reproducing kernel Hilbert space <inline-formula><tex-math id=\"M1\">begin{document}$ {mathcal H}_K $end{document}</tex-math></inline-formula>, and the penalty term is denoted by the norm of the function in <inline-formula><tex-math id=\"M2\">begin{document}$ {mathcal H}_K $end{document}</tex-math></inline-formula>. Since the LUM loss functions are differentiable and convex, so the data piling phenomena can be avoided when dealing with the high-dimension low-sample size data. The error analysis of this classification learning machine mainly lies upon the comparison theorem [<xref ref-type=\"bibr\" rid=\"b3\">3</xref>] which ensures that the excess classification error can be bounded by the excess generalization error. Under a mild source condition which shows that the minimizer <inline-formula><tex-math id=\"M3\">begin{document}$ f_V $end{document}</tex-math></inline-formula> of the generalization error can be approximated by the hypothesis space <inline-formula><tex-math id=\"M4\">begin{document}$ {mathcal H}_K $end{document}</tex-math></inline-formula>, and by a leave one out variant technique proposed in [<xref ref-type=\"bibr\" rid=\"b13\">13</xref>], satisfying error bound and learning rate about the mean of excess classification error are deduced.</p>","PeriodicalId":93334,"journal":{"name":"Mathematical foundations of computing","volume":"78 1","pages":"616-624"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87095307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The primary goal of this paper is to present a generalization of $\lambda$-Bernstein operators with the assistance of a sequence of operators proposed by Mache and Zhou [24]. For these operators, I establish several approximation results using the second-order modulus of continuity, Lipschitz spaces, the Ditzian-Totik modulus of smoothness, and Voronovskaya-type asymptotic results. I also provide graphical comparisons, produced with Matlab, between the proposed operators and existing operators for better presentation and justification.
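For readers unfamiliar with the underlying family, a brief background sketch follows; it recalls the classical Bernstein operator and the role of the shape parameter $\lambda$, and does not reproduce the paper's Durrmeyer-type construction.

% Background (standard material, not the paper's operator): the classical
% Bernstein operator on C[0, 1] is
\[
B_{n}(f;x) = \sum_{k=0}^{n} \binom{n}{k} x^{k}(1-x)^{n-k}\,
f\!\left(\frac{k}{n}\right), \qquad x \in [0,1];
\]
% lambda-Bernstein operators replace the basis functions
% binom(n,k) x^k (1-x)^{n-k} by Bezier-type bases depending on a shape
% parameter lambda in [-1, 1]; the paper's Durrmeyer-type family adds further
% parameters on top of that construction.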
{"title":"The family of $ lambda $-Bernstein-Durrmeyer operators based on certain parameters","authors":"Ram Pratap","doi":"10.3934/mfc.2022038","DOIUrl":"https://doi.org/10.3934/mfc.2022038","url":null,"abstract":"<p style='text-indent:20px;'>The primary goal of this paper is to present the generalization of <inline-formula><tex-math id=\"M2\">begin{document}$ lambda $end{document}</tex-math></inline-formula>-Bernstein operators with the assistance of a sequence of operators proposed by Mache and Zhou [<xref ref-type=\"bibr\" rid=\"b24\">24</xref>]. For these operators, I establish some approximation results using second-order modulus of continuity, Lipschitz space, Ditzian-Totik modulus of smoothness, and Voronovskaya type asymptotic results. I also indicate some graphical comparisons of my operators among existing operators for better presentation and justification using Matlab.</p>","PeriodicalId":93334,"journal":{"name":"Mathematical foundations of computing","volume":"10 1","pages":"546-557"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74024602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}