Advances in Computational Intelligence: 21st Mexican International Conference on Artificial Intelligence, MICAI 2022, Monterrey, Mexico, October 24–29, 2022, Proceedings, Part I
Pub Date: 2022-01-01 | DOI: 10.1007/978-3-031-19493-1
{"title":"Advances in Computational Intelligence: 21st Mexican International Conference on Artificial Intelligence, MICAI 2022, Monterrey, Mexico, October 24–29, 2022, Proceedings, Part I","authors":"","doi":"10.1007/978-3-031-19493-1","DOIUrl":"https://doi.org/10.1007/978-3-031-19493-1","url":null,"abstract":"","PeriodicalId":72089,"journal":{"name":"Advances in computational intelligence","volume":"51 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87132071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Advances in Computational Intelligence: 21st Mexican International Conference on Artificial Intelligence, MICAI 2022, Monterrey, Mexico, October 24–29, 2022, Proceedings, Part II
Pub Date: 2022-01-01 | DOI: 10.1007/978-3-031-19496-2
{"title":"Advances in Computational Intelligence: 21st Mexican International Conference on Artificial Intelligence, MICAI 2022, Monterrey, Mexico, October 24–29, 2022, Proceedings, Part II","authors":"","doi":"10.1007/978-3-031-19496-2","DOIUrl":"https://doi.org/10.1007/978-3-031-19496-2","url":null,"abstract":"","PeriodicalId":72089,"journal":{"name":"Advances in computational intelligence","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89961185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A support vector approach based on penalty function method
Pub Date: 2021-12-17 | DOI: 10.1007/s43674-021-00026-4
Songfeng Zheng
Support vector machine (SVM) models are usually trained by solving the dual quadratic programming problem, which is time consuming. Using the idea of the penalty function method from optimization theory, this paper combines the objective function and the constraints of the dual into an unconstrained optimization problem, which can be solved by a generalized Newton method, yielding an approximate solution to the original model. Extensive pattern-classification experiments were conducted; compared to the quadratic programming-based models, the proposed approach is much more computationally efficient (tens to hundreds of times faster) and yields similar performance in terms of the receiver operating characteristic curve. Furthermore, the proposed method and the quadratic programming-based models extract almost the same set of support vectors.
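As a concrete illustration of the general idea, the sketch below penalizes the dual's box and equality constraints quadratically and applies a generalized Newton iteration; the exact objective, the penalty weight rho, and the stopping rule are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def svm_dual_penalty(K, y, C=1.0, rho=10.0, tol=1e-6, max_iter=50):
    """Sketch: solve an SVM dual approximately by penalizing its
    constraints and applying a generalized Newton iteration.
    Hypothetical formulation -- not the paper's exact objective.

    minimize  f(a) = 0.5 a^T Q a - 1^T a
              + 0.5*rho * [ (y^T a)^2               # equality penalty
                            + ||max(0, -a)||^2      # a_i >= 0
                            + ||max(0, a - C)||^2 ] # a_i <= C
    """
    n = len(y)
    Q = (y[:, None] * y[None, :]) * K      # Q_ij = y_i y_j K(x_i, x_j)
    a = np.zeros(n)
    for _ in range(max_iter):
        lo = np.maximum(0.0, -a)           # violations of a >= 0
        hi = np.maximum(0.0, a - C)        # violations of a <= C
        grad = Q @ a - 1.0 + rho * ((y @ a) * y - lo + hi)
        # The piecewise-quadratic penalty is not twice differentiable,
        # so use a generalized Hessian (diagonal constraint indicators).
        D = np.diag(rho * ((a < 0) | (a > C)).astype(float))
        H = Q + rho * np.outer(y, y) + D + 1e-8 * np.eye(n)
        step = np.linalg.solve(H, grad)
        a -= step
        if np.linalg.norm(step) < tol:
            break
    return np.clip(a, 0.0, C)              # project back to the box
```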
{"title":"A support vector approach based on penalty function method","authors":"Songfeng Zheng","doi":"10.1007/s43674-021-00026-4","DOIUrl":"10.1007/s43674-021-00026-4","url":null,"abstract":"<div><p>Support vector machine (SVM) models are usually trained by solving the dual of a quadratic programming, which is time consuming. Using the idea of penalty function method from optimization theory, this paper combines the objective function and the constraints in the dual, obtaining an unconstrained optimization problem, which could be solved by a generalized Newton method, yielding an approximate solution to the original model. Extensive experiments on pattern classification were conducted, and compared to the quadratic programming-based models, the proposed approach is much more computationally efficient (tens to hundreds of times faster) and yields similar performance in terms of receiver operating characteristic curve. Furthermore, the proposed method and quadratic programming-based models extract almost the same set of support vectors.</p></div>","PeriodicalId":72089,"journal":{"name":"Advances in computational intelligence","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43674-021-00026-4.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50488923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Solutions of Yang Baxter equation of symplectic Jordan superalgebras
Pub Date: 2021-12-17 | DOI: 10.1007/s43674-021-00017-5
Amir Baklouti, Warda Bensalah, Khaled Al-Motairi
In this paper, we establish the equivalence between the existence of a solution of the Yang–Baxter equation for a Jordan superalgebra and the existence of a symplectic form on that Jordan superalgebra.
{"title":"Solutions of Yang Baxter equation of symplectic Jordan superalgebras","authors":"Amir Baklouti, Warda Bensalah, Khaled Al-Motairi","doi":"10.1007/s43674-021-00017-5","DOIUrl":"10.1007/s43674-021-00017-5","url":null,"abstract":"<div><p>We establish in this paper the equivalence between the existence of a solution of the Yang Baxter equation of a Jordan superalgebras and that of symplectic form on Jordan superalgebras.</p></div>","PeriodicalId":72089,"journal":{"name":"Advances in computational intelligence","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50488898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An eigenvector approach for obtaining scale and orientation invariant classification in convolutional neural networks
Pub Date: 2021-12-17 | DOI: 10.1007/s43674-021-00023-7
Swetha Velluva Chathoth, Asish Kumar Mishra, Deepak Mishra, Subrahmanyam Gorthi R. K. Sai
Convolutional neural networks are well known for their efficiency in detecting and classifying objects once adequately trained. Though they address shift invariance up to a limit, appreciable rotation and scale invariance is not guaranteed by many existing CNN architectures, making them sensitive to rotation and scale variations of the input image or feature maps. Many attempts have been made to acquire rotation and scale invariance in CNNs. In this paper, an efficient approach is proposed for incorporating rotation and scale invariance into CNN-based classification, based on the eigenvectors and eigenvalues of the image covariance matrix. Without demanding any training-data augmentation or CNN architectural change, the proposed method, 'Scale and Orientation Corrected Networks (SOCN)', achieves better rotation- and scale-invariant performance. SOCN adds a scale and orientation correction step for images before baseline CNN training and testing. Being a generalized approach, SOCN can be combined with any baseline CNN to improve its rotation and scale invariance. We demonstrate the proposed approach's scale- and orientation-invariant classification ability in several real cases, ranging from scale- and orientation-invariant character recognition to orientation-invariant image classification, with different suitable baseline architectures. Though simple, SOCN outperforms current state-of-the-art scale- and orientation-invariant classifiers with minimal training and testing time.
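One plausible reading of the correction step is to treat pixel intensities as a 2-D mass distribution and normalize orientation and scale from the eigendecomposition of the coordinate covariance matrix. The sketch below follows that reading; the target-scale definition (target_sigma) and interpolation choices are assumptions, not SOCN's exact procedure.

```python
import numpy as np
from scipy import ndimage

def orientation_scale_correct(img, target_sigma=32.0):
    """Sketch of covariance-eigenvector normalization in the spirit of
    SOCN (the scale definition and sign conventions are assumptions).
    Rotates the image to cancel its dominant intensity axis and rescales
    it so the leading standard deviation matches target_sigma."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    w = img.astype(float) / (img.sum() + 1e-12)          # intensity weights
    my, mx = (w * ys).sum(), (w * xs).sum()              # centroid
    dy, dx = ys - my, xs - mx
    cov = np.array([[(w * dx * dx).sum(), (w * dx * dy).sum()],
                    [(w * dx * dy).sum(), (w * dy * dy).sum()]])
    vals, vecs = np.linalg.eigh(cov)                     # ascending eigenvalues
    v = vecs[:, -1]                                      # leading eigenvector
    angle = np.degrees(np.arctan2(v[1], v[0]))           # dominant orientation
    zoom = target_sigma / np.sqrt(vals[-1] + 1e-12)      # scale factor
    # Rotate to cancel the estimated orientation (the sign may need
    # flipping depending on the image coordinate convention).
    img = ndimage.rotate(img, angle, reshape=False, order=1)
    return ndimage.zoom(img, zoom, order=1)
```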
{"title":"An eigenvector approach for obtaining scale and orientation invariant classification in convolutional neural networks","authors":"Swetha Velluva Chathoth, Asish Kumar Mishra, Deepak Mishra, Subrahmanyam Gorthi R. K. Sai","doi":"10.1007/s43674-021-00023-7","DOIUrl":"10.1007/s43674-021-00023-7","url":null,"abstract":"<div><p>The convolution neural networks are well known for their efficiency in detecting and classifying objects once adequately trained. Though they address shift in-variance up to a limit, appreciable rotation and scale in-variances are not guaranteed by many of the existing CNN architectures, making them sensitive towards input image or feature map rotation and scale variations. Many attempts have been made in the past to acquire rotation and scale in-variances in CNNs. In this paper, an efficient approach is proposed for incorporating rotation and scale in-variances in CNN-based classifications, based on eigenvectors and eigenvalues of the image covariance matrix. Without demanding any training data augmentation or CNN architectural change, the proposed method, <b>‘Scale and Orientation Corrected Networks (SOCN)’</b>, achieves better rotation and scale-invariant performances. <b>SOCN</b> proposes a scale and orientation correction step for images before baseline CNN training and testing. Being a generalized approach, <b>SOCN</b> can be combined with any baseline CNN to improve its rotational and scale in-variance performances. We demonstrate the proposed approach’s scale and orientation invariant classification ability with several real cases ranging from scale and orientation invariant character recognition to orientation invariant image classification, with different suitable baseline architectures. The proposed approach of <b>SOCN</b>, though is simple, outperforms the current state of the art scale and orientation invariant classifiers comparatively with minimal training and testing time.</p></div>","PeriodicalId":72089,"journal":{"name":"Advances in computational intelligence","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43674-021-00023-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50488900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BCK codes
Pub Date: 2021-12-17 | DOI: 10.1007/s43674-021-00018-4
Hashem Bordbar
In this paper, we initiate the study of the notion of a BCK-function on an arbitrary set A, and provide connections with x-functions and x-subsets for x ∈ X, where X is a BCK-algebra. Moreover, using the notion of order in a BCK-algebra, the BCK-code C is introduced, and a new order structure on C is investigated. Finally, we show that the BCK-algebra X and the BCK-code C generated by X, together with their related orders, have the same structure.
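For readers unfamiliar with the underlying structure, the snippet below checks the standard BCK axioms on a finite set and illustrates the induced order x ≤ y iff x * y = 0; it is a generic checker, not the paper's code construction.

```python
from itertools import product

def is_bck_algebra(X, star):
    """Check the standard BCK axioms on a finite set X (containing 0)
    with binary operation star(x, y); the induced order is
    x <= y  iff  star(x, y) == 0."""
    zero = 0
    for x, y, z in product(X, repeat=3):
        if star(star(star(x, y), star(x, z)), star(z, y)) != zero:
            return False                  # ((x*y)*(x*z))*(z*y) = 0
        if star(star(x, star(x, y)), y) != zero:
            return False                  # (x*(x*y))*y = 0
    for x, y in product(X, repeat=2):
        if star(x, y) == zero and star(y, x) == zero and x != y:
            return False                  # antisymmetry of the order
    return all(star(x, x) == zero and star(zero, x) == zero for x in X)

# Example: {0, 1} with truncated subtraction x*y = max(x - y, 0).
print(is_bck_algebra([0, 1], lambda x, y: max(x - y, 0)))  # True
```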
{"title":"BCK codes","authors":"Hashem Bordbar","doi":"10.1007/s43674-021-00018-4","DOIUrl":"10.1007/s43674-021-00018-4","url":null,"abstract":"<div><p>In this paper, we initiate the study of the notion of the <i>BCK</i>-function on an arbitrary set <i>A</i>, and providing connections with <i>x</i>-functions and <i>x</i>-subsets for <span>(x in X)</span> where <i>X</i> is a <i>BCK</i>-algebra. Moreover, using the notion of order in a <i>BCK</i>-algebra, the <i>BCK</i>-code <i>C</i> is introduced and besides a new structure of order in <i>C</i> is investigated. Finally, we show that the structure of the <i>BCK</i>-algebra <i>X</i> and the <i>BCK</i>-code <i>C</i> which is generated by <i>X</i>, with their related orders are the same.</p></div>","PeriodicalId":72089,"journal":{"name":"Advances in computational intelligence","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43674-021-00018-4.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50488925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Caristi type mappings and characterization of completeness of Archimedean type fuzzy metric spaces
Pub Date: 2021-12-17 | DOI: 10.1007/s43674-021-00014-8
J. Martínez-Moreno, D. Gopal, Vladimir Rakočević, A. S. Ranadive, R. P. Pant
This paper deals with some fixed-point issues concerning Caristi-type mappings, introduced by Abbasi and Golshan (Kybernetika 52:929–942, 2016), in fuzzy metric spaces. We enlarge this class of mappings and prove a completeness characterization of the corresponding fuzzy metric spaces. The paper includes a comprehensive set of examples showing the generality of our results, as well as an open question.
{"title":"Caristi type mappings and characterization of completeness of Archimedean type fuzzy metric spaces","authors":"J. Martínez-Moreno, D. Gopal, Vladimir Rakočević, A. S. Ranadive, R. P. Pant","doi":"10.1007/s43674-021-00014-8","DOIUrl":"10.1007/s43674-021-00014-8","url":null,"abstract":"<div><p>This paper deals with some issues of fixed point concerning Caristi type mappings introduced by Abbasi and Golshan (Kybernetika 52:929–942, 2016) in fuzzy metric spaces. We enlarge this class of mappings and prove completeness characterization of corresponding fuzzy metric space. The paper includes a comprehensive set of examples showing the generality of our results and an open question.</p></div>","PeriodicalId":72089,"journal":{"name":"Advances in computational intelligence","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50488896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feature selection based on min-redundancy and max-consistency
Pub Date: 2021-12-17 | DOI: 10.1007/s43674-021-00021-9
Yanting Guo, Meng Hu, Eric C. C. Tsang, Degang Chen, Weihua Xu
Feature selection can effectively eliminate irrelevant or redundant features without changing feature semantics, thereby improving learning performance and reducing training time. In most existing feature selection methods based on rough sets, eliminating features that are redundant with respect to the decision and deleting features that are redundant with respect to one another are performed separately, which greatly increases the search time for a feature subset. To remove redundant features quickly, we define a series of feature evaluation functions that consider both the consistency between features and decisions and the redundancy between features, and then propose a novel feature selection method based on min-redundancy and max-consistency. First, we define the consistency of features with respect to decisions and the redundancy between features from neighborhood information granules. Then we propose a combined criterion to measure the importance of features and design a feature selection algorithm based on minimal redundancy and maximal consistency (mRMC). Finally, on UCI data sets, mRMC is compared with three other popular neighborhood-based feature selection algorithms in terms of classification accuracy, number of selected features, and running time. The experimental comparison shows that mRMC can quickly delete redundant features and select useful ones while maintaining classification accuracy.
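The greedy skeleton below illustrates the kind of combined criterion described: at each step it adds the feature whose neighborhood consistency with the decision, minus its redundancy with the already-selected features, is largest. The specific purity-based consistency measure and the use of correlation as the redundancy stand-in are assumptions for illustration; the paper defines both from neighborhood information granules.

```python
import numpy as np

def neighborhood(X, i, delta):
    """Indices of samples within delta of sample i (Euclidean)."""
    return np.where(np.linalg.norm(X - X[i], axis=1) <= delta)[0]

def consistency(Xf, y, delta=0.2):
    """Fraction of samples whose delta-neighborhood in the feature
    subspace Xf is pure in the decision y (assumes features scaled
    to [0, 1]); one simple notion of neighborhood consistency."""
    return np.mean([len(set(y[neighborhood(Xf, i, delta)])) == 1
                    for i in range(len(y))])

def mrmc_select(X, y, k, delta=0.2):
    """Greedy sketch of min-redundancy / max-consistency selection."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < k:
        def score(f):
            cons = consistency(X[:, selected + [f]], y, delta)
            red = np.mean([abs(np.corrcoef(X[:, f], X[:, s])[0, 1])
                           for s in selected]) if selected else 0.0
            return cons - red              # combined criterion
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```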
{"title":"Feature selection based on min-redundancy and max-consistency","authors":"Yanting Guo, Meng Hu, Eric C. C. Tsang, Degang Chen, Weihua Xu","doi":"10.1007/s43674-021-00021-9","DOIUrl":"10.1007/s43674-021-00021-9","url":null,"abstract":"<div><p>Feature selection can effectively eliminate irrelevant or redundant features without changing features semantics, so as to improve the performance of learning and reduce the training time. In most of the existing feature selection methods based on rough sets, eliminating the redundant features between features and decisions, and deleting the redundant features between features are performed separately. This will greatly increase the search time of feature subset. To quickly remove redundant features, we define a series of feature evaluation functions that consider both the consistency between features and decisions, and redundancy between features, then propose a novel feature selection method based on min-redundancy and max-consistency. Firstly, we define the consistency of features with respect to decisions and the redundancy between features from neighborhood information granules. Then we propose a combined criterion to measure the importance of features and design a feature selection algorithm based on minimal-redundancy-maximal-consistency (mRMC). Finally, on UCI data sets, mRMC is compared with three other popular feature selection algorithms based on neighborhood idea, from classification accuracy, the number of selected features and running time. The experimental comparison shows that mRMC can quickly delete redundant features and select useful features while ensuring classification accuracy.</p></div>","PeriodicalId":72089,"journal":{"name":"Advances in computational intelligence","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43674-021-00021-9.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50488902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A hybrid monotone decision tree model for interval-valued attributes
Pub Date: 2021-12-17 | DOI: 10.1007/s43674-021-00016-6
Jiankai Chen, Zhongyan Li, Xin Wang, Junhai Zhai
Existing monotonic decision tree algorithms are based on a linear-ordering constraint under which certain attributes, called monotonic attributes, are monotonically consistent with the decision, whereas the remaining ones are called non-monotonic attributes. In practice, monotonic and non-monotonic attributes coexist in most classification tasks, and some attribute values are even given as interval numbers. In this paper, we propose a fuzzy rank-inconsistency rate based on probability degree to judge the monotonicity of interval numbers. Furthermore, we devise a hybrid model composed of monotonic and non-monotonic attributes to construct a mixed monotone decision tree for interval-valued data. Experiments on artificial and real-world data sets show that the proposed hybrid model is effective.
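Rank-inconsistency measures of this kind typically build on a probability (possibility) degree for comparing two intervals. The snippet below shows one standard choice from the interval-comparison literature; the paper's exact measure is not reproduced here.

```python
def prob_degree(a, b):
    """Possibility degree P(a >= b) for intervals a = (a1, a2) and
    b = (b1, b2) -- a common definition, assumed for illustration:
        P(a >= b) = clip((a2 - b1) / (len(a) + len(b)), 0, 1)."""
    a1, a2 = a
    b1, b2 = b
    la, lb = a2 - a1, b2 - b1
    if la + lb == 0:                      # both degenerate (point) intervals
        return 1.0 if a1 >= b1 else 0.0
    return min(max((a2 - b1) / (la + lb), 0.0), 1.0)

print(prob_degree((1, 3), (2, 4)))        # 0.25: a tends to lie below b
```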
{"title":"A hybrid monotone decision tree model for interval-valued attributes","authors":"Jiankai Chen, Zhongyan Li, Xin Wang, Junhai Zhai","doi":"10.1007/s43674-021-00016-6","DOIUrl":"10.1007/s43674-021-00016-6","url":null,"abstract":"<div><p>The existing monotonic decision tree algorithms are based on a linearly ordered constraint that certain attributes are monotonously consistent with the decision, which could be called monotonic attributes, whereas others, called non-monotonic attributes. In practice, monotonic and non-monotonic attributes coexist in most classification tasks, and some attribute values are even evaluated as interval numbers. In this paper, we proposed a fuzzy rank-inconsistent rate based on probability degree to judge the monotonicity of interval numbers. Furthermore, we devised a hybrid model composed of monotonic and non-monotonic attributes to construct a mixed monotone decision tree for interval-valued data. Experiments on artificial and real-world data sets show that the proposed hybrid model is effective.</p></div>","PeriodicalId":72089,"journal":{"name":"Advances in computational intelligence","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43674-021-00016-6.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50488897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward durable representations for continual learning
Pub Date: 2021-12-17 | DOI: 10.1007/s43674-021-00022-8
Alaa El Khatib, Fakhri Karray
Continual learning models are known to suffer from catastrophic forgetting. Existing regularization methods for countering forgetting operate by penalizing large changes to learned parameters. A significant downside to these methods, however, is that, by effectively freezing model parameters, they gradually suspend the capacity of a model to learn new tasks. In this paper, we explore an alternative approach to the continual learning problem that aims to circumvent this downside. In particular, we ask: instead of forcing continual learning models to remember the past, can we modify the learning process from the start so that the learned representations are less susceptible to forgetting? To this end, we explore multiple methods that could potentially encourage durable representations. We demonstrate empirically that the use of unsupervised auxiliary tasks achieves a significant reduction in parameter re-optimization across tasks, and consequently reduces forgetting, without explicitly penalizing forgetting. Moreover, we propose a distance metric to track internal model dynamics across tasks, and use it to gain insight into the workings of our proposed approach, as well as other recently proposed methods.
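One plausible instantiation of an unsupervised auxiliary task is joint training of a shared encoder with a supervised head and a reconstruction head; the architecture and loss weighting below are assumptions for illustration, not the paper's exact setup.

```python
import torch
import torch.nn as nn

class AuxTaskNet(nn.Module):
    """Sketch: a shared encoder trained with both a supervised head and
    an unsupervised reconstruction head -- one hypothetical instance of
    the auxiliary tasks discussed above."""
    def __init__(self, in_dim=784, hid=256, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.classifier = nn.Linear(hid, n_classes)
        self.decoder = nn.Linear(hid, in_dim)   # auxiliary reconstruction

    def forward(self, x):
        z = self.encoder(x)
        return self.classifier(z), self.decoder(z)

def loss_fn(logits, recon, x, y, lam=0.5):
    # Supervised loss plus unsupervised auxiliary loss; the auxiliary
    # term shapes the representation without penalizing parameter change.
    return (nn.functional.cross_entropy(logits, y)
            + lam * nn.functional.mse_loss(recon, x))
```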
{"title":"Toward durable representations for continual learning","authors":"Alaa El Khatib, Fakhri Karray","doi":"10.1007/s43674-021-00022-8","DOIUrl":"10.1007/s43674-021-00022-8","url":null,"abstract":"<div><p>Continual learning models are known to suffer from <i>catastrophic forgetting</i>. Existing regularization methods to countering forgetting operate by penalizing large changes to learned parameters. A significant downside to these methods, however, is that, by effectively freezing model parameters, they gradually suspend the capacity of a model to learn new tasks. In this paper, we explore an alternative approach to the continual learning problem that aims to circumvent this downside. In particular, we ask the question: instead of forcing continual learning models to remember the past, can we modify the learning process from the start, such that the learned representations are less susceptible to forgetting? To this end, we explore multiple methods that could potentially encourage durable representations. We demonstrate empirically that the use of unsupervised auxiliary tasks achieves significant reduction in parameter re-optimization across tasks, and consequently reduces forgetting, without explicitly penalizing forgetting. Moreover, we propose a distance metric to track internal model dynamics across tasks, and use it to gain insight into the workings of our proposed approach, as well as other recently proposed methods.</p></div>","PeriodicalId":72089,"journal":{"name":"Advances in computational intelligence","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43674-021-00022-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50488901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}