Usage of the Kullback–Leibler divergence on posterior Dirichlet distributions to create a training dataset for a learning algorithm to classify driving behaviour events
Pub Date : 2023-08-01 | DOI: 10.1016/j.jcmds.2023.100081
M. Cesarini , E. Brentegani , G. Ceci , F. Cerreta , D. Messina , F. Petrarca , M. Robutti
Information theory uses the Kullback–Leibler divergence to compare distributions. In this paper, we apply it to Bayesian posterior distributions and show how it can also be used to train a machine learning algorithm. The data sample used in this study is a set of driving behaviour data from OCTO Telematics.
{"title":"Usage of the Kullback–Leibler divergence on posterior Dirichlet distributions to create a training dataset for a learning algorithm to classify driving behaviour events","authors":"M. Cesarini , E. Brentegani , G. Ceci , F. Cerreta , D. Messina , F. Petrarca , M. Robutti","doi":"10.1016/j.jcmds.2023.100081","DOIUrl":"https://doi.org/10.1016/j.jcmds.2023.100081","url":null,"abstract":"<div><p>Information theory uses the Kullback–Leibler divergence to compare distributions. In this paper, we apply it to bayesian posterior distributions and we show how it can be used to train a machine learning algorithm as well. The data sample used in this study is an OCTOTelematics set of driving behaviour data.</p></div>","PeriodicalId":100768,"journal":{"name":"Journal of Computational Mathematics and Data Science","volume":"8 ","pages":"Article 100081"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50194983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Solving the Dirichlet problem for the Monge–Ampère equation using neural networks
Pub Date : 2023-08-01 | DOI: 10.1016/j.jcmds.2023.100080
Kaj Nyström, Matias Vestberg
The Monge–Ampère equation is a fully nonlinear partial differential equation (PDE) of fundamental importance in analysis, geometry and the applied sciences. In this paper we solve the Dirichlet problem associated with the Monge–Ampère equation using neural networks, and we show that an ansatz using deep input convex neural networks can be used to find the unique convex solution. As part of our analysis we study the effect of singularities, discontinuities and noise in the source function, we consider nontrivial domains, and we investigate how the method performs in higher dimensions. We investigate the convergence numerically and present error estimates based on a stability result. We also compare this method to an alternative approach in which standard feed-forward networks are used together with a loss function which penalizes lack of convexity.
{"title":"Solving the Dirichlet problem for the Monge–Ampère equation using neural networks","authors":"Kaj Nyström, Matias Vestberg","doi":"10.1016/j.jcmds.2023.100080","DOIUrl":"https://doi.org/10.1016/j.jcmds.2023.100080","url":null,"abstract":"<div><p>The Monge–Ampère equation is a full y nonlinear partial differential equation (PDE) of fundamental importance in analysis, geometry and in the applied sciences. In this paper we solve the Dirichlet problem associated with the Monge–Ampère equation using neural networks and we show that an ansatz using deep input convex neural networks can be used to find the unique convex solution. As part of our analysis we study the effect of singularities, discontinuities and noise in the source function, we consider nontrivial domains, and we investigate how the method performs in higher dimensions. We investigate the convergence numerically and present error estimates based on a stability result. We also compare this method to an alternative approach in which standard feed-forward networks are used together with a loss function which penalizes lack of convexity.</p></div>","PeriodicalId":100768,"journal":{"name":"Journal of Computational Mathematics and Data Science","volume":"8 ","pages":"Article 100080"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50194984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Steffensen type optimal eighth order multiple root finding scheme for nonlinear equations
Pub Date : 2023-06-01 | DOI: 10.1016/j.jcmds.2023.100079
Fiza Zafar , Sofia Iqbal , Tahira Nawaz
In this study, we introduce a novel weight function-based eighth order derivative-free method for locating repeated roots of nonlinear equations. It is a three-step Steffensen-type scheme with first order divided differences in place of the first order derivatives. It is noteworthy that only a few eighth order derivative-free multiple root finding schemes exist in the literature so far. Various standard and application-based nonlinear functions are used to demonstrate the applicability of the suggested approach and to confirm its strong convergence tendency. Basins of attraction drawn over graphical regions demonstrate how the proposed family of methods converges.
{"title":"A Steffensen type optimal eighth order multiple root finding scheme for nonlinear equations","authors":"Fiza Zafar , Sofia Iqbal , Tahira Nawaz","doi":"10.1016/j.jcmds.2023.100079","DOIUrl":"https://doi.org/10.1016/j.jcmds.2023.100079","url":null,"abstract":"<div><p>In this study, we introduce a novel weight function-based eighth order derivative-free method for locating repeated roots of nonlinear equations. It is a three-step Steffensen-type scheme with first order divided differences in place of the first order derivatives. It is noteworthy that so far only few eighth order derivative free multiple root finding scheme exist in literature. Different nonlinear standard and applications based nonlinear functions are used to demonstrate the applicability of the suggested approach and to confirm its strong convergence tendency. Drawing basins of attraction on the graphical regions demonstrates how the offered family of approaches converge.</p></div>","PeriodicalId":100768,"journal":{"name":"Journal of Computational Mathematics and Data Science","volume":"7 ","pages":"Article 100079"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49865402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Deep Interpretable Features
Pub Date : 2023-01-01 | DOI: 10.1016/j.jcmds.2022.100067
Robert Hu, Dino Sejdinovic
The problem of interpretability for binary image classification is considered through the lens of kernel two-sample tests and generative modeling. A feature extraction framework coined Deep Interpretable Features is developed, which is used in combination with IntroVAE, a generative model capable of high-resolution image synthesis. Experimental results on a variety of datasets, including COVID-19 chest X-rays, demonstrate the benefits of combining deep generative models with ideas from kernel-based hypothesis testing in moving towards more robust, interpretable deep generative models.
{"title":"Towards Deep Interpretable Features","authors":"Robert Hu, Dino Sejdinovic","doi":"10.1016/j.jcmds.2022.100067","DOIUrl":"https://doi.org/10.1016/j.jcmds.2022.100067","url":null,"abstract":"<div><p>The problem of interpretability for binary image classification is considered through the lens of kernel two-sample tests and generative modeling. A feature extraction framework coined <span>Deep Interpretable Features</span><svg><path></path></svg> is developed, which is used in combination with IntroVAE, a generative model capable of high-resolution image synthesis. Experimental results on a variety of datasets, including COVID-19 chest x-rays demonstrate the benefits of combining deep generative models with the ideas from kernel-based hypothesis testing in moving towards more robust interpretable deep generative models.</p></div>","PeriodicalId":100768,"journal":{"name":"Journal of Computational Mathematics and Data Science","volume":"6 ","pages":"Article 100067"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50188314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distance geometry for word representations and applications
Pub Date : 2023-01-01 | DOI: 10.1016/j.jcmds.2022.100073
Sammy Khalife , Douglas S. Gonçalves , Leo Liberti
Many machine learning methods used for the treatment of sequential data rely on the construction of vector representations of unitary entities (e.g. words in natural language processing, or k-mers in bioinformatics). Traditionally, these representations are constructed with optimization formulations arising from co-occurrence based models. In this work, we propose a new method to embed these entities based on the Distance Geometry Problem: find object positions based on a subset of their pairwise distances or inner products. Considering the empirical Pointwise Mutual Information as a surrogate for the inner product, we discuss two Distance Geometry based algorithms to obtain word vector representations. The main advantage of such algorithms is their significantly lower computational complexity in comparison with state-of-the-art word embedding methods, which allows us to obtain word vector representations much faster. Furthermore, numerical experiments indicate that our word vectors behave quite well on text classification tasks in natural language processing as well as regression tasks in bioinformatics.
{"title":"Distance geometry for word representations and applications","authors":"Sammy Khalife , Douglas S. Gonçalves , Leo Liberti","doi":"10.1016/j.jcmds.2022.100073","DOIUrl":"https://doi.org/10.1016/j.jcmds.2022.100073","url":null,"abstract":"<div><p>Many machine learning methods used for the treatment of sequential data often rely on the construction of vector representations of unitary entities (e.g. words in natural language processing, or <span><math><mi>k</mi></math></span>-mers in bioinformatics). Traditionally, these representations are constructed with optimization formulations arising from co-occurrence based models. In this work, we propose a new method to embed these entities based on the Distance Geometry Problem: find object positions based on a subset of their pairwise distances or inner products. Considering the empirical Pointwise Mutual Information as a surrogate for the inner product, we discuss two Distance Geometry based algorithms to obtain word vector representations. The main advantage of such algorithms is their significantly lower computational complexity in comparison with state-of-the-art word embedding methods, which allows us to obtain word vector representations much faster. Furthermore, numerical experiments indicate that our word vectors behave quite well on text classification tasks in natural language processing as well as regression tasks in bioinformatics.</p></div>","PeriodicalId":100768,"journal":{"name":"Journal of Computational Mathematics and Data Science","volume":"6 ","pages":"Article 100073"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50188316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Complex dynamics of two prey–one predator model together with fear effect and harvesting efforts in preys
Pub Date : 2023-01-01 | DOI: 10.1016/j.jcmds.2022.100071
Ashok Mondal , A.K. Pal , G.P. Samanta
In this paper, a two prey–one predator model with different types of growth rate and mixed functional responses is proposed and analysed. Moreover, we consider anti-predation behaviour and constant harvesting effort in both prey populations. The positivity and boundedness of the system are studied, and the criteria for the extinction of the predator–prey populations are discussed. Analytically, we study the criteria for the existence and stability of the different equilibrium points. In addition, we derive sufficient conditions for local bifurcations such as transcritical and Hopf bifurcations. We show that the effect of fear not only reduces the prey populations but also decreases the growth rate of the predator population. Computer simulations are performed to validate our analytical results. The biological implications of the analytical and numerical results are critically discussed.
{"title":"Complex dynamics of two prey–one predator model together with fear effect and harvesting efforts in preys","authors":"Ashok Mondal , A.K. Pal , G.P. Samanta","doi":"10.1016/j.jcmds.2022.100071","DOIUrl":"https://doi.org/10.1016/j.jcmds.2022.100071","url":null,"abstract":"<div><p>In this paper, a two prey–one predator model with different types of growth rate and mixed functional responses is proposed and analysed. Moreover, we considered anti-predation behaviour and constant harvesting effort in both the prey populations. The positivity and boundedness of the system are studied. The criteria for the extinction of the predator–prey populations are discussed. Analytically, we have studied the criteria for existence and stability of different equilibrium points. In addition, we have derived sufficient conditions for local bifurcations such as transcritical and Hopf bifurcation. We discussed that the effect of fear not only reduces prey populations, but also decreases the rate of growth of the predator population. Computer simulations are performed to validate our analytical results. The biological implications of analytical and numerical results are critically discussed.</p></div>","PeriodicalId":100768,"journal":{"name":"Journal of Computational Mathematics and Data Science","volume":"6 ","pages":"Article 100071"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50188313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Newton’s Rule of signs
Pub Date : 2023-01-01 | DOI: 10.1016/j.jcmds.2023.100076
Emil M. Prodanov
Analysing the cubic sectors of a real polynomial of degree n, a modification of Newton’s Rule of signs is proposed with which a stricter upper bound on the number of real roots can be found. A new necessary condition for the reality of the roots of a polynomial is also proposed. A relationship between the quadratic elements of the polynomial is established through its roots and those of its derivatives. Some aspects of polynomial discriminants are also discussed: the relationship between the discriminants of real polynomials, the discriminants of their derivatives, and the quadratic elements, following a “discriminant of the discriminant” approach.
{"title":"On Newton’s Rule of signs","authors":"Emil M. Prodanov","doi":"10.1016/j.jcmds.2023.100076","DOIUrl":"https://doi.org/10.1016/j.jcmds.2023.100076","url":null,"abstract":"<div><p>Analysing the <em>cubic sectors</em> of a real polynomial of degree <span><math><mi>n</mi></math></span>, a modification of Newton’s Rule of signs is proposed with which stricter upper bound on the number of real roots can be found. A new necessary condition for reality of the roots of a polynomial is also proposed. Relationship between the quadratic elements of the polynomial is established through its roots and those of its derivatives. Some aspects of polynomial discriminants are also discussed — the relationship between the discriminants of real polynomials, the discriminants of their derivatives, and the quadratic elements, following a “discriminant of the discriminant” approach.</p></div>","PeriodicalId":100768,"journal":{"name":"Journal of Computational Mathematics and Data Science","volume":"6 ","pages":"Article 100076"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50188319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
JoMIC: A joint MI-based filter feature selection method
Pub Date : 2023-01-01 | DOI: 10.1016/j.jcmds.2023.100075
Khumukcham Robindro , Urikhimbam Boby Clinton , Nazrul Hoque , Dhruba K. Bhattacharyya
Feature selection (FS) is a common preprocessing step in machine learning that selects an informative subset of features, enabling a model to perform better during prediction or classification. It helps in the design of intelligent and expert systems used in computer vision, image processing, gene expression data analysis, intrusion detection and natural language processing. In this paper, we introduce an effective filter method called Joint Mutual Information with Class relevance (JoMIC) using multivariate Joint Mutual Information (JMI) and Mutual Information (MI). Our method considers both the JMI and the MI of a non-selected feature with the selected ones, with respect to a given class, to select a feature that is highly relevant to the class but non-redundant to the other selected features. We compare our method with seven other filter-based methods using the machine learning classifiers Logistic Regression, Support Vector Machine, K-nearest Neighbor (KNN), Decision Tree, Random Forest, Naïve Bayes, and Stochastic Gradient Descent on various datasets. Experimental results reveal that our method yields better performance in terms of accuracy, Matthews Correlation Coefficient (MCC) and F1-score over 16 benchmark datasets, compared to the other competing methods. The superiority of our proposed method lies in an effective objective function that combines both JMI and MI to choose relevant and non-redundant features.
{"title":"JoMIC: A joint MI-based filter feature selection method","authors":"Khumukcham Robindro , Urikhimbam Boby Clinton , Nazrul Hoque , Dhruba K. Bhattacharyya","doi":"10.1016/j.jcmds.2023.100075","DOIUrl":"https://doi.org/10.1016/j.jcmds.2023.100075","url":null,"abstract":"<div><p>Feature selection (FS) is a common preprocessing step of machine learning that selects informative subset of features which fuels a model to perform better during prediction or classification. It helps in the design of an intelligent and expert system used in computer vision, image processing, gene expression data analysis, intrusion detection and natural language processing. In this paper, we introduce an effective filter method called Joint Mutual Information with Class relevance (JoMIC) using multivariate Joint Mutual Information (JMI) and Mutual Information (MI). Our method considers both JMI and MI of a non selected feature with selected ones w.r.t a given class to select a feature that is highly relevant to the class but non redundant to other selected features. We compare our method with seven other filter-based methods using the machine learning classifiers viz., Logistic Regression, Support Vector Machine, K-nearest Neighbor (KNN), Decision Tree, Random Forest, Naïve Bayes, and Stochastic Gradient Descent on various datasets. Experimental results reveal that our method yields better performance in terms of accuracy, Matthew’s Correlation Coefficient (MCC) and F1-score over 16 benchmark datasets, as compared to other competent methods. The superiority of our proposed method is that it uses an effective objective function that combines both JMI and MI to choose the relevant and non redundant features.</p></div>","PeriodicalId":100768,"journal":{"name":"Journal of Computational Mathematics and Data Science","volume":"6 ","pages":"Article 100075"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50188317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scrutinization of unsteady MHD fluid flow and entropy generation: Hybrid nanofluid model
Hiranmoy Maiti , Amir Yaseen Khan , Sabyasachi Mondal , Samir Kumar Nandy
Pub Date : 2023-01-01 | DOI: 10.1016/j.jcmds.2023.100074
The goal of the present paper is to scrutinize the heat transport of an unsteady stagnation-point flow of a hybrid nanofluid over a stretching/shrinking disk in the presence of a variable magnetic field and thermal radiation. The entropy generation analysis for the system is also made. A suitable similarity transformation is utilized to reduce the set of governing equations, which are solved numerically by using a fourth order Runge–Kutta procedure with a shooting technique. The impact of different physical parameters under the realistic passive control of nanoparticles on the flow, heat transfer and concentration profiles is analyzed and demonstrated graphically. The analysis of the results obtained shows that multiple solutions exist for a certain parametric domain. With the increase in the magnetic parameter (M), the fluid velocity is found to increase for the first solution, whereas the opposite behavior is observed for the second solution. Also, as M increases, the temperature at a point decreases for the first solution branch. Further, the influences of the Reynolds number, Brinkmann number and volumetric concentrations on entropy generation are sketched and discussed. The novel result that emerges from the analysis is that, of the two solution branches, the first is linearly stable and physically meaningful whereas the second is linearly unstable and cannot be realized physically.
{"title":"Scrutinization of unsteady MHD fluid flow and entropy generation: Hybrid nanofluid model","authors":"Hiranmoy Maiti , Amir Yaseen Khan , Sabyasachi Mondal , Samir Kumar Nandy","doi":"10.1016/j.jcmds.2023.100074","DOIUrl":"https://doi.org/10.1016/j.jcmds.2023.100074","url":null,"abstract":"<div><p>The goal of the present paper is to scrutinize the heat transport of an unsteady stagnation-point flow of a hybrid nanofluid over a stretching/shrinking disk in the presence of a variable magnetic field and thermal radiation. The entropy generation analysis for the system is also made. The suitable similarity transformation is utilized for the reduction of a set of governing equations which are solved numerically by using fourth order Runge–Kutta procedure with shooting technique. The impact of different physical parameters under the realistic passive control of nanoparticles on the flow, heat transfer and concentration profiles are analyzed and demonstrated graphically. The analysis of the results obtained shows that multiple solutions exist for a certain parametric domain. With the increase in magnetic parameter (<span><math><mi>M</mi></math></span>), fluid velocity is found to increase for the first solution whereas opposite behavior is observed for the second solution. Also as <span><math><mi>M</mi></math></span> increases, the temperature at a point decreases for the first solution branch. Further, the influences of Reynolds number, Brinkmann number and volumetric concentrations on entropy generation are sketched and discussed. The novel result that emerges from the analysis is that among two solution branches, the first solution branch is linearly stable and physically meaningful whereas the second solution branch is linearly unstable and cannot be realized physically.</p></div>","PeriodicalId":100768,"journal":{"name":"Journal of Computational Mathematics and Data Science","volume":"6 ","pages":"Article 100074"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50188318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Classification of hyperspectral images with copulas
Pub Date : 2023-01-01 | DOI: 10.1016/j.jcmds.2022.100070
C. Tamborrino , F. Mazzia
In the last decade, supervised learning methods for the classification of remotely sensed images (RSI) have grown significantly, especially for hyperspectral (HS) images. Recently, deep learning-based approaches have produced encouraging results for the land cover classification of HS images. In particular, Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have shown good performance. However, these methods suffer from the problem of hyperparameter optimization or tuning, which requires a high computational cost; moreover, they are sensitive to the number of observations in the learning phase. In this work we propose a novel supervised learning algorithm based on the use of copula functions for the classification of hyperspectral images, called CopSCHI (Copula Supervised Classification of Hyperspectral Images). In particular, we start with a dimensionality reduction technique based on Singular Value Decomposition (SVD) in order to extract a small number of relevant features that best preserve the characteristics of the original image. Afterwards, we learn the classifier through a dynamic choice of copulas that allows us to identify the distribution of the different classes within the dataset. The use of copulas proves to be a good choice due to their ability to recognize the probability distribution of classes, and hence an accurate final classification with low computational cost can be conducted. The proposed approach was tested on two benchmark datasets widely used in the literature. The experimental results confirm that CopSCHI outperforms the state-of-the-art methods considered in this paper as competitors.
{"title":"Classification of hyperspectral images with copulas","authors":"C. Tamborrino , F. Mazzia","doi":"10.1016/j.jcmds.2022.100070","DOIUrl":"https://doi.org/10.1016/j.jcmds.2022.100070","url":null,"abstract":"<div><p>In the last decade, supervised learning methods for the classification of remotely sensed images (RSI) have grown significantly, especially for hyper-spectral (HS) images. Recently, deep learning-based approaches have produced encouraging results for the land cover classification of HS images. In particular, the Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have shown good performance. However, these methods suffer for the problem of the hyperparameter optimization or tuning that requires a high computational cost; moreover, they are sensitive to the number of observations in the learning phase. In this work we propose a novel supervised learning algorithm based on the use of copula functions for the classification of hyperspectral images called CopSCHI (Copula Supervised Classification of Hyperspectral Images). In particular, we start with a dimensionality reduction technique based on Singular Value Decomposition (SVD) in order to extract a small number of relevant features that best preserve the characteristics of the original image. Afterward, we learn the classifier through a dynamic choice of copulas that allows us to identify the distribution of the different classes within the dataset. The use of copulas proves to be a good choice due to their ability to recognize the probability distribution of classes and hence an accurate final classification with low computational cost can be conducted. The proposed approach was tested on two benchmark datasets widely used in literature. The experimental results confirm that CopSCHI outperforms the state-of-the-art methods considered in this paper as competitors.</p></div>","PeriodicalId":100768,"journal":{"name":"Journal of Computational Mathematics and Data Science","volume":"6 ","pages":"Article 100070"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50188312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}