Sensitivity analysis of echo state networks for forecasting pseudo-periodic time series
Pub Date: 2015-11-01. DOI: 10.1109/SOCPAR.2015.7492768
Sebastián Basterrech, G. Rubino, V. Snás̃el
This paper presents an analysis of the impact of the parameters of an Echo State Network (ESN) on its performance. In particular, we are interested in the behaviour of the parameters when the model is used for forecasting pseudo-periodic time series. According to previous literature, the spectral radius of the hidden-hidden weight matrix of the ESN is a relevant parameter for the model's performance: it affects both the memory capacity and the accuracy of the model. Small values of the spectral radius are recommended for modelling time series that require a short fading memory, whereas a matrix with spectral radius close to unity is recommended for processing long-memory time series. In this article, we show that the periodicity of the data is also an important factor to consider in the design of the ESN. Our results show that the best forecasting performance (according to two performance metrics) occurs when the hidden-hidden weight matrix has a spectral radius equal to 0.5. For our analysis we use a public synthetic dataset with high periodicity.
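As an illustration of the reservoir scaling discussed above, the following NumPy sketch builds a small ESN whose hidden-hidden matrix is rescaled to a spectral radius of 0.5 and trains a ridge readout on a toy pseudo-periodic signal; the reservoir size, input scaling, and regularization are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n=100, spectral_radius=0.5):
    """Random hidden-hidden matrix rescaled to a target spectral radius."""
    W = rng.uniform(-1.0, 1.0, size=(n, n))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    return W

def run_esn(u, W, W_in):
    """Collect reservoir states for a 1-D input sequence u."""
    x = np.zeros(W.shape[0])
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)   # standard ESN state update
        states.append(x.copy())
    return np.array(states)

# Toy pseudo-periodic signal; train a ridge readout to predict the next value.
t = np.arange(2000)
u = np.sin(0.2 * t) + 0.3 * np.sin(0.5 * t)
W = make_reservoir(spectral_radius=0.5)            # the value highlighted in the paper
W_in = rng.uniform(-0.5, 0.5, size=W.shape[0])
X, y = run_esn(u[:-1], W, W_in), u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(X.shape[1]), X.T @ y)
print("train NRMSE:", np.sqrt(np.mean((X @ W_out - y) ** 2)) / np.std(y))
```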
{"title":"Sensitivity analysis of echo state networks for forecasting pseudo-periodic time series","authors":"Sebastián Basterrech, G. Rubino, V. Snás̃el","doi":"10.1109/SOCPAR.2015.7492768","DOIUrl":"https://doi.org/10.1109/SOCPAR.2015.7492768","url":null,"abstract":"This paper presents an analysis of the impact of the parameters of an Echo State Network (ESN) on its performance. In particular, we are interested on the parameter behaviour when the model is used for forecasting pseudo-periodic time series. According previous literature, the spectral radius of the hidden-hidden weight matrix of the ESN is a relevant parameter on the model performance. It impacts in the memory capacity and in the accuracy the model. Small values of the spectral radius are recommended for modelling time-series that require short fading memory. On the other hand, a matrix with spectral radius close to the unity is recommended for processing long memory time series. In this article, we figure out that the periodicity of the data is also an important factor to consider in the design of the ESN. Our results show that the better forecasting (according to two metrics of performance) occurs when the hidden-hidden weight matrix has spectral value equal to 0.5. For our analysis we use a public synthetic dataset that has a high periodicity.","PeriodicalId":409493,"journal":{"name":"2015 7th International Conference of Soft Computing and Pattern Recognition (SoCPaR)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115654313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extraction of latent concepts from an integrated human gene database: Non-negative matrix factorization for identification of hidden data structure
Pub Date: 2015-11-01. DOI: 10.1109/SOCPAR.2015.7492771
K. Murakami
Information in genetic databases often describes complex concepts, such as diseases and gene functions, that have implicit relationships. However, such information is presented as independent concepts (for example, "genes" and "functions"), making it difficult for users, even specialists, to understand their meaning in relation to one another. This creates a need to extract hidden relationships among biological concepts and to add this information to databases. Therefore, we factorized a gene data matrix and extracted hidden relationships among both genes and their functional terms. We successfully identified composite concepts explained by multiple genes and multiple terms. This re-organization provides new insights for researchers and is helpful for the interpretation of information.
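A minimal sketch of the factorization step, using scikit-learn's NMF on a made-up gene-by-term matrix; the matrix, number of components, and the way top genes and terms are read off are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy non-negative gene x functional-term matrix (rows: genes, columns: terms).
rng = np.random.default_rng(1)
X = rng.poisson(lam=2.0, size=(50, 20)).astype(float)

model = NMF(n_components=5, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(X)      # gene loadings on each latent concept
H = model.components_           # term loadings on each latent concept

# A latent "composite concept" is read off as the top genes and top terms
# of one component.
concept = 0
top_genes = np.argsort(W[:, concept])[::-1][:5]
top_terms = np.argsort(H[concept])[::-1][:5]
print("concept", concept, "top genes:", top_genes, "top terms:", top_terms)
```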
{"title":"Extraction of latent concepts from an integrated human gene database: Non-negative matrix factorization for identification of hidden data structure","authors":"K. Murakami","doi":"10.1109/SOCPAR.2015.7492771","DOIUrl":"https://doi.org/10.1109/SOCPAR.2015.7492771","url":null,"abstract":"Information in genetic databases often describes complex concepts, such as diseases and gene functions having implicit relationships. However, such information is presented as independent concepts (for example, “genes” and “function”), making it difficult for the user, even specialists, to understand their meaning in relation to one another. This facilitates the need for extraction of hidden relationships among biological concepts, and for the addition of this information to databases. Therefore, we factorized a gene data matrix and extracted hidden relationships among both genes and their functional terms. We successfully identified composite concepts explained by plural genes and plural terms. This re-organization provides new insights for researchers and is helpful for interpretation of information.","PeriodicalId":409493,"journal":{"name":"2015 7th International Conference of Soft Computing and Pattern Recognition (SoCPaR)","volume":"192 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128356392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Face sketch recognition using local invariant features
Pub Date: 2015-11-01. DOI: 10.1109/SOCPAR.2015.7492793
A. Tharwat, Hani M. K. Mahdi, A. El-Hennawy, A. Hassanien
Face sketch recognition is a recent biometric technique used to identify criminals. In this paper, a model is proposed to identify face sketch images based on local invariant features. In this model, two local invariant feature extraction methods, namely the Scale Invariant Feature Transform (SIFT) and Local Binary Patterns (LBP), are used to extract local features from photos and sketches. Minimum distance and Support Vector Machine (SVM) classifiers are used to match the features of an unknown sketch with photos. Because the features are high-dimensional, Direct Linear Discriminant Analysis (Direct-LDA) is used for dimensionality reduction. The CUHK face sketch database is used in our experiments. The experimental results show that SIFT is robust and extracts more discriminative features than LBP. Moreover, different parameters of SIFT and LBP are discussed and tuned to extract robust and discriminative features.
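To make the descriptor-plus-classifier pipeline concrete, here is a simplified stand-in that computes basic 3x3 LBP histograms and matches them with an SVM; the SIFT branch, Direct-LDA step, and minimum-distance classifier from the paper are omitted, and the toy data merely stands in for CUHK photos and sketches.

```python
import numpy as np
from sklearn.svm import SVC

def lbp_histogram(img):
    """Basic 3x3 LBP: compare each pixel with its 8 neighbours, histogram the codes."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code += (neigh >= c).astype(np.int32) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()

# Toy data: random "photos" with identity labels; real use would load CUHK images.
rng = np.random.default_rng(2)
photos = rng.integers(0, 256, size=(40, 64, 64))
labels = np.repeat(np.arange(20), 2)
features = np.array([lbp_histogram(p) for p in photos])

clf = SVC(kernel="linear").fit(features, labels)
# A "sketch" is simulated here as a noisy copy of a photo.
sketch_feat = lbp_histogram(photos[0] + rng.integers(-10, 10, size=(64, 64)))
print("predicted identity:", clf.predict([sketch_feat])[0])
```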
{"title":"Face sketch recognition using local invariant features","authors":"A. Tharwat, Hani M. K. Mahdi, A. El-Hennawy, A. Hassanien","doi":"10.1109/SOCPAR.2015.7492793","DOIUrl":"https://doi.org/10.1109/SOCPAR.2015.7492793","url":null,"abstract":"Face sketch recognition is one of the recent biometrics, which is used to identify criminals. In this paper, a proposed model is used to identify face sketch images based on local invariant features. In this model, two local invariant feature extraction methods, namely, Scale Invariant Feature Transform (SIFT) and Local Binary Patterns (LBP) are used to extract local features from photos and sketches. Minimum distance and Support Vector Machine (SVM) classifiers are used to match the features of an unknown sketch with photos. Due to high dimensional features, Direct Linear Discriminant Analysis (Direct-LDA) is used. CHUK face sketch database images is used in our experiments. The experimental results show that SIFT method is robust and it extracts discriminative features than LBP. Moreover, different parameters of SIFT and LBP are discussed and tuned to extract robust and discriminative features.","PeriodicalId":409493,"journal":{"name":"2015 7th International Conference of Soft Computing and Pattern Recognition (SoCPaR)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121407973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FTIP: A tool for an image plagiarism detection
Pub Date: 2015-11-01. DOI: 10.1109/SOCPAR.2015.7492780
P. Hurtík, P. Hodáková
The goal of this paper is to introduce the task of image plagiarism detection. More specifically, we propose a method for searching for a plagiarized image in a database. The main requirements for searching the database are computational speed and success rate. The proposed method is based on the technique of the F-transform, particularly the Fs-transform, s ≥ 0. This technique significantly reduces the dimension of the domain and therefore speeds up the whole process. We present several experiments and measurements which demonstrate the speed and accuracy of our method. We also provide examples to demonstrate that this method can be used in many applications.
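The dimension reduction can be sketched with the standard zero-degree F-transform (weighted means of the image over a triangular fuzzy partition); the image size and partition granularity below are arbitrary choices, not the tool's actual settings.

```python
import numpy as np

def triangular_partition(length, n_nodes):
    """Uniform triangular fuzzy partition A_1..A_n of {0, ..., length-1}."""
    nodes = np.linspace(0, length - 1, n_nodes)
    h = nodes[1] - nodes[0]
    x = np.arange(length)
    return np.maximum(0.0, 1.0 - np.abs(x[None, :] - nodes[:, None]) / h)  # (n_nodes, length)

def f0_transform_2d(img, n_rows, n_cols):
    """Direct F0-transform: component F[k, l] is the weighted mean of img under A_k x B_l."""
    A = triangular_partition(img.shape[0], n_rows)
    B = triangular_partition(img.shape[1], n_cols)
    num = A @ img @ B.T
    den = A.sum(axis=1)[:, None] * B.sum(axis=1)[None, :]
    return num / den

img = np.random.default_rng(3).random((256, 256))
F = f0_transform_2d(img, 16, 16)   # 256x256 image reduced to a 16x16 signature
# Candidate plagiarized images can then be compared by a distance between signatures.
print(F.shape)
```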
{"title":"FTIP: A tool for an image plagiarism detection","authors":"P. Hurtík, P. Hodáková","doi":"10.1109/SOCPAR.2015.7492780","DOIUrl":"https://doi.org/10.1109/SOCPAR.2015.7492780","url":null,"abstract":"The goal of this paper is to introduce a task of image plagiarism detection. More specifically, we propose a method of searching for a plagiarized image in a database. The main requirements for searching in the database are computational speed and success rate. The proposed method is based on the technique of F-transform, particularly Fs-transform, s ≥ 0. This technique significantly reduces the domain dimension and therefore, is speeds-up the whole process. we present several experiments and measurements which prove the speed and accuracy of our method. We also propose examples to demonstrate an ability of using this method in many applications.","PeriodicalId":409493,"journal":{"name":"2015 7th International Conference of Soft Computing and Pattern Recognition (SoCPaR)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124292874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hidden topics modeling approach for review quality prediction and classification
Pub Date: 2015-11-01. DOI: 10.1109/SOCPAR.2015.7492821
Hoan Tran Quoc, H. Ochiai, H. Esaki
The automatic assessment of the quality of online reviews is becoming important as the number of reviews increases rapidly. To help determine review quality, some online services provide a system where users can evaluate or give feedback on the helpfulness of a review as crowdsourced knowledge. This approach suffers from sparse voting data and a rich-get-richer problem in which favoured reviews receive votes more frequently than others. In this work, we use the Latent Dirichlet Allocation (LDA) method to exploit the hidden topic distributions of all reviews and propose a supervised prediction model based on a probabilistic interpretation of review quality. We also propose a deep neural network to classify reviews by quality and validate our proposals on several real review datasets. We demonstrate that using hidden topic distribution information can help improve the accuracy of review quality prediction and classification.
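A minimal version of the topic-feature pipeline, assuming scikit-learn in place of the authors' exact tooling and a small multilayer perceptron as a stand-in for their deep network; the toy reviews and labels are invented for the example.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Toy reviews with helpful (1) / unhelpful (0) labels; real data would come from a review site.
reviews = [
    "battery lasts long and the screen is sharp, great value",
    "bad",
    "the camera struggles in low light but software updates helped",
    "do not buy",
]
labels = [1, 0, 1, 0]

# Reviews -> bag of words -> per-review topic distribution -> quality classifier.
model = make_pipeline(
    CountVectorizer(),
    LatentDirichletAllocation(n_components=2, random_state=0),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(reviews, labels)
print(model.predict(["sharp screen and long battery life"]))
```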
{"title":"Hidden topics modeling approach for review quality prediction and classification","authors":"Hoan Tran Quoc, H. Ochiai, H. Esaki","doi":"10.1109/SOCPAR.2015.7492821","DOIUrl":"https://doi.org/10.1109/SOCPAR.2015.7492821","url":null,"abstract":"The automatic assessment of online review's quality is becoming important with the number of reviews increasing rapidly. In order to help determining review's quality, some online services provide a system where users can evaluate or feedback the helpfulness of review as crowdsourcing knowledge. This approach has shortcomings of sparse voted data and richer-get-richer problem in which favor reviews are voted frequently more than others. In this work, we use Latent Dirichlet Allocation (LDA) method to exploit hidden topics distribution information of all reviews and propose supervisor prediction model based on probabilistic meaning of the review's quality. We also propose a deep neural network to classify the review in quality and validate our proposals within some real reviews datasets. We demonstrate that using hidden topics distribution information could be helpful to improve the accuracy of review quality prediction and classification.","PeriodicalId":409493,"journal":{"name":"2015 7th International Conference of Soft Computing and Pattern Recognition (SoCPaR)","volume":"51 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120871137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An adaptive algorithm for embedded real-time point cloud ground segmentation
Pub Date: 2015-11-01. DOI: 10.1109/SOCPAR.2015.7492787
Gilberto Marcon dos Santos, Victor Terra Ferrão, C. Vinhal, G. Cruz
This paper presents a fast algorithm for ground segmentation that quickly and accurately differentiates ground points from obstacles in unstructured point clouds. Unlike most recent approaches found in the literature, it does not rely on any sensor-specific feature or data ordering. It performs an orthogonal projection onto the horizontal plane followed by a top-down 4-ary tree segmentation. The segmentation self-adapts to the point cloud, focusing processing effort on detailed areas. This adaptive subdivision process allows ground points to be extracted successfully even when the floor is not perfectly flat. Finally, tests demonstrate real-time performance on low-cost embedded devices.
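A compact sketch of the idea: project points onto the horizontal plane and recursively split a cell into four children while the height spread inside it is large, then keep the low points of flat cells as ground. The thresholds and the flat-cell test below are illustrative choices, not the paper's exact criteria.

```python
import numpy as np

def segment_ground(points, height_spread=0.15, min_size=0.5):
    """Label points as ground by 4-ary subdivision of their x-y bounding box.

    points: (N, 3) array of x, y, z. Returns a boolean ground mask.
    """
    ground = np.zeros(len(points), dtype=bool)

    def recurse(idx, x0, y0, x1, y1):
        if len(idx) == 0:
            return
        z = points[idx, 2]
        # Flat enough, or cell too small to split further: keep points near the
        # local minimum height as ground.
        if z.max() - z.min() < height_spread or (x1 - x0) < min_size:
            ground[idx[z < z.min() + height_spread]] = True
            return
        xm, ym = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        left = points[idx, 0] < xm
        low = points[idx, 1] < ym
        recurse(idx[left & low], x0, y0, xm, ym)
        recurse(idx[left & ~low], x0, ym, xm, y1)
        recurse(idx[~left & low], xm, y0, x1, ym)
        recurse(idx[~left & ~low], xm, ym, x1, y1)

    x, y = points[:, 0], points[:, 1]
    recurse(np.arange(len(points)), x.min(), y.min(), x.max(), y.max())
    return ground

# Toy scene: gently sloped ground plus a box-shaped obstacle.
rng = np.random.default_rng(4)
xy = rng.uniform(0, 20, size=(5000, 2))
z = 0.02 * xy[:, 0] + rng.normal(0, 0.02, 5000)
obstacle = (xy[:, 0] > 8) & (xy[:, 0] < 9) & (xy[:, 1] > 8) & (xy[:, 1] < 9)
z[obstacle] += 1.5
mask = segment_ground(np.column_stack([xy, z]))
print("ground points found:", mask.sum(), "of", len(mask))
```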
{"title":"An adaptive algorithm for embedded real-time point cloud ground segmentation","authors":"Gilberto Marcon dos Santos, Victor Terra Ferrão, C. Vinhal, G. Cruz","doi":"10.1109/SOCPAR.2015.7492787","DOIUrl":"https://doi.org/10.1109/SOCPAR.2015.7492787","url":null,"abstract":"This paper presents a fast algorithm for ground segmentation that quickly and accurately differentiates ground points from obstacles after processing unstructured point clouds. Unlike most recent approaches found in the literature, it does not rely on any sensor-specific feature or data ordering. It performs an orthogonal projection into the horizontal plane followed by a top-down 4-ary tree segmentation. The segmentation self-adapts to the point cloud, focusing processing effort on detailed areas. This adaptive subdivision process allows successfully extracting ground points even when the floor is not perfectly flat. Finally, tests demonstrate real-time performance for execution in low cost embedded devices.","PeriodicalId":409493,"journal":{"name":"2015 7th International Conference of Soft Computing and Pattern Recognition (SoCPaR)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115394485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal partial filters of EEG signals for shared control of vehicle
Pub Date: 2015-11-01. DOI: 10.1109/SOCPAR.2015.7492823
W. Huh, Sung-Bae Cho
The development of equipment that measures EEG signals has led to research applying them to many domains. There is active research on using EEG signals for a shared vehicle control system between human and car. An appropriate filtering method is also important because EEG signals normally contain a lot of noise. To reduce this noise, a full matrix filter, a sparse matrix reference filter, and a common average reference (CAR) filter are presented and analyzed in this paper. To develop the shared vehicle control system, we use a controller, a brain-computer interface (BCI), EEG signals, and a car simulator program. By performing a t-test, it was possible to find the optimal filter among the three filters mentioned above. The t-test analysis revealed that the full matrix filter is not appropriate for the shared vehicle control system, and that the CAR filter has the best performance among these filters.
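The common average reference filter mentioned above is a one-line operation on a channels-by-samples matrix; the sketch below applies it to synthetic data with shared common-mode noise (the channel count and noise model are made up for the example).

```python
import numpy as np

def common_average_reference(eeg):
    """CAR filter: subtract the instantaneous mean of all channels from each channel.

    eeg: (n_channels, n_samples) array.
    """
    return eeg - eeg.mean(axis=0, keepdims=True)

# Random data standing in for a short multi-channel EEG recording.
rng = np.random.default_rng(5)
eeg = rng.normal(size=(16, 1000)) + rng.normal(size=(1, 1000))  # shared common-mode noise
filtered = common_average_reference(eeg)
print("common-mode power before:", np.var(eeg.mean(axis=0)))
print("common-mode power after: ", np.var(filtered.mean(axis=0)))
```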
{"title":"Optimal partial filters of EEG signals for shared control of vehicle","authors":"W. Huh, Sung-Bae Cho","doi":"10.1109/SOCPAR.2015.7492823","DOIUrl":"https://doi.org/10.1109/SOCPAR.2015.7492823","url":null,"abstract":"The development of equipment that measures EEG signals leads to the research that applies them to many domains. There are active research going on EEG signals for shared vehicle control system between human and car. An appropriate filtering method is also important because EEG signals normally have lots of noises. To reduce such noises, full matrix filter, sparse matrix reference filter, and common average reference (CAR) filter are presented and analyzed in this paper. In order to develop shared vehicle control system, we use controller, brain-computer interface (BCI), EEG signals, and car simulator program. By executing t-test, it was possible to find the optimal filter out of three filters mentioned above. With the analysis of t-test, it has revealed that full matrix filter is not appropriate for shared vehicle control system. In addition, it proves CAR filter has the best performance among these filters.","PeriodicalId":409493,"journal":{"name":"2015 7th International Conference of Soft Computing and Pattern Recognition (SoCPaR)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115057772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A wrapper approach for feature selection based on swarm optimization algorithm inspired from the behavior of social-spiders
Pub Date: 2015-11-01. DOI: 10.1109/SOCPAR.2015.7492776
Hossam M. Zawbaa, E. Emary, A. Hassanien, B. Pârv
In this paper, a system for feature selection based on social spider optimization (SSO) is proposed. SSO is used in the proposed system as a search method to find the optimal feature set that maximizes classification performance; it mimics the cooperative behavior of social spiders in nature. The proposed SSO algorithm considers two kinds of search agents (social members), male and female spiders, which simulate a group of spiders interacting with each other according to the biological laws of the cooperative colony. Depending on its gender, each spider (individual) simulates a set of evolutionary operators corresponding to the different cooperative behaviors typically found in the colony. The proposed system is evaluated using different evaluation criteria on 18 datasets and compared with two common search methods, namely particle swarm optimization (PSO) and the genetic algorithm (GA). The SSO algorithm achieves improved classification performance according to different evaluation indicators.
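Independently of the particular swarm, the wrapper part of the method reduces to scoring a binary feature mask by cross-validated classification accuracy. The sketch below shows such a fitness function with a plain random search standing in for the SSO update rules, which are not reproduced here; the dataset and classifier are arbitrary choices.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(6)

def fitness(mask):
    """Wrapper fitness: mean cross-validated accuracy on the selected features."""
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

# Stand-in for the swarm: evaluate random binary masks and keep the best one.
best_mask, best_fit = None, -1.0
for _ in range(50):
    mask = rng.random(X.shape[1]) < 0.5
    f = fitness(mask)
    if f > best_fit:
        best_mask, best_fit = mask, f
print("selected", best_mask.sum(), "of", X.shape[1], "features, accuracy", round(best_fit, 3))
```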
{"title":"A wrapper approach for feature selection based on swarm optimization algorithm inspired from the behavior of social-spiders","authors":"Hossam M. Zawbaa, E. Emary, A. Hassanien, B. Pârv","doi":"10.1109/SOCPAR.2015.7492776","DOIUrl":"https://doi.org/10.1109/SOCPAR.2015.7492776","url":null,"abstract":"In this paper, a proposed system for feature selection based on social spider optimization (SSO) is proposed. SSO is used in the proposed system as searching method to find optimal feature set maximizing classification performance and mimics the cooperative behavior mechanism of social spiders in nature. The proposed SSO algorithm considers two different search agents (social members) male and female spiders, that simulate a group of spiders with interaction to each other based on the biological laws of the cooperative colony. Depending on spider gender, each spider (individual) is simulating a set of different evolutionary operators of different cooperative behaviors that are typically found in the colony. The proposed system is evaluated using different evaluation criteria on 18 different datasets, which compared with two common search methods namely particle swarm optimization (PSO), and genetic algorithm (GA). SSO algorithm proves an advance in classification performance using different evaluation indicators.","PeriodicalId":409493,"journal":{"name":"2015 7th International Conference of Soft Computing and Pattern Recognition (SoCPaR)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115079400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated generation of fuzzy rules from large-scale network traffic analysis in digital forensics investigations
Pub Date: 2015-11-01. DOI: 10.1109/SOCPAR.2015.7492778
Andrii Shalaginov, K. Franke
This paper describes an ongoing study and first results on the application of Neuro-Fuzzy (NF) methods to support large-scale forensic investigations in the domain of Network Forensics. In particular, we focus on patterns of benign and malicious activity that can be found in network traffic dumps. We propose several improvements to the NF algorithm that result in proper handling of large-scale datasets, significantly reduce the number of rules, and yield a decreased complexity of the classification model. This includes better automated extraction of rule parameters as well as bootstrap aggregation for generalization. Experimental results show that this optimization yields a smaller number of rules while increasing accuracy in comparison with existing approaches; in particular, it achieved an accuracy of 98% using only 39 rules. With this research we contribute to forensic science by increasing awareness and providing more comprehensive fuzzy rules. During the last decade, many network forensics cases have produced data that qualifies as Big Data due to its complexity. The application of Soft Computing methods such as Neuro-Fuzzy may not only provide sufficient classification accuracy for normal and attack traffic, but also facilitate understanding of traffic properties and the development of a decision-support mechanism.
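As a toy illustration of how generated fuzzy rules classify traffic (not the authors' Neuro-Fuzzy algorithm), the sketch below fires two hand-written rules with Gaussian membership functions over invented traffic features and picks the label of the strongest rule.

```python
import numpy as np

def gauss(x, mean, sigma):
    """Gaussian membership degree of x in a fuzzy set (mean, sigma)."""
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

# Hypothetical rules over two traffic features (packets per second, mean packet size):
# each rule is (per-feature fuzzy sets, class label).
rules = [
    (((50.0, 30.0), (500.0, 200.0)), "benign"),
    (((900.0, 150.0), (80.0, 40.0)), "attack"),   # many small packets -> attack
]

def classify(x):
    """Fire each rule (product t-norm) and return the label of the strongest rule."""
    strengths = []
    for sets, label in rules:
        degree = np.prod([gauss(xi, m, s) for xi, (m, s) in zip(x, sets)])
        strengths.append((degree, label))
    return max(strengths)[1]

print(classify(np.array([60.0, 450.0])))    # expected: benign
print(classify(np.array([1000.0, 70.0])))   # expected: attack
```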
{"title":"Automated generation of fuzzy rules from large-scale network traffic analysis in digital forensics investigations","authors":"Andrii Shalaginov, K. Franke","doi":"10.1109/SOCPAR.2015.7492778","DOIUrl":"https://doi.org/10.1109/SOCPAR.2015.7492778","url":null,"abstract":"This paper describes ongoing study and first results on the application of Neuro-Fuzzy (NF) to support large-scale forensics investigation in the domain of Network Forensics. In particular we focus on patterns of benign and malicious activity that can be find in network traffic dumps. We propose several improvements to the NF algorithm that results in proper handling of large-scale datasets, significantly reduces number of rules and yields a decreased complexity of the classification model. This includes better automated extraction of rules parameters as well as bootstrap aggregation for generalization. Experimental results show that such optimization gives a smaller number of rules, while the accuracy increases in comparison to existing approaches. In particular, it showed an accuracy of 98% when using only 39 rules. In our research we contribute to forensics science by increasing awareness and bringing more comprehensive fuzzy rules. During the last decade many cases related to network forensics resulted in data that can be related to Big Data due to its complexity. Application of Soft Computing methods, such that Neuro-Fuzzy may bring not only sufficient classification accuracy of normal and attack traffic, yet also facilitate in understanding traffic properties and developing a decision-support mechanism.","PeriodicalId":409493,"journal":{"name":"2015 7th International Conference of Soft Computing and Pattern Recognition (SoCPaR)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127928078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Leaf shape identification of medicinal leaves using curvilinear shape descriptor
Pub Date: 2015-11-01. DOI: 10.1109/SOCPAR.2015.7492810
Y. Herdiyeni, Dicky Iqbal Lubis, S. Douady
This study proposes a new algorithm for leaf shape identification of medicinal leaves based on a curvilinear shape descriptor. Leaf shape is a highly discriminating feature for identification. The proposed approach recognizes and locates points of local maxima on the smoothed curvature and also reduces the number of contour points in order to optimize the efficiency of leaf shape identification. Experiments were conducted on six shapes of medicinal leaves, i.e., lanceolate, ovate, obovate, reniform, cordate, and deltoid. We extracted five shape descriptors from the leaf shape curvature: the salient points' positions, centroid distance, extreme curvature, angle of curvature, and slope of the salient points. The experimental results show that the proposed algorithm can extract the shape descriptors for leaf shape identification. Moreover, the experimental results indicate that the fusion of shape descriptors outperforms a single shape descriptor, with an accuracy of 72.22%.
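A small sketch of curvature-based salient-point extraction on a synthetic outline; the curvature formula and local-maxima test are standard, while the contour, the threshold, and the single centroid-distance descriptor shown here are simplifications of the paper's five descriptors.

```python
import numpy as np

def contour_curvature(x, y):
    """Curvature along a closed contour given as ordered x, y samples."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5

# Toy "leaf" outline: an ellipse with a few lobes so curvature has clear maxima.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
r = 1.0 + 0.25 * np.cos(5 * t)
x, y = 2.0 * r * np.cos(t), r * np.sin(t)

kappa = contour_curvature(x, y)
# Salient points: local maxima of curvature (compare each sample with its neighbours).
salient = (kappa > np.roll(kappa, 1)) & (kappa > np.roll(kappa, -1)) & (kappa > kappa.mean())
cx, cy = x.mean(), y.mean()
centroid_dist = np.hypot(x[salient] - cx, y[salient] - cy)   # one of the shape descriptors
print("salient points:", salient.sum(), "centroid distances:", np.round(centroid_dist, 2))
```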
{"title":"Leaf shape identification of medicinal leaves using curvilinear shape descriptor","authors":"Y. Herdiyeni, Dicky Iqbal Lubis, S. Douady","doi":"10.1109/SOCPAR.2015.7492810","DOIUrl":"https://doi.org/10.1109/SOCPAR.2015.7492810","url":null,"abstract":". This study proposes a new algorithm for leaf shape identification of medicinal leaves based on curvilinear shape descriptor. Leaf shape is a very discriminating feature for identification. The proposed approach is introduced to recognize and locate points of local maxima from smooth curvature and also to reduce contour points in order to optimize the efficiency of leaf shape identification. Experiments were conducted on six shapes of medicinal leaves, i.e lanceolate, ovate, obovate, reniform, cordate, and deltoid. We extracted five shape descriptors of leaf shape curvature: salient points' position, centroid distance, extreme curvature, angle of curvature, and slope of salient points. The experimental results show that the proposed algorithm can extract the shape descriptors for leaf shape identification. Moreover, the experimental results indicated that the fusion of shape descriptors outperform than using single shape descriptor with accuracy 72.22%.","PeriodicalId":409493,"journal":{"name":"2015 7th International Conference of Soft Computing and Pattern Recognition (SoCPaR)","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131544154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}