A Survey of Semantic Similarity Methods for Ontology Based Information Retrieval
K. Saruladha, G. Aghila, S. Raj. DOI: 10.1109/ICMLC.2010.63

This paper discusses the various approaches used for identifying semantically similar concepts in an ontology. The purpose of the survey is to explore how these similarity computation methods can assist ontology-based query expansion. Query expansion based on a similarity function is expected to improve the retrieval effectiveness of ontology-based information retrieval models. The similarity computation methods fall into three categories: edge counting, information content, and node-based counting. The limitations of each of these approaches are discussed.
The Design and Implementation of a Practical Meta-Heuristic for the Detection and Identification of Denial-of-Service Attack Using Hybrid Approach
Hsia-Hsiang Chen, Wuu Yang. DOI: 10.1109/ICMLC.2010.46

Network attacks occur continuously, day after day, and researchers are expected to find solutions by identifying the address of the attack source. We propose the IP traceback ant colony system (ITACS) algorithm to solve the IP traceback problem for denial-of-service (DoS) attacks. ITACS is a novel attempt to apply a meta-heuristic technique to this problem, so that both attack detection and attack identification can be implemented. The proposed algorithm improves on previous work to solve the problem successfully. The topology data set used in the experiments was obtained from a well-known research organization, and the algorithm's parameters are derived from the packet contents in the topology. We also discuss conditions of increased traffic: in the experiments, the traffic increases average above 70%. The results show that the ITACS algorithm is efficient and accurate, and that it is robust for this problem. Future work may extend the study to other organism behaviors derived from meta-heuristic algorithms.
Genetically Improved PSO Algorithm for Efficient Data Clustering
Rehab F. Abdel-Kader. DOI: 10.1109/ICMLC.2010.19

Clustering is an important research topic in data mining that appears in a wide range of unsupervised classification applications. Partitional clustering algorithms such as k-means are the most popular for clustering large datasets. The major problem with the k-means algorithm is that it is sensitive to the selection of the initial partitions and may converge to local optima. In this paper, we present a hybrid two-phase GAI-PSO+k-means data clustering algorithm that performs fast data clustering and can avoid premature convergence to local optima. The first phase uses the new genetically improved particle swarm optimization algorithm (GAI-PSO), a population-based heuristic search technique modeled on a hybrid of the cultural and social rules derived from the analysis of swarm intelligence (PSO) and the concepts of natural selection and evolution (GA). GAI-PSO combines the standard velocity and position update rules of PSO with the selection, mutation, and crossover operators of GAs, and searches the solution space for the optimal initial cluster centroids for the next phase. The second phase is a local refinement stage using the k-means algorithm, which converges efficiently to the optimal solution. The proposed algorithm combines the globalized search of evolutionary algorithms with the fast convergence of k-means while avoiding the drawbacks of both. Its performance is evaluated on several benchmark datasets. The experimental results show that the proposed algorithm is highly robust and outperforms previous approaches such as SA, ACO, PSO, and k-means for the partitional clustering problem.
Adapting Moments for Handwritten Kannada Kagunita Recognition
L. Ragha, M. Sasikumar. DOI: 10.1109/ICMLC.2010.51

Handwritten character recognition (HCR) for Indian languages is an important problem on which relatively little work has been done. In this paper, we investigate the use of moment features for Kannada Kagunita. Kannada characters are curved in nature, with a kind of symmetric structure observed in their shape, and this information is best captured by extracting moment features from directional images. To recognize a Kagunita, both the vowel and the consonant present in the image must be identified. We therefore compute four directional images using Gabor wavelets from the dynamically preprocessed original image. We analyze the Kagunita set, identify the regions carrying vowel and consonant information, cut these portions from the preprocessed original image to form a set of cut images, and extract moment features from them. These features are trained and tested for both vowel and Kagunita recognition on a multi-layer perceptron with back-propagation. With moment features from the directional and cut images, the average recognition rate on separate test data is 85% for vowels and 59% for consonants.
Application of Lagrangian Twin Support Vector Machines for Classification
S. Balasundaram, N. Kapil. DOI: 10.1109/ICMLC.2010.40

In this paper a new iterative approach is proposed for solving the Lagrangian formulation of twin support vector machine classifiers. The main advantage of our method is that, rather than solving a quadratic programming problem as in the standard support vector machine, only the inverse of a matrix whose size equals the number of input examples needs to be determined, at the very beginning of the algorithm. The convergence of the algorithm is stated. Experiments have been performed on a number of interesting datasets, and the predicted results are in good agreement with the observed values, which clearly demonstrates the applicability of the proposed method.
A Novel Data Generation Approach for Digital Forensic Application in Data Mining
Veena H. Bhat, Prasanth G. Rao, Abhilash R.V., P. D. Shenoy, Venugopal K.R., L. Patnaik. DOI: 10.1109/ICMLC.2010.24

With the rapid advancement of information and communication technology, crimes are also becoming technically intensive. When crimes are committed using digital devices, forensic examiners have to adopt practical frameworks and methods for recovering data for analysis as evidence. Data generation, data warehousing, and data mining are the three essential features involved in this process. This paper proposes a unique way of generating, storing, and analyzing data retrieved from digital devices that serve as evidence in forensic analysis. A statistical approach is used to validate the reliability of the pre-processed data. This work proposes a practical framework for digital forensics on flash drives.
Traffic Modeling with Multi Agent Bayesian and Causal Networks and Performance Prediction for Changed Setting System
R. Maarefdoust, S. Rahati. DOI: 10.1109/ICMLC.2010.34

Traffic modeling is one of the effective methods of detecting and evaluating urban traffic. Uncertain factors, such as the varied behavior of a human society, add to the complexity of the problem and create difficulties for modeling. Level crossroads are among the important sections of an urban traffic control system and are usually controlled by traffic lights. In this study, an attempt has been made to model the traffic of an important crossroads in the city of Mashhad using intelligent elements in a multi-agent environment and a large amount of real data. For this purpose, the total traffic behavior at the intersection was first modeled with Bayesian network structures; the effective factors were then modeled using probabilistic causal networks. Evaluation shows that the model can measure system efficiency under changes to the crossroads settings, and that it is cheaper and less time-consuming. On this basis, the model can be used to evaluate, and even predict, the efficacy of the traffic control system at the crossroads. The data used in this study were collected by the SCATS software at the Mashhad Traffic Control Center, and the Weka software was used for training and evaluating the Bayesian and causal probabilistic networks.
SGA Implementation Using Integer Arrays for Storage of Binary Strings
P. Kanchan, Rio G. L. D'Souza. DOI: 10.1109/ICMLC.2010.62

The Simple Genetic Algorithm (SGA) evaluates a group of binary strings on the basis of their fitness, performs crossover and mutation on them, and tries to generate a group with maximum fitness. The usual way to implement the SGA is to store the binary strings in character arrays, but this method has some disadvantages. An SGA implementation can be termed a success if the average fitness of the new generation exceeds the initial average fitness. In this paper, we implement the SGA using integer arrays for storage of binary strings, and we then compare the initial average fitness with the final average fitness to verify that the SGA works correctly. The application accepts varying population sizes so that the correctness of the SGA algorithm can be checked.
Content-Based Classification and Retrieval of Wild Animal Sounds Using Feature Selection Algorithm
S. Gunasekaran, K. Revathy. DOI: 10.1109/ICMLC.2010.11

Automatic animal sound classification and retrieval is very helpful for bioacoustic and audio retrieval applications. In this paper we propose a system that defines and extracts a set of acoustic features from archived wild animal sound recordings for use in subsequent feature selection, classification, and retrieval tasks. The database consists of the sounds of six wild animals. Fractal dimension analysis was selected for segmentation because of its ability to select the right portion of the signal for feature extraction. The feature vectors consist of spectral, temporal, and perceptual features of the animal vocalizations. Minimal-redundancy-maximal-relevance (mRMR) feature selection was exploited to increase classification accuracy with a compact set of features. The selected features were used as inputs to two classifiers, the k-nearest neighbor (kNN) and the multi-layer perceptron (MLP), as well as their fusion. The proposed system provides a quite robust approach to classification and retrieval, especially for wild animal sounds.
SVM-Based Cost-sensitive Classification Algorithm with Error Cost and Class-dependent Reject Cost
Enhui Zheng, Chao Zou, Jian Sun, Le Chen, Ping Li. DOI: 10.1109/ICMLC.2010.27

In real data mining applications such as medical diagnosis, fraud detection, and fault classification, two problems are often encountered: the error cost is high, and the reject cost is class-dependent. To overcome these problems, this paper first gives a general mathematical description of the Binary Classification Problem with Error Cost and Class-dependent Reject Cost (BCP-EC2RC). It then presents a new algorithm, Cost-sensitive Support Vector Machines with Error Cost and Class-dependent Reject Cost (CSVM-EC2RC), as one implementation of BCP-EC2RC. The CSVM-EC2RC algorithm involves two stages: estimating classification reliability from a trained SVM classifier, and determining the optimal reject rates for the positive and negative classes by minimizing the average cost under the given error cost and class-dependent reject costs. Experimental studies on a benchmark data set illustrate that the proposed algorithm is effective.