J. Raheja, Radhey Shyam, Jatin Gupta, Umesh Kumar, P. B. Prasad
This paper describes a robust technique for determining happy, sad, or neutral facial gestures by processing an image containing a human face. It aims to do away with the cumbersome process of training the computer on images, thereby significantly reducing processing time. In this technique the human face is identified using skin-color detection in color spaces such as HSV and YCbCr. Segmented facial features such as the lips are then located using a unique face-feature determination step. The lip contours formed by different human moods are analyzed to identify the facial gesture: edge detection of the lips, followed by morphological operations, gives the lip structure, and pattern analysis of the lips using a unique histogram algorithm, with subsequent comparison against different facial-gesture icons, yields the facial gesture of the person in the image. When tested on a large database of human images under varying lighting conditions, the technique gave acceptable accuracy rates and was found fast enough for use on real-time video streams.
{"title":"Facial Gesture Identification Using Lip Contours","authors":"J. Raheja, Radhey Shyam, Jatin Gupta, Umesh Kumar, P. B. Prasad","doi":"10.1109/ICMLC.2010.13","DOIUrl":"https://doi.org/10.1109/ICMLC.2010.13","url":null,"abstract":"This paper describes a robust technique to determine happy/sad/neutral facial gestures of humans by processing an image containing human face. It aims to do away with the cumbersome process of training the computer with images and thereby significantly reducing the processing time. In this technique human face is identified using skin color identification on various spaces like HSV and YCbCr. Segmented features of face like the lips are determined using unique face feature determination. Contour of lips formed for different human moods are analyzed to identify, facial gesture. Edge detection of lips, followed by morphological operation, gives lip structure. Pattern analysis of lips using the unique histogram algorithm and subsequent comparison with different facial gesture icons gives facial gesture of human being in an image. This technique when tested on a huge database of human images under varying lightning conditions gave acceptable accuracy rates and was found fast enough to be used in real time video-stream","PeriodicalId":423912,"journal":{"name":"2010 Second International Conference on Machine Learning and Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130132357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
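The skin-color identification step described above can be sketched as a simple per-pixel threshold in the YCbCr space. This is a minimal illustration: the Cb/Cr ranges below are commonly cited skin thresholds, not the paper's own values, and the function names are hypothetical.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an HxWx3 RGB image (0-255 floats) to YCbCr (BT.601)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of likely skin pixels; thresholds are illustrative."""
    ycbcr = rgb_to_ycbcr(rgb.astype(np.float64))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb_range[0] <= cb) & (cb <= cb_range[1]) &
            (cr_range[0] <= cr) & (cr <= cr_range[1]))

# Toy 1x2 "image": one skin-like pixel, one pure-blue pixel.
img = np.array([[[220, 170, 140], [0, 0, 255]]], dtype=np.uint8)
print(skin_mask(img))   # skin pixel True, blue pixel False
```

The largest connected region of the mask would then be taken as the face candidate before lip segmentation.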
P. V. Rao, S. Madhusudana, Nachiketh S.S., K. Keerthi
This paper explores the application of artificial neural networks to image compression. An image compression algorithm based on a Back Propagation (BP) network is developed after image pre-processing, and the influence of different transfer functions and compression ratios within the scheme is investigated. Several experiments demonstrate that the peak signal-to-noise ratio (PSNR) remains almost constant across compression ratios while the mean square error (MSE) varies.
{"title":"Image Compression using Artificial Neural Networks","authors":"P. V. Rao, S. Madhusudana, Nachiketh S.S., K. Keerthi","doi":"10.1109/ICMLC.2010.33","DOIUrl":"https://doi.org/10.1109/ICMLC.2010.33","url":null,"abstract":"This paper explores the application of artificial neural networks to image compression. An image compressing algorithm based on Back Propagation (BP) network is developed after image pre-processing. By implementing the proposed scheme the influence of different transfer functions and compression ratios within the scheme is investigated. It has been demonstrated through several experiments that peak-signal-to-noise ratio (PSNR) almost remains same for all compression ratios while mean square error (MSE) varies.","PeriodicalId":423912,"journal":{"name":"2010 Second International Conference on Machine Learning and Computing","volume":"134 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131004452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
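The two quality metrics compared above have standard definitions, sketched here for 8-bit images (peak value 255); this is the textbook formulation, not code from the paper.

```python
import numpy as np

def mse(original, reconstructed):
    """Mean square error between two images of the same shape."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB, assuming 8-bit pixels by default."""
    err = mse(original, reconstructed)
    if err == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(peak ** 2 / err)

a = np.full((8, 8), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110                        # one pixel off by 10 -> MSE = 100/64
print(mse(a, b), psnr(a, b))
```

Because PSNR is a logarithmic function of MSE, a roughly constant PSNR across compression ratios implies the MSE varies only within a narrow band on the original pixel scale.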
S. K, Vidya Yeri, Arjun A.V., Venugopal K.R., L. Patnaik
Wireless Sensor Networks (WSNs) consist of networked sensor nodes deployed in unattended areas, where security is of prime importance; it is considered one of the vital issues in sensor networking. This work proposes an efficient scheme that deals with the availability of a mobile node and introduces a session time within which the distributed secret key remains valid. Analysis and simulation results show that the proposed scheme outperforms the existing one.
{"title":"Adaptive Mobility and Availability of a Mobile Node for Efficient Secret Key Distribution inWireless Sensor Networks","authors":"S. K, Vidya Yeri, Arjun A.V., Venugopal K.R., L. Patnaik","doi":"10.1109/ICMLC.2010.30","DOIUrl":"https://doi.org/10.1109/ICMLC.2010.30","url":null,"abstract":"Wireless Sensor Networks(WSN) consists of sensor nodes that are networked and are deployed in unattended areas where security is very important. Security is considered as one of the vital issues in the area of sensor networks. This work proposes an efficient scheme that deals with the availability of a mobile node and introduces a session time within which the secured secret key remains valid. The analysis and simulation results showed that the performance of the proposed scheme is better than the existing scheme.","PeriodicalId":423912,"journal":{"name":"2010 Second International Conference on Machine Learning and Computing","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131557957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
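The session-time idea can be illustrated with a tiny sketch: a key carries its issue time and a validity window, and nodes reject it outside that window. The structure and names below are illustrative, not taken from the paper's protocol.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SessionKey:
    key: bytes           # the distributed secret key material
    issued_at: float     # issue time in seconds (node-local clock)
    session_time: float  # validity window in seconds

    def is_valid(self, now: float) -> bool:
        """Key is accepted only inside [issued_at, issued_at + session_time)."""
        return self.issued_at <= now < self.issued_at + self.session_time

k = SessionKey(key=b"\x01\x02", issued_at=1000.0, session_time=60.0)
print(k.is_valid(1030.0), k.is_valid(1061.0))   # inside window, then expired
```

Expiring keys this way bounds the damage from a compromised key to a single session, at the cost of periodic redistribution by the mobile node.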
This paper aims to establish a semantic model, the central component of our Vietnamese Language Query Processing (VLQP) framework. The VLQP framework has a two-tier architecture: a restricted parser that analyzes users' Vietnamese queries against a class of pre-defined syntactic rules, and a transformer that maps the syntactic structure of a query to its semantic representation. The semantic model is the framework's original contribution; it supports the syntactic analysis and representation of Vietnamese query forms related to the application domain. We also propose rules for transforming syntactic structures into their semantic representations.
{"title":"A Semantic Model for Building the Vietnamese Language Query Processing Framework in e-Library Searching Application","authors":"Dang Tuan Nguyen, Tuan Ngoc Pham, Quoc Tan Phan","doi":"10.1109/ICMLC.2010.17","DOIUrl":"https://doi.org/10.1109/ICMLC.2010.17","url":null,"abstract":"This paper aims to establish a semantic model which is the most important and central component of our Vietnamese Language Query Processing (VLQP) framework. The VLQP framework is architecture of 2-tiers. This framework includes a restricted parser for analyzing Vietnamese query from users based on a class of the pre-defined syntactic rules and a transformer for transforming syntactic structure of query to its semantic representation. In this framework, the semantic model is an original feature we have addressed. This semantic model contributes to the syntax analysis and representation of Vietnamese query forms involving to application domain. We also propose transforming rules to transform syntactic structures to their semantic representation.","PeriodicalId":423912,"journal":{"name":"2010 Second International Conference on Machine Learning and Computing","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128075053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The discovery of knowledge from medical databases is important for making effective medical diagnoses. The aim of data mining is to extract information from a database and generate a clear, understandable description of patterns. In this study we introduce a new approach to generating association rules on numeric data. We propose a modified equal-width binning approach to discretizing continuous-valued attributes, where the approximate width of the desired intervals is chosen on the advice of a medical expert and provided as an input parameter to the model. First, numeric attributes are converted into categorical form using this technique. The Apriori algorithm, usually applied to market basket analysis, was then used to generate rules on the Pima Indian diabetes data set, taken from the UCI machine learning repository and containing 768 instances with 8 numeric attributes. We find that the often-neglected pre-processing steps in knowledge discovery are the most critical elements in determining the success of a data mining application. Finally, we generate association rules that identify general associations in the data and clarify the relationship between the measured fields and whether a patient goes on to develop diabetes. We present a step-by-step approach to help doctors explore their data and better understand the discovered rules.
{"title":"Association Rule for Classification of Type-2 Diabetic Patients","authors":"B. Patil, R. C. Joshi, Durga Toshniwal","doi":"10.1109/ICMLC.2010.67","DOIUrl":"https://doi.org/10.1109/ICMLC.2010.67","url":null,"abstract":"The discovery of knowledge from medical databases is important in order to make effective medical diagnosis. The aim of data mining is extract the information from database and generate clear and understandable description of patterns. In this study we have introduced a new approach to generate association rules on numeric data. We propose a modified equal width binning interval approach to discretizing continuous valued attributes. The approximate width of the desired intervals is chosen based on the opinion of medical expert and is provided as an input parameter to the model. First we have converted numeric attributes into categorical form based on above techniques. Apriori algorithm is usually used for the market basket analysis was used to generate rules on Pima Indian diabetes data. The data set was taken from UCI machine learning repository containing total instances 768 and 8 numeric attributes.We discover that the often neglected pre-processing steps in knowledge discovery are the most critical elements in determining the success of a data mining application. Lastly we have generated the association rules which are useful to identify general associations in the data, to understand the relationship between the measured fields whether the patient goes on to develop diabetes or not. We are presented step-by-step approach to help the health doctors to explore their data and to understand the discovered rules better.","PeriodicalId":423912,"journal":{"name":"2010 Second International Conference on Machine Learning and Computing","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131631927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
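The modified equal-width binning step can be sketched as follows: the expert supplies an approximate interval width, and the attribute range is split into round(range / width) equal bins. This is an assumed reading of the adjustment rule; the paper's exact procedure may differ.

```python
import numpy as np

def equal_width_bins(values, desired_width):
    """Discretize a numeric attribute into equal-width bins whose count
    is chosen so each bin is approximately `desired_width` wide."""
    lo, hi = float(np.min(values)), float(np.max(values))
    n_bins = max(1, int(round((hi - lo) / desired_width)))
    edges = np.linspace(lo, hi, n_bins + 1)
    # Interior edges only: np.digitize then yields 0-based bin labels.
    labels = np.digitize(values, edges[1:-1])
    return labels, edges

# Hypothetical glucose readings; expert suggests ~50-unit intervals.
glucose = np.array([56.0, 90.0, 120.0, 155.0, 199.0])
labels, edges = equal_width_bins(glucose, desired_width=50.0)
print(labels, edges)
```

The resulting categorical labels can then be fed to Apriori alongside the other discretized attributes.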
Linear Support Vector Machines (SVMs) have been used successfully to classify text documents into sets of concepts. With the increasing number of linear SVM formulations and decomposition algorithms publicly available, this paper studies their efficiency and efficacy for text categorization tasks. Eight publicly available implementations are investigated in terms of break-even point (BEP), F1 measure, ROC plots, learning speed, and sensitivity to the penalty parameter, based on experimental results on two benchmark text corpora. The results show that, of the eight implementations, SVMlin and Proximal SVM perform better in terms of consistent performance and reduced training time. Being an extremely simple algorithm whose training time is independent of both the penalty parameter and the category being trained, Proximal SVM is particularly appealing. We further investigated fuzzy proximal SVM on both corpora; it showed improved generalization over proximal SVM.
{"title":"An Investigation on Linear SVM and its Variants for Text Categorization","authors":"M. A. Kumar, M. Gopal","doi":"10.1109/ICMLC.2010.64","DOIUrl":"https://doi.org/10.1109/ICMLC.2010.64","url":null,"abstract":"Linear Support Vector Machines (SVMs) have been used successfully to classify text documents into set of concepts. With the increasing number of linear SVM formulations and decomposition algorithms publicly available, this paper performs a study on their efficiency and efficacy for text categorization tasks. Eight publicly available implementations are investigated in terms of Break Even Point (BEP), F1 measure, ROC plots, learning speed and sensitivity to penalty parameter, based on the experimental results on two benchmark text corpuses. The results show that out of the eight implementations, SVMlin and Proximal SVM perform better in terms of consistent performance and reduced training time. However being an extremely simple algorithm with training time independent of the penalty parameter and the category for which training is being done, Proximal SVM is appealing. We further investigated fuzzy proximal SVM on both the text corpuses; it showed improved generalization over proximal SVM.","PeriodicalId":423912,"journal":{"name":"2010 Second International Conference on Machine Learning and Computing","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134143166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
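The evaluation measures named above are standard: F1 is the harmonic mean of precision and recall, and the break-even point (BEP) is where precision equals recall along the ranked-output curve, at which point F1 coincides with both. A minimal sketch from confusion-matrix counts:

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from true/false positive and false negative counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return p, r

def f1(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    p, r = precision_recall(tp, fp, fn)
    return 2 * p * r / (p + r) if p + r else 0.0

# With fp == fn, precision equals recall: a break-even point.
print(f1(tp=80, fp=20, fn=20))
```

BEP in practice is read off by sweeping the classifier's decision threshold until the two quantities cross.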
Traditionally, researchers compare the performance of new machine learning algorithms against benchmarks from locally executed simulations. This process requires considerable time, computational resources, and expertise. In this paper, we present a method to quickly evaluate the performance feasibility of new algorithms, offering a preliminary study that either supports or opposes the need for a full-scale traditional evaluation and can save researchers valuable resources. The proposed method uses performance benchmarks obtained from results reported in the literature rather than from local simulations. Furthermore, an alternative statistical technique is suggested for comparative performance analysis, since traditional statistical significance tests do not fit the problem well. We illustrate the proposed evaluation method in a study comparing a new algorithm against 47 other algorithms across 46 datasets.
{"title":"Fast Preliminary Evaluation of New Machine Learning Algorithms for Feasibility","authors":"Dustin Baumgartner, G. Serpen","doi":"10.1109/ICMLC.2010.31","DOIUrl":"https://doi.org/10.1109/ICMLC.2010.31","url":null,"abstract":"Traditionally, researchers compare the performance of new machine learning algorithms against those of locally executed simulations that serve as benchmarks. This process requires considerable time, computation resources, and expertise. In this paper, we present a method to quickly evaluate the performance feasibility of new algorithms – offering a preliminary study that either supports or opposes the need to conduct a full-scale traditional evaluation, and possibly saving valuable resources for researchers. The proposed method uses performance benchmarks obtained from results reported in the literature rather than local simulations. Furthermore, an alternate statistical technique is suggested for comparative performance analysis, since traditional statistical significance tests do not fit the problem well. We highlight the use of the proposed evaluation method in a study that compared a new algorithm against 47 other algorithms across 46 datasets.","PeriodicalId":423912,"journal":{"name":"2010 Second International Conference on Machine Learning and Computing","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132585256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
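One simple non-parametric way to compare a new algorithm against published benchmark scores, in the spirit of avoiding ill-fitting significance tests, is a per-dataset win/tie/loss tally. This is an illustrative sign-test-style count, not the paper's exact technique.

```python
def win_tie_loss(new_scores, benchmark_scores, tol=1e-9):
    """Count datasets where the new algorithm beats, ties, or loses to
    the benchmark score reported in the literature."""
    wins = ties = losses = 0
    for a, b in zip(new_scores, benchmark_scores):
        if abs(a - b) <= tol:
            ties += 1
        elif a > b:
            wins += 1
        else:
            losses += 1
    return wins, ties, losses

new = [0.91, 0.85, 0.78, 0.88]          # hypothetical accuracies
bench = [0.90, 0.85, 0.80, 0.84]        # accuracies reported in the literature
print(win_tie_loss(new, bench))
```

A lopsided tally in either direction is what would support or oppose proceeding to a full-scale local evaluation.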
In this paper we discuss various machine learning approaches used in data mining and distinguish between symbolic and sub-symbolic data mining methods. We also propose a hybrid method that combines Artificial Neural Networks (ANN) and Case-Based Reasoning (CBR) for data mining.
{"title":"Hybrid Machine Learning Approach in Data Mining","authors":"Jyothi Bellary, Bhargavi Peyakunta, Sekhar Konetigari","doi":"10.1109/ICMLC.2010.57","DOIUrl":"https://doi.org/10.1109/ICMLC.2010.57","url":null,"abstract":"In this paper we discuss various machine learning approaches used in mining of data. Further we distinguish between symbolic and sub-symbolic data mining methods. We also attempt to propose a hybrid method with the combination of Artificial Neural Network (ANN) and Cased Based Reasoning (CBR) in mining of data.","PeriodicalId":423912,"journal":{"name":"2010 Second International Conference on Machine Learning and Computing","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115307709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we modify the Split Bregman algorithm for color image restoration with an edge-preserving color total variation model. The observed blurred images are assumed to be degraded by both within-channel and cross-channel blurs. Our algorithm is based on the Split Bregman process and requires only a Fast Fourier Transform in each iteration. Experimental comparisons using various types of blurs show that the proposed method significantly outperforms existing methods, such as the variable-splitting alternating-minimization algorithm and the one adopted by the MATLAB deblurring function, in terms of both objective signal-to-noise ratio and subjective visual quality, demonstrating its efficiency.
{"title":"Color Image Restoration Based on Split Bregman Iteration Algorithm","authors":"Yi Li-ya, Xiaolei Lu, Furong Wang","doi":"10.1109/ICMLC.2010.22","DOIUrl":"https://doi.org/10.1109/ICMLC.2010.22","url":null,"abstract":"In this paper, we modify the Split Bregman algorithm for color image restoration with the edge-preserving color image total variation model. The observed blurred images are assumed to be degraded by within channel and cross channel blurs. Our proposed algorithm is based on the Split Bregman process and simply requires Fast Fourier Transform in each iteration. Experimental comparisons using various types of blurs are reported, and the results show that, the proposed method significantly outperforms existing methods, such as the variable splitting alternative minimization algorithm and that adopted by MATLAB deblurring function, in terms of both objective signal to noise ratio and subjective vision quality. This demonstrates the efficiency of our proposed algorithms.","PeriodicalId":423912,"journal":{"name":"2010 Second International Conference on Machine Learning and Computing","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125439309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
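The reason each Split Bregman iteration needs only an FFT is that, under periodic boundary conditions, the blur operator is diagonalized by the Fourier transform. As a much-simplified single-channel illustration of that frequency-domain solve (a Tikhonov-regularized inverse filter, not the full edge-preserving TV model), one can write:

```python
import numpy as np

def fft_deblur(blurred, kernel, lam=1e-3):
    """Frequency-domain solve: F = conj(H) * G / (|H|^2 + lam).
    `lam` regularizes frequencies where the blur nearly vanishes."""
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(F))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
kernel = np.ones((3, 3)) / 9.0                  # simple box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                               np.fft.fft2(kernel, s=img.shape)))
restored = fft_deblur(blurred, kernel)
print(np.mean((restored - img) ** 2) < np.mean((blurred - img) ** 2))
```

In the actual Split Bregman scheme this closed-form solve is repeated each iteration with updated auxiliary and Bregman variables, which is what makes the per-iteration cost a single FFT pair.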
Ning Chen, B. Ribeiro, Armando Vieira, João M. M. Duarte, J. C. Neves
Cost-sensitive classification algorithms that enable effective prediction when misclassification costs differ widely are crucial to creditors and auditors in credit risk analysis. Learning vector quantization (LVQ) is a powerful tool for treating bankruptcy prediction as a classification task, and genetic algorithms (GAs) are widely applied in conjunction with artificial-intelligence methods; their hybridization with existing classification algorithms is well established in the bankruptcy prediction field. In this paper, a hybrid GA and LVQ approach is proposed to minimize the expected misclassification cost under an asymmetric cost preference. Experiments on real-life French private-company data show that the proposed approach improves predictive performance in the asymmetric-cost setup.
{"title":"Hybrid Genetic Algorithm and Learning Vector Quantization Modeling for Cost-Sensitive Bankruptcy Prediction","authors":"Ning Chen, B. Ribeiro, Armando Vieira, João M. M. Duarte, J. C. Neves","doi":"10.1109/ICMLC.2010.29","DOIUrl":"https://doi.org/10.1109/ICMLC.2010.29","url":null,"abstract":"Cost-sensitive classification algorithms that enable effective prediction, where the costs of misclassification can be very different, are crucial to creditors and auditors in credit risk analysis. Learning vector quantization (LVQ) is a powerful tool to solve bankruptcy prediction problem as a classification task. The genetic algorithm (GA) is applied widely in conjunction with artificial intelligent methods. The hybridization of genetic algorithm with existing classification algorithms is well illustrated in the field of bankruptcy prediction. In this paper, a hybrid GA and LVQ approach is proposed to minimize the expected misclassified cost under the asymmetric cost preference. Experiments on real-life French private company data show the proposed approach helps to improve the predictive performance in asymmetric cost setup.","PeriodicalId":423912,"journal":{"name":"2010 Second International Conference on Machine Learning and Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128253446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
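The LVQ component can be sketched with the classic LVQ1 update rule: the winning prototype moves toward a correctly classified sample and away from a misclassified one. The per-class learning-rate scaling below is one illustrative way to encode asymmetric misclassification costs, not the paper's GA-tuned scheme.

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, x, y, lr=0.1, class_cost=None):
    """One LVQ1 update. `class_cost` optionally scales the step for
    costly classes (an assumed cost-sensitivity mechanism)."""
    d = np.linalg.norm(prototypes - x, axis=1)
    w = int(np.argmin(d))                       # winning prototype
    scale = 1.0 if class_cost is None else class_cost[y]
    if proto_labels[w] == y:
        prototypes[w] += lr * scale * (x - prototypes[w])   # attract
    else:
        prototypes[w] -= lr * scale * (x - prototypes[w])   # repel
    return prototypes

protos = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 1])
# Class 1 (say, "bankrupt") is costlier to miss, so its pull is stronger.
protos = lvq1_step(protos, labels, x=np.array([0.9, 0.8]), y=1,
                   lr=0.1, class_cost={0: 1.0, 1: 5.0})
print(protos[1])
```

In the hybrid scheme, the GA would search over such hyperparameters (prototype placement, rates, cost weights) to minimize the expected misclassification cost directly.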