Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208292
K. I. Lakshmi, C. P. Chandran
The objective of this paper is to mine coregulated biclusters from gene expression data. Gene expression is the process by which a functional product is produced from the information in a gene. Data mining is used to find relevant and useful information in databases, and clustering groups genes according to given conditions. Biclustering algorithms form a distinct class of clustering algorithms that cluster the rows and columns of the gene expression matrix simultaneously. In this paper a new algorithm, the Enhanced Bimax algorithm, is proposed based on the Bimax algorithm [7]. A normalization technique is included, which is used to display the coregulated biclusters from the gene expression data and to group the genes in a particular order. In this work, a synthetic dataset is used to display the coregulated genes.
Title: Mining coregulated biclusters from gene expression data (International Conference on Pattern Recognition, Informatics and Medical Engineering, PRIME-2012)
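Bimax operates on a binarized expression matrix, where a 1 means a gene responds under a condition, and searches for inclusion-maximal all-ones submatrices. The Enhanced Bimax algorithm and its normalization step are not specified in the abstract, so the following is only a minimal Python sketch of the underlying idea: per-gene min-max normalization, thresholding, and extraction of the rows that form an all-ones bicluster with a chosen column set. The data and threshold are illustrative.

```python
import numpy as np

def binarize(expr, threshold):
    # Normalize each gene (row) to [0, 1], then binarize:
    # 1 = the gene responds under the condition, 0 = no response.
    # Assumes each gene varies across conditions (max > min).
    mn = expr.min(axis=1, keepdims=True)
    mx = expr.max(axis=1, keepdims=True)
    norm = (expr - mn) / (mx - mn)
    return (norm >= threshold).astype(int)

def bicluster(binary, cols):
    # Rows that are 1 in every chosen column form an
    # all-ones bicluster together with those columns.
    rows = np.where(binary[:, cols].all(axis=1))[0]
    return rows, cols

# Toy expression matrix: genes 0 and 1 are coregulated
# under conditions 0 and 1; gene 2 responds only under condition 2.
expr = np.array([[8.0, 9.0, 1.0],
                 [7.5, 8.5, 0.5],
                 [1.0, 2.0, 9.0]])
b = binarize(expr, 0.5)
rows, cols = bicluster(b, [0, 1])
```

Running the sketch groups genes 0 and 1 into one bicluster over conditions 0 and 1, which is the kind of coregulated pattern the paper aims to display.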
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208372
K. Elangovan
Orthogonal Frequency Division Multiplexing (OFDM) has recently been applied in wireless communication systems due to its high data rate transmission capability, high bandwidth efficiency and robustness to multi-path delay. Fading is one of the major impairments that must be considered at the receiver. To cancel the effect of fading, channel estimation and equalization must be performed at the receiver before data demodulation. This paper compares various channel estimation algorithms for OFDM systems with respect to their complexity, advantages and capacity enhancement. Three prediction algorithms are used in the equalizer to estimate the channel response: the Least Mean Square (LMS), Normalized Least Mean Square (NLMS) and Recursive Least Square (RLS) algorithms. These three algorithms are considered in this work and their performances are statistically compared using MATLAB software.
Title: Comparative study on the channel estimation for OFDM system using LMS, NLMS and RLS algorithms
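The three equalizer algorithms are standard adaptive filters; the paper runs its comparison in MATLAB, but the update rules can be sketched in Python. Below are the LMS and NLMS recursions identifying a hypothetical noise-free 2-tap channel (RLS follows the same pattern with an inverse-correlation-matrix update in place of the scalar step). The channel taps, step sizes and signal lengths are illustrative.

```python
import numpy as np

def lms(x, d, taps, mu):
    # Least Mean Square: w <- w + mu * e * x_vec, with a priori
    # error e = d[n] - w . x_vec. Fixed step size mu.
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        x_vec = x[n - taps + 1:n + 1][::-1]   # [x[n], x[n-1], ...]
        e = d[n] - w @ x_vec
        w = w + mu * e * x_vec
    return w

def nlms(x, d, taps, mu, eps=1e-8):
    # Normalized LMS: the step size is divided by the instantaneous
    # input power, making convergence insensitive to signal scale.
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        x_vec = x[n - taps + 1:n + 1][::-1]
        e = d[n] - w @ x_vec
        w = w + mu * e * x_vec / (eps + x_vec @ x_vec)
    return w

rng = np.random.default_rng(0)
h = np.array([0.8, 0.3])                # assumed 2-tap channel
x = rng.standard_normal(2000)           # training signal
d = np.convolve(x, h)[:len(x)]          # channel output (noise-free)
w_lms = lms(x, d, taps=2, mu=0.01)
w_nlms = nlms(x, d, taps=2, mu=0.5)
```

Both estimates converge to the channel taps; NLMS typically converges in fewer iterations, while RLS (not shown) converges fastest at the highest per-sample cost, which is the trade-off the paper evaluates.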
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208287
D. A. Kumar, M. C. Loraine Charlet Annie
Clustering is the process of grouping similar items. Clustering becomes very tedious as data dimensionality and sparsity increase. Binary data are the simplest form of data used in information systems for very large databases, and they are efficient in terms of computational cost and memory capacity for representing categorical data. Usually, binary data clustering is performed by treating 0 and 1 as numerical values. In this paper, binary data clustering is performed by preprocessing the binary data into real values using the Wiener transformation. The Wiener transformation is a linear transformation based on statistics, and it is optimal in terms of mean square error. Computational results show that clustering based on the Wiener transformation is very efficient in terms of objectivity and subjectivity.
Title: Binary data clustering based on Wiener transformation
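The paper's exact Wiener transformation is not reproduced in the abstract; the sketch below assumes a local-statistics Wiener filter of the kind implemented by scipy.signal.wiener, applied row-wise to turn binary vectors into real-valued ones, followed by a plain two-means clustering. The toy matrix and window size are illustrative.

```python
import numpy as np

def wiener_smooth(row, win=3):
    # Local-statistics Wiener filter:
    #   y = mean + max(var - noise, 0) / var * (x - mean),
    # with the noise power estimated as the mean local variance.
    # Maps the 0/1 row to smoothed real values.
    pad = win // 2
    x = np.pad(row.astype(float), pad, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(x, win)
    mean = windows.mean(axis=1)
    var = windows.var(axis=1)
    noise = var.mean()
    gain = np.where(var < noise, 0.0, (var - noise) / np.maximum(var, 1e-12))
    return mean + gain * (row - mean)

def kmeans2(X, iters=10):
    # Plain two-means on the transformed rows, seeded with the
    # first and last rows.
    c = X[[0, -1]].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - c[None]) ** 2).sum(-1), axis=1)
        for k in range(2):
            if (labels == k).any():
                c[k] = X[labels == k].mean(axis=0)
    return labels

# Toy binary data: rows 0-1 and rows 2-3 share a pattern.
B = np.array([[1, 1, 1, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [0, 0, 0, 1, 1, 1],
              [0, 0, 1, 1, 1, 1]])
X = np.vstack([wiener_smooth(r) for r in B])
labels = kmeans2(X)
```

The transformed rows keep the block structure of the binary data while giving the clustering algorithm graded real values to work with, which is the preprocessing idea the paper builds on.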
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208356
S. Somaskandan, S. Mahesan
Tumor segmentation from medical image data is a challenging task due to the high diversity in appearance of tumor tissue among different cases. In this paper we propose a new level set based deformable model to segment the tumor region. We use gradient information as well as regional data analysis to deform the level set. At every iteration step of the deformation, we estimate new velocity forces according to the statistical measures of the identified tumor voxels and the healthy tissue information. This method provides a way to segment objects even when there are weak edges and gaps. Moreover, the deforming contours expand or shrink as necessary so as not to miss the weak edges. Experiments are carried out on real datasets with different tumor shapes, sizes, locations, and internal textures. Our results indicate that the proposed method gives promising results on high resolution medical data as well as low resolution images, to the high satisfaction of the oncologist at the Cancer Treatment Unit at Jaffna Teaching Hospital.
Title: A level set based deformable model for segmenting tumors in medical images
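A minimal level set evolution can be written as the explicit update φ ← φ − Δt·F·|∇φ|, where the speed F is derived from regional statistics and changes sign outside the object, so the front stops even where edges are weak or missing. The Python sketch below evolves a small seed contour outward over a synthetic bright disc; the sign-based speed is a simplification of the paper's statistically estimated velocity forces, and all sizes are illustrative.

```python
import numpy as np

def evolve(phi, speed, dt=0.2, steps=50):
    # Explicit level set update: phi <- phi - dt * F * |grad phi|.
    # F > 0 expands the contour (the phi < 0 region); F < 0 pushes
    # it back, so the front halts at the regional boundary even
    # where the gradient (edge) information is weak.
    for _ in range(steps):
        gy, gx = np.gradient(phi)
        phi = phi - dt * speed * np.hypot(gx, gy)
    return phi

# Toy image: a bright "tumor" disc of radius 10 on a dark background.
n = 40
yy, xx = np.mgrid[:n, :n]
img = ((xx - 20) ** 2 + (yy - 20) ** 2 < 100).astype(float)

# Regional speed: advance while the voxel matches the bright region,
# retreat in the dark background (stand-in for the statistical forces).
speed = np.where(img > 0.5, 1.0, -1.0)

# Initialize phi as the signed distance to a small seed circle.
phi = np.hypot(xx - 20, yy - 20) - 3.0
out = evolve(phi, speed)
inside_before = int((phi < 0).sum())
inside_after = int((out < 0).sum())
```

The seed region grows to fill the disc and stops at its boundary; with the opposite speed signs the same update would shrink an oversized contour, matching the expand-or-shrink behaviour described above.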
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208352
J. Yogapriya, I. Vennila
This paper focuses on medical image retrieval based on feature extraction, classification and similarity measurement, which aids computer-assisted diagnosis. The selected features are shape (Generic Fourier Descriptor, GFD) and texture (Gabor Filter, GF); they are extracted and classified into positive and negative features using a classification technique called the Relevance Vector Machine (RVM), which provides a natural way to classify multiple features of images. A similarity model is used to measure the relevance between the query image and the target images based on Euclidean Distance (ED). This medical image retrieval framework is called GGRE. The retrieval performance is evaluated in terms of precision and recall. The results show that the multiple-feature classifier system yields better retrieval performance than retrieval systems based on the individual features.
Title: Medical image retrieval system using GGRE framework
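The ranking and evaluation stages can be sketched compactly: rank database images by Euclidean distance to the query's feature vector, then score the top-k result set with precision and recall against ground-truth relevance. The feature vectors and relevance judgments below are illustrative, and the GFD/Gabor extraction and RVM classification stages are omitted.

```python
import numpy as np

def retrieve(query, feats, k):
    # Rank database images by Euclidean distance to the query features.
    d = np.linalg.norm(feats - query, axis=1)
    return list(np.argsort(d)[:k])

def precision_recall(retrieved, relevant):
    # precision = hits / retrieved, recall = hits / relevant.
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

# Hypothetical 2-D feature vectors for five database images.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [0.9, 1.0], [1.0, 1.0], [0.2, 0.1]])
relevant = [0, 1, 4]                      # assumed ground-truth matches
top = retrieve(np.array([0.05, 0.05]), feats, k=3)
p, r = precision_recall(top, relevant)
```

In this toy case the three nearest neighbours are exactly the relevant images, giving precision and recall of 1.0; on real data the two metrics trade off as k varies, which is what the paper's evaluation reports.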
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208337
K. Sudheesh, C. Patil
A very important step in an automatic fingerprint recognition system is to automatically and reliably extract minutiae from the fingerprint images. Minutiae-based fingerprint recognition algorithms have been widely accepted as a standard for single-fingerprint recognition applications. However, the performance of a minutiae extraction algorithm relies heavily on the quality of the input fingerprint images and on the image processing techniques used to enhance the ridges and valleys of the fingerprint image and make the minutiae distinguishable from the other parts of the image. This paper focuses on how minutiae points are extracted from fingerprint images; the proposed method is to design and implement current techniques for fingerprint recognition, specifically public key cryptography (the RSA algorithm), to improve the performance of the system. The design is decomposed into image preprocessing, feature extraction, application of the RSA algorithm to the extracted features, and finally matching.
Title: An approach of cryptographic for estimating the impact of fingerprint for biometric
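The abstract does not give the exact way RSA is applied to the extracted features, so the sketch below only illustrates the textbook scheme on a fingerprint template: hash the extracted minutiae, then encrypt and decrypt with modular exponentiation. The tiny primes and the sample minutiae triples are illustrative assumptions; a real deployment would use 2048-bit keys with proper padding.

```python
import hashlib

# Toy RSA keypair (illustration only): n = p*q, e*d = 1 mod (p-1)(q-1).
p, q = 61, 53
n = p * q          # 3233
e = 17             # public exponent
d = 2753           # private exponent: modular inverse of e mod 3120

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

# A minutiae template as (x, y, angle) triples -- hypothetical sample data.
minutiae = [(12, 40, 90), (33, 7, 45), (25, 25, 180)]
digest = hashlib.sha256(str(minutiae).encode()).digest()

# Toy demonstration: protect one byte of the template digest.
m = digest[0] % n
c = encrypt(m)
```

Decrypting `c` recovers `m`, so only the holder of the private key can verify or release the protected template data, which is the security property the paper leverages for the matching stage.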
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208280
J. Beevi, N. Deivasigamani
A knowledge base is a special kind of database used for the storage and retrieval of knowledge. From the perspective of knowledge creators, the creation and maintenance of a knowledge base is a crucial activity in the life cycle of knowledge management. This paper presents a novel approach to the creation of a knowledge base. The main focus of our approach is to extract knowledge from unstructured web documents and build a knowledge base from it. Preprocessing techniques such as tokenizing and stemming are performed on the unstructured input web documents, and similarity and redundancy computations are performed to remove duplicate knowledge. The extracted knowledge is organized and converted to XML documents, XCLS clustering is applied to the XML documents, and a knowledge base is designed to store them. A query interface has been developed to search and retrieve the knowledge. To test the usefulness and ease of use of our prototype, we used the Technology Acceptance Model (TAM) to evaluate the system. The results are promising.
Title: A new approach to the design of knowledge base using XCLS clustering
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208338
M. Geetha, G. Nawaz
Route selection is essential in everyday life, and several algorithms exist for finding efficient routes on large road networks. This paper presents a hierarchical community approach that splits large road networks into a hierarchical structure. It introduces a multi-parameter route selection system that employs Fuzzy Logic (FL) and applies the natural behavior of ants to dynamic routing. The importance rates of parameters such as path length and traffic are adjustable by the user. The proposed hierarchical routing algorithm significantly reduces the search space. We develop a community-based hierarchical graph model that supports dynamic, efficient route computation on large road networks.
Title: Fuzzy-ant based dynamic routing on large road networks
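One way to read the multi-parameter route selection is as a shortest-path search over a combined edge cost, with user-adjustable weights for path length and traffic standing in for the fuzzy importance rates; the pheromone dynamics and the hierarchical decomposition are omitted here. A Dijkstra-based Python sketch over a hypothetical road graph:

```python
import heapq

def route(graph, src, dst, w_len=0.5, w_traffic=0.5):
    # Dijkstra over the combined cost w_len*length + w_traffic*traffic.
    # The weights play the role of the user-adjustable importance rates.
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, length, traffic in graph.get(u, []):
            nd = d + w_len * length + w_traffic * traffic
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# Hypothetical road graph: node -> [(neighbor, length, traffic level)].
graph = {"A": [("B", 2, 9), ("C", 3, 1)],
         "B": [("D", 2, 9)],
         "C": [("D", 3, 1)],
         "D": []}
fast, _ = route(graph, "A", "D", w_len=1.0, w_traffic=0.0)  # shortest
calm, _ = route(graph, "A", "D", w_len=0.0, w_traffic=1.0)  # least traffic
```

Shifting the weights flips the chosen route from the short but congested A-B-D to the longer, quiet A-C-D, which is exactly the user-tunable behavior described above.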
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208388
M. Sundaresan, S. Ranjini
Text extraction from images is one of the complicated areas in digital image processing. Detecting and recognizing text in comic images is a complex process due to varying sizes, gray scale values, complex backgrounds and different font styles. Text extraction from comic images helps to preserve the text and its formatting during the conversion process and provides high-quality text from the printed document. This paper discusses English text extraction from blobs in comic images using various methods.
Title: Text extraction from digital English comic image using two blobs extraction method
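Blob extraction on a binarized comic panel amounts to connected-component labelling: each connected group of ink pixels is one blob, a candidate text region for later recognition. The sketch below uses an iterative 4-connected flood fill and returns one bounding box per blob; the toy image is illustrative.

```python
def blobs(img):
    # 4-connected component labelling by iterative flood fill.
    # Returns one bounding box (top, left, bottom, right) per blob.
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] and not seen[sy][sx]:
                stack = [(sy, sx)]
                seen[sy][sx] = True
                ys, xs = [], []
                while stack:
                    y, x = stack.pop()
                    ys.append(y)
                    xs.append(x)
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append((min(ys), min(xs), max(ys), max(xs)))
    return boxes

# Two "text" blobs on a binarized comic panel (1 = ink, 0 = background).
img = [[1, 1, 0, 0, 1],
       [1, 1, 0, 0, 1],
       [0, 0, 0, 0, 0]]
found = blobs(img)
```

Each bounding box can then be cropped and passed to a recognizer; filtering boxes by size and aspect ratio is a common way to separate text blobs from artwork.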
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208284
S. P. Algur, N. T. Pendari
Web spamming refers to actions intended to mislead search engines and give some pages a higher ranking than they deserve. Fundamentally, web spam is designed to pollute search engines and corrupt the user experience by driving traffic to particular spammed web pages, regardless of the merits of those pages. Recently, there has been a dramatic increase in the amount of web spam, leading to a degradation of search results. Most existing web spam detection methods are supervised and require a large set of training web pages. The proposed system studies the problem of unsupervised web spam detection. It introduces the notion of spamicity to measure how likely a page is to be spam; spamicity is a more flexible measure than traditional supervised classification provides. In the proposed system, link and content spam techniques are used to determine the spamicity score of a web page, and a threshold, set by empirical analysis, classifies the page as spam or non-spam.
Title: Hybrid spamicity score approach to web spam detection
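A hybrid spamicity score can be sketched as a weighted combination of a link score and a content score, compared against an empirically chosen threshold. The feature definitions, weights and threshold below are illustrative assumptions, not the paper's formulas.

```python
def spamicity(page, w_link=0.5, w_content=0.5):
    # Combine a link score and a content score into one value in [0, 1].
    # Link score: link farms tend to have huge out-degree.
    link = min(1.0, page["out_links"] / 200)
    # Content score: keyword stuffing makes one word dominate the page.
    words = page["words"]
    content = page["top_word_count"] / words if words else 0.0
    return w_link * link + w_content * content

THRESHOLD = 0.5   # set by empirical analysis in the paper; illustrative here

# Hypothetical page statistics.
spam_page = {"out_links": 400, "words": 100, "top_word_count": 80}
ham_page = {"out_links": 20, "words": 500, "top_word_count": 25}
s_spam = spamicity(spam_page)
s_ham = spamicity(ham_page)
```

Pages scoring above the threshold are flagged as spam; because the score is a continuous value rather than a hard class label, the threshold can be tuned without retraining, which is the flexibility the abstract attributes to spamicity.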