Pub Date : 2014-11-01DOI: 10.1109/IC3I.2014.7019722
Vikram Singh, S. Saha
Identification of clusters is a complex task, as the clusters found in data sets are of arbitrary shapes and sizes. The task becomes challenging because identifying all the clusters in a single data set requires different algorithms based on different distance measures. Symmetry is a commonly observed property of objects, and many of the clusters present in a data set can be identified using point-symmetry-based distances. Point-symmetry-based and Euclidean distance measures are each best at identifying clusters in particular cases, but not together. This article proposes a solution by analyzing and removing the shortcomings of both types of distance measure and then merging the improved versions into one to obtain the best of both. Introducing a differential-evolution-based optimization technique with dynamic parameter selection further enhances the quality of the results. In this paper the existing point-symmetry-based distance is modified and also enabled to correctly classify clusters based on Euclidean distance without a dynamic switch between the methods, which speeds up the computation of the proposed clustering technique. The efficiency of the algorithm is established by analyzing the results obtained on two diverse test data sets. To highlight the improvements achieved by the proposed algorithm, its results are compared with those of algorithms based purely on Euclidean distance, the new point-symmetry distance, and the proposed modified new point-symmetry-based distance.
{"title":"Modified differential evolution based 0/1 clustering for classification of data points: Using modified new point symmetry based distance and dynamically controlled parameters","authors":"Vikram Singh, S. Saha","doi":"10.1109/IC3I.2014.7019722","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019722","url":null,"abstract":"Identification of Clusters is a complex task as clusters found in the data sets are of arbitrary shapes and sizes. The task becomes challenging as identification of all the clusters from a single data set requires use of different types of algorithms based on different distance measures. Symmetry is a commonly used property of objects. Many of the clusters present in a data set can be identified using some point symmetry based distances. Point symmetry based and Euclidean distance measures are individually best in identifying clusters in some particular cases but not together. This article proposes a solution after analyzing and removing the shortcomings in both types of distance measures and then merging the improved versions into one to get the best of both of them. Introduction of differential evolution based optimization technique with dynamic parameter selection further enhances the quality of results. In this paper the existing point symmetry based distance is modified and is also enabled to correctly classify clusters based on Euclidean distance without making a dynamic switch between the methods. This helps the proposed clustering technique to give a speed up in computation process. The efficiency of the algorithm is established by analyzing the results obtained on 2 diversified test data sets. 
With the objective of highlighting the improvements achieved by our proposed algorithm, we compare its results with the results of algorithm based purely on Euclidean Distance, new point symmetry distance and the proposed modified new point symmetry based distance.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"77 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120864902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
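A minimal sketch of a generic point-symmetry-based distance of the kind this abstract builds on (this is not the authors' modified measure; the neighbour count `k` and the scaling by the Euclidean term are assumptions drawn from the common formulation):

```python
import numpy as np

def point_symmetry_distance(x, c, points, k=2):
    """Point-symmetry distance of point x with respect to candidate centre c.

    The reflection of x about c is x* = 2c - x.  The symmetry term is the
    mean distance from x* to its k nearest neighbours in the data set,
    scaled by the Euclidean distance between x and c: a point lying in a
    symmetric cluster about c gets a small value.
    """
    x, c = np.asarray(x, float), np.asarray(c, float)
    points = np.asarray(points, float)
    reflected = 2.0 * c - x                       # mirror image of x about c
    d = np.linalg.norm(points - reflected, axis=1)
    knear = np.sort(d)[:k]                        # k nearest neighbours of the reflection
    return knear.mean() * np.linalg.norm(x - c)   # symmetry term times Euclidean term
```

For a perfectly symmetric cluster the reflection coincides with an existing data point, so the nearest-neighbour term (and hence the distance) stays small even when the Euclidean distance to the centre is large.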
Pub Date : 2014-11-01DOI: 10.1109/IC3I.2014.7019680
Sudipta Dhar, A. Dasgupta
Near Field Communication (NFC) is an emerging short-range wireless communication technology built on existing standards of the Radio Frequency Identification (RFID) infrastructure. In this paper we give an overview of NFC technology and discuss its adoption worldwide. We then focus on current trends and applications of NFC technology in India. Both existing NFC applications and some conceivable future scenarios are analyzed in this connection. Furthermore, security concerns, difficulties, and present conflicts are also discussed.
{"title":"NFC technology: Current and future trends in India","authors":"Sudipta Dhar, A. Dasgupta","doi":"10.1109/IC3I.2014.7019680","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019680","url":null,"abstract":"Near Field Communication (NFC) is an emerging wireless short-range communication technology innovation that is based on existing standards of the Radio Frequency Identification (RFID) infrastructure. In this paper we give an overview of NFC technology and discuss its adaptation worldwide. We then focus on the current trends and application of NFC technology in India. Both existing NFC applications and some conceivable future situations are analyzed in this connection. Furthermore, security concerns, difficulties and present conflicts are also discussed.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123508857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-11-01DOI: 10.1109/IC3I.2014.7019770
A. B. Raju, Satish Annigeri
In all branches of engineering, computational work and simulation are now seen as the third vertex of a triangle, complementing observation and theory. This requires an engineering student to know computational concepts as well as a whole new language in which to express them. These are challenging tasks, and students may struggle with the finer details of the language. It is essential to make computing skills an integral part of engineering education rather than treating them as an add-on. This paper reviews current approaches to teaching computation skills to students of the core engineering branches. It identifies the need for teaching this skill, its components, and the available choices of programming language for teaching it. It suggests adopting Python as the preferred language for teaching computation by comparing its merits and demerits vis-à-vis the other available choices. It is imperative to completely rethink how engineering education approaches computation and to arrive at a holistic, integrated approach.
{"title":"Computing in engineering education: The current scenario","authors":"A. B. Raju, Satish Annigeri","doi":"10.1109/IC3I.2014.7019770","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019770","url":null,"abstract":"In all branches of engineering, computational work/simulation is currently seen as the third vertex of a triangle, complementing observation and theory. This requirement necessitates an engineering student to know computational concepts as well as a whole new language to express these concepts. These are challenging tasks and students might face difficulties in learning the finer details of the language. It is essential to make computing skill an integral part of engineering education and not treat it as an add on. This paper attempts a review of the current approaches to teaching computation skills to engineering students of core engineering branches. It identifies the need for teaching this skill, the components of this skill and the available choices of programming languages to teach this skill. It suggests the adoption of Python as the preferred language to teach computation by comparing its merits and demerits vis-a-vis the other available choices. It is imperative to do a complete rethinking on how engineering education approaches computation skill and arrive at a holistic and integrated approach.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125812295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-11-01DOI: 10.1109/IC3I.2014.7019644
A. Mustafa, Y. S. Kumaraswamy
Web services are software components that communicate using pervasive, standards-based Web technologies, including HTTP and XML-based messaging. They are designed to be accessed by other applications and vary in complexity from simple operations, such as checking a bank account balance online, to complex processes running Customer Relationship Management (CRM) or Enterprise Resource Planning (ERP) systems. Because they are based on open standards such as HTTP and XML-based protocols including SOAP and WSDL, Web services are hardware, programming-language, and operating-system independent. In this paper, Naïve Bayes, C4.5, and Random Forest are used as classifiers, and their efficiency for Web-service classification is evaluated.
{"title":"Data mining algorithms for Web-services classification","authors":"A. Mustafa, Y. S. Kumaraswamy","doi":"10.1109/IC3I.2014.7019644","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019644","url":null,"abstract":"Web services are software components that communicate using pervasive, standards-based Web technologies including HTTP and XML-based messaging. Web services are designed to be accessed by other applications and vary in complexity from simple operations, such as checking a banking account balance online, to complex processes running Customer Relationship Management (CRM) or Enterprise Resource Planning (ERP) systems. Since they are based on open standards such as HTTP and XML-based protocols including SOAP and WSDL, Web services are hardware, programming language, and operating system independent. In this paper, Naïve Bayes, C4.5 and Random forest methods are used as classifiers for the efficiency of web services classification.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129469711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-11-01DOI: 10.1109/IC3I.2014.7019823
S. Maity, Chinmoy Maji
This paper focuses on minimizing interference to the primary user (PU) through an optimal power allocation strategy for source and relay nodes in a multihop cognitive radio network (CRN), under constraints on outage probability (successful delivery) and the data rate over the source-destination link. This objective is also studied in the framework of extending the lifetime of the CRN. Extensive simulations are carried out for both energy-aware (EA) and non-energy-aware (NEA) power allocation schemes. Simulation results show that NEA power allocation offers better capacity than the EA scheme at the cost of slightly increased interference to the PU. The results also show a three-dimensional (3D) trade-off among data transmission capacity, network lifetime, and total transmission power.
{"title":"On optimal power allocation for minimizing interferene in relay assisted cognitive radio networks","authors":"S. Maity, Chinmoy Maji","doi":"10.1109/IC3I.2014.7019823","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019823","url":null,"abstract":"This paper focuses on minimization of interference to primary user (PU) through an optimal strategy of power allocation algorithm for source and relay nodes in multihop cognitive radio network (CRN) under the constraints of outage probability (successful delivery) and data rate over source-destination link. This objective is also studied in the framework of enhanced lifetime of the CRN. Extensive simulations are done for both energy aware (EA) and non-energy aware (NEA) power allocation schemes. Simulation results show that NEA based power allocation offers better capacity than EA scheme at the cost of slightly increased interference to PU. Simulation results also show a three dimensional (3D) relative trade-off performance among the data transmission capacity, network lifetime and total transmission power.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128161128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-11-01DOI: 10.1109/IC3I.2014.7019818
K. V. Rama Satish, N. Kavya
Nowadays we live with a constant stream of social media data. This exponential growth of data first presented challenges to cutting-edge businesses such as Google, MSN, Flipkart, Microsoft, Facebook, Twitter, and LinkedIn. Nevertheless, existing big data analytical models for Hadoop comply with MapReduce analytical workloads that process only a small segment of the whole data set, and thus fail to assess the capabilities of the MapReduce model under heavy workloads that process exponentially accumulating data sizes [1]. In social, business, and technical research applications alike, there is a need to process big data efficiently. In this paper, we propose an efficient technique to classify big data from e-mail using the firefly algorithm and a naïve Bayes classifier. The proposed technique comprises two phases: (i) a MapReduce framework for training and (ii) a MapReduce framework for testing. Initially, the input Twitter data is processed to select suitable features for classification. The traditional firefly algorithm is applied, and the optimized feature space is adopted for the best-fitting results. Once the best feature space has been identified by the firefly algorithm, the data is classified using the naïve Bayes classifier. These two processes are distributed following the MapReduce framework. The experimental results are validated using the evaluation metrics of computation time, accuracy, specificity, and sensitivity.
{"title":"Big data processing with harnessing hadoop - MapReduce for optimizing analytical workloads","authors":"K. V. Rama Satish, N. Kavya","doi":"10.1109/IC3I.2014.7019818","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019818","url":null,"abstract":"Now a days, we are living with social media data like heartbeat. The exponential growth with data first presented challenges to cutting-edge businesses such as Google, MSN, Flipkart, Microsoft, Facebook, Twitter, LinkedIn etc. Nevertheless, existing big data analytical models for hadoop comply with MapReduce analytical workloads that process a small segment of the whole data set, thus failing to assess the capabilities of the MapReduce model under heavy workloads that process exponentially accumulative data sizes.[1] In all social business and technical research applications, there is a need to process big data of data in efficient manner on normal uses data. In this paper, we have proposed an efficient technique to classify the big data from e-mail using firefly and naïve bayes classifier. Proposed technique is comprised into two phase, (i) Map reduce framework for training and (ii) Map reduce framework for testing. Initially, the input twitter data is given to the process to select the suitable feature for data classification. The traditional firefly algorithm is applied and the optimized feature space is adopted for the best fitting results. Once the best feature space is identified through firefly algorithm, the data classification is done using the naïve bayes classifier. Here, these two processes are effectively distributed based on the concept given in Map-Reduce framework. The results of the experiment are validated using evaluation metrics namely, computation time, accuracy, specificity and sensitivity. 
For comparative analysis, proposed big data classification is compared with the existing works of naïve bayes and neural network.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129535208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
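A minimal sketch of binary firefly search over feature subsets of the kind the abstract pairs with naïve Bayes (the probability-vector encoding, the 0.5 threshold, and the parameter values are assumptions; `fitness` stands in for the classifier-accuracy objective the paper optimizes):

```python
import numpy as np

rng = np.random.default_rng(0)

def firefly_feature_selection(fitness, n_features, n_fireflies=10, n_iter=30,
                              beta0=1.0, gamma=1.0, alpha=0.25):
    """Binary firefly search over feature subsets (sketch).

    Each firefly is a probability vector over features; its brightness is
    the fitness (e.g. classifier accuracy) of the thresholded subset.
    Dimmer fireflies move toward brighter ones, with attractiveness
    decaying exponentially in squared distance, plus a random perturbation.
    """
    pos = rng.random((n_fireflies, n_features))

    def subset(p):
        return p > 0.5                        # threshold to a boolean mask

    bright = np.array([fitness(subset(p)) for p in pos])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if bright[j] > bright[i]:     # move firefly i toward brighter j
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pos[i] += beta * (pos[j] - pos[i]) \
                        + alpha * (rng.random(n_features) - 0.5)
                    pos[i] = np.clip(pos[i], 0.0, 1.0)
                    bright[i] = fitness(subset(pos[i]))
    best = int(np.argmax(bright))
    return subset(pos[best]), float(bright[best])
```

In the paper's setting the fitness evaluations would be the expensive step, which is what makes distributing them across MapReduce workers attractive.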
Pub Date : 2014-11-01DOI: 10.1109/IC3I.2014.7019638
Virendrakumar A. Dhotre, K. Karande
In this paper, we propose a refined scheme for active learning from data streams, where data volumes grow continuously. The objective is to label a small portion of the stream data, from which a model is derived to predict future instances as accurately as possible. We propose a classifier-ensemble-based active learning framework that selectively labels instances from data streams to build an ensemble classifier. A classifier ensemble's variance directly corresponds to its error rate, so reducing the variance is equivalent to improving its prediction accuracy. We introduce a Minimum-Variance (MV) principle to guide the instance-labeling process for data streams. The MV principle and an optimal weighting module are combined to build an active learning framework for data streams. Results from our implementation demonstrate that the accuracy of the minimum-variance margin method compares well with other methods.
{"title":"Refinement of data streams using Minimum Variance principle","authors":"Virendrakumar A. Dhotre, K. Karande","doi":"10.1109/IC3I.2014.7019638","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019638","url":null,"abstract":"In this paper, we propose a refined scheme on active learning from data streams where data volumes grow continuously. The objective is to label a small portion of stream data for which a model is derived to predict future instances as accurately as possible. We propose a classifier-ensemble based active learning framework which selectively labels instances from data streams to build an ensemble classifier. Classifier ensemble's variance directly corresponds to its error rates and the efforts of reducing the variance is equivalent to improving its prediction accuracy. We introduce a Minimum-Variance principle to guide instance labeling process for data streams. The MV principle and the optimal weighting module are proposed to be combined to build an active learning framework for data streams. Results and implementation demonstrate that the percentage of accuracy of the Minimum variance margin method is good as compared to other methods.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127410029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-11-01DOI: 10.1109/IC3I.2014.7019617
S. V. Viraktamath, Divya Sakaray, G. V. Attimarad
Interleaving is a technique used in conjunction with error-correcting codes to counteract the effect of burst errors. Convolutional codes are frequently used to correct errors in noisy channels, and the Viterbi algorithm is the most extensively employed decoding algorithm for them. In this paper we present our studies of the impact of an interleaver on the performance of a Convolutional Encoder and Decoder (CODEC) for burst errors as well as for distributed errors. The performance of the Viterbi algorithm for different generator polynomials is also presented. Hard-decision decoding with rate-½ coding is considered in this paper.
{"title":"Impact of interleaver and trace back length on performance of CODEC for burst errors","authors":"S. V. Viraktamath, Divya Sakaray, G. V. Attimarad","doi":"10.1109/IC3I.2014.7019617","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019617","url":null,"abstract":"An interleaving is a concept which is used in conjunction with error correcting codes to counteract the effect of burst errors. Convolutional codes are frequently used to correct errors in noisy channels. The Viterbi algorithm is the most extensively employed decoding algorithm for convolutional codes. In this paper we present our studies of impact of interleaver on performance of Convolutional Encoder and Decoder (CODEC) for burst errors as well as for distributed errors. Also the performance of the Viterbi algorithm for different generator polynomials is presented. The hard decision with ½ rate coding technique is considered in this paper.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132404139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-11-01DOI: 10.1109/IC3I.2014.7019716
V. Menaria, D. Soni, A. Nagaraju, S. Jain
In a large-scale sensor network, a minimum spanning tree is computed to route data to a sink node hop by hop. Along this route, however, any node can be compromised, or a compromised node can be inserted, and it can inject false data or alter existing data. To provide security, we therefore use the COmpromised nOde Locator (COOL) protocol, by which compromised nodes can be removed from the network. When a compromised node is detected, the protocol prevents further damage from the misbehaving node and yields a reliable, energy-saving sensor network. In our proposed algorithm, we build the routing path using a minimum spanning tree and maintain security with the COOL protocol in wireless sensor networks. By combining the two (MST and the COOL protocol), we create a secure and energy-conserving environment in which sensor nodes communicate with the sink node, the node to which all other nodes route their data. Node consistency can also be checked using hash values.
{"title":"Secure and energy efficient routing algorithm for wireless sensor networks","authors":"V. Menaria, D. Soni, A. Nagaraju, S. Jain","doi":"10.1109/IC3I.2014.7019716","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019716","url":null,"abstract":"In a large scale sensor network, minimum spanning tree is evaluated to route data to a sink node in a hop by hop manner. But in this route any node can be compromised or a compromised node can be included and it can inject false data or it can alter the existing data. Therefore, to provide a security we use a COmpromised nOde Locator protocol (COOL) by which we can remove compromised node from the network. When a compromised node is detected then this protocol prevents further damages from misbehaved node and forms a reliable and energy saving sensor network. Thus in our proposed algorithm, we make a path using minimum spanning tree and maintains security (COOL protocol) in wireless sensor networks. Thus, by combining both (MST and COOL protocol) we creates a secure and energy conserving environment in which sensor nodes can communicate through the sink node which is the node whom all nodes send the data through routing. Also we can check the node consistency using the hash values.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"640 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132893070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-11-01DOI: 10.1109/IC3I.2014.7019796
S. Chaudhari, R. Gulati
This paper presents a system for the separation and recognition of offline printed Gujarati and English digits using template matching. Sample images on papers of different quality were collected and scanned at 200 dpi. Various preprocessing operations were performed on the digitized images, followed by segmentation. Segmented images of various sizes were normalized to a uniform size. The pixel density was then calculated as a binary pattern and a feature vector was created. These features were used in template matching for the classification of digits. The recognition rate was tested on images of three different sizes, viz. 24 × 24, 32 × 40, and 48 × 48, for offline printed Gujarati and English digits. We collected 200 image samples comprising more than 4200 symbols of Gujarati and English digits. For image sizes of 24 × 24, 32 × 40, and 48 × 48, the overall recognition rates were 97.43%, 98.30%, and 97.28% for Gujarati digits and 99.07%, 98.88%, and 99.34% for English digits, respectively.
{"title":"Performance assessment of different image sizes for printed Gujarati and English digits using template matching","authors":"S. Chaudhari, R. Gulati","doi":"10.1109/IC3I.2014.7019796","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019796","url":null,"abstract":"This paper presents a system for separation and recognition of offline printed Gujarati and English digits using template matching. Sample images of different quality of papers were collected. They were scanned at 200 dpi. Various preprocessing operations were performed on the digitized images followed by segmentation. Segmented image of various sizes was normalized to get an image of uniform size. Then the pixel density was calculated as binary pattern and a feature vector was created. These features were used in template matching for the classification of digits. The recognition rate was tested on images of 3 different sizes viz. 24 × 24, 32 × 40, and 48 × 48 for offline printed Gujarati and English digits. We collected 200 image samples which include more than 4200 symbols of both Gujarati and English digits. The results were evaluated for different image sizes of 24 × 24, 32 × 40, and 48 × 48. The overall recognition rates were 97.43, 98.30, and 97.28 for Gujarati digits and 99.07, 98.88, and 99.34 for English digits respectively.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131964599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}