Traffic sign representation using sparse-representations
Pub Date: 2013-06-10 | DOI: 10.1109/ISSP.2013.6526937
B. Chandrasekhar, V. S. Babu, S. S. Medasani
Automatic Traffic Sign Recognition has gained significant impetus among the research community in recent times. Increasing demands in the arenas of Autonomous Vehicle Navigation and Driver Assistance Systems are making this field of research more attractive. In this paper, we develop a technique that uses Sparse Representation based Classification coupled with a Boundary Discriminative Factor (BDF) for recognizing traffic signs. The performance of this system is compared with that of an established classifier, the Convolutional Neural Network (CNN), which has been employed in many real-time systems. The proposed method also avoids the enormous training time required by CNNs.
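The core of sparse-representation-based classification can be illustrated in a few lines: a test sign is coded as a sparse linear combination of training samples, and the class whose atoms give the smallest reconstruction residual wins. The sketch below is a minimal, generic SRC example using scikit-learn's Lasso as the sparse solver; the dictionary `D`, the labels, and the paper's Boundary Discriminative Factor are not reproduced here, so treat it as illustrative only.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D, labels, y, alpha=0.01):
    """Sparse-representation-based classification (minimal sketch).

    D      : (n_features, n_train) dictionary of vectorized training signs
    labels : (n_train,) class label of each dictionary column
    y      : (n_features,) vectorized test sign
    """
    # Sparse coding step: y ~ D @ x with an L1 penalty on x.
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(D, y)
    x = coder.coef_

    # Class-wise residuals: keep only the coefficients belonging to one class at a time.
    best_class, best_residual = None, np.inf
    for c in np.unique(labels):
        x_c = np.where(labels == c, x, 0.0)
        residual = np.linalg.norm(y - D @ x_c)
        if residual < best_residual:
            best_class, best_residual = c, residual
    return best_class, best_residual
```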
{"title":"Traffic sign representation using sparse-representations","authors":"B. Chandrasekhar, V. S. Babu, S. S. Medasani","doi":"10.1109/ISSP.2013.6526937","DOIUrl":"https://doi.org/10.1109/ISSP.2013.6526937","url":null,"abstract":"Automatic Traffic Sign Recognition has gained significant impetus among the research community in recent times. Increasing demands in the arenas of Autonomous Vehicle Navigation and Driver Assistance Systems is making this field of research more attractive. In this paper, we developed a technique which uses Sparse Representation based Classification coupled with Boundary Discriminative Factor (BDF) for recognizing traffic signs. The performance of this system is compared with one of the existing classifiers, Convolutional Neural Networks (CNNs) which has been employed in many real-time systems. This method also helps in reducing the enormous training time required for CNNs.","PeriodicalId":354719,"journal":{"name":"2013 International Conference on Intelligent Systems and Signal Processing (ISSP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127459336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel methodology for exploring enhancement and depression phenomena in multisensory localization: A biologically inspired solution from Superior Colliculus
Pub Date: 2013-03-01 | DOI: 10.1109/ISSP.2013.6526865
K. Ravulakollu, K. Burn
Localization is essential for interaction when multiple sensory modalities must be integrated. Motivated by the Superior Colliculus (SC), the processing of audio and visual signals during stimulus integration is investigated. A novel methodology is proposed, using a neural network architecture that can localize effectively, especially when integrating stimuli of varied intensities in low-level audio and visual signals. During integration, cases arise where the SC is unable to localize the source because stimuli that are too weak or too strong arrive simultaneously, giving rise to enhancement and depression phenomena. This paper provides a dual-layered neural network model that integrates visual and audio sensory stimuli and also provides a way to track the stimulus source. This behavior is applicable to guided robots that help humans track or cooperate on tasks such as personal assistance, route guidance and incident tracking.
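The abstract does not spell out the network, so the following is only a structural sketch of a dual-layered fusion model: the first layer encodes each modality's intensity profile over azimuth bins, the second fuses the two encodings into a localization map. Layer sizes, activations and the azimuth discretization are all assumptions made for illustration, not the paper's architecture.

```python
import numpy as np

def fuse_and_localize(audio, visual, W_a, W_v, W_out):
    """Dual-layer fusion sketch: per-modality encoding, then joint localization.

    audio, visual : (n_bins,) stimulus intensity over azimuth bins
    W_a, W_v      : (n_hidden, n_bins) first-layer weights for each modality
    W_out         : (n_bins, 2 * n_hidden) second-layer (fusion) weights
    """
    h_a = np.tanh(W_a @ audio)            # layer 1: audio encoding
    h_v = np.tanh(W_v @ visual)           # layer 1: visual encoding
    fused = np.concatenate([h_a, h_v])    # layer 2 input: both modalities
    scores = W_out @ fused
    p = np.exp(scores - scores.max())
    p /= p.sum()                          # soft localization map over azimuth bins
    return int(np.argmax(p)), p           # predicted source bin and confidence map
```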
{"title":"A novel methodology for exploring enhancement and depression phenomena in multisensory localization: A biologically inspired solution from Superior Colliculus","authors":"K. Ravulakollu, K. Burn","doi":"10.1109/ISSP.2013.6526865","DOIUrl":"https://doi.org/10.1109/ISSP.2013.6526865","url":null,"abstract":"Localization is very essential for interaction when it comes to multisensory integration. Based on Superior Colliculus (SC) motivation, the audio and visual signal processing during the stimuli integration is investigated. A novel methodology is proposed using neural network architecture that can localize effectively, especially in integrating stimuli of varied intensities in lower order audio and visual signals. During the integration, cases arise where the SC is unable to localize the source due to simultaneous arrival of too weak or too strong stimuli, causing enhancement and depression phenomena. This phenomena arise when the SC is not able to localize the source based on the given stimuli intensities. This paper provides a dual layered neural network model that integrates visual and audio sensory stimuli and also drives a way to track the stimuli source. This behavior is applicable for guided robots that help humans to track or cooperate for tasks like personal assistance, route guidance and incident tracking applications.","PeriodicalId":354719,"journal":{"name":"2013 International Conference on Intelligent Systems and Signal Processing (ISSP)","volume":"58-60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123127700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing the relevance of information retrieval by querying the database in natural form
Pub Date: 2013-03-01 | DOI: 10.1109/ISSP.2013.6526944
A. Shingala, P. Virparia
Effective information retrieval from a structured database through natural language offers high utility and ease of use. The challenge in information retrieval is to derive the user's intent from a limited number of query words (sometimes just one word) in order to extract the relevant information from a structured database. This calls for correct interpretation, disambiguation and context resolution in natural language query processing. These difficulties can be reduced by limiting the application domain. In this paper we present our work in designing and developing a Student Interface System that is queried in natural language (English). Our model consists of (i) semantically parsing the user's input sentence, (ii) transforming it into an intermediate query form, and finally (iii) converting it into Structured Query Language. The system is developed in Java using JDBC for a Student Information System. The same approach and methodology can be used to develop queries in vernacular languages.
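As a toy illustration of the three-stage pipeline (parse, intermediate form, SQL), the sketch below handles one narrow pattern of English questions against a hypothetical `students` table. The table name, column map and regular expression are invented for the example and are not taken from the paper.

```python
import re

# Hypothetical schema keywords -> column names (illustrative only).
COLUMN_MAP = {"marks": "marks", "grade": "grade", "address": "address"}

def parse_to_intermediate(question):
    """Stages (i)+(ii): extract the requested attribute and student name into an intermediate form."""
    attr = next((col for word, col in COLUMN_MAP.items() if word in question.lower()), "*")
    m = re.search(r"of\s+([A-Za-z]+)", question)
    name = m.group(1) if m else None
    return {"select": attr, "table": "students", "where": {"name": name}}

def intermediate_to_sql(iq):
    """Stage (iii): render the intermediate form as parameterized SQL (JDBC-style '?')."""
    sql = f"SELECT {iq['select']} FROM {iq['table']}"
    params = []
    if iq["where"]["name"]:
        sql += " WHERE name = ?"
        params.append(iq["where"]["name"])
    return sql, params

iq = parse_to_intermediate("What are the marks of Riya?")
print(intermediate_to_sql(iq))   # ('SELECT marks FROM students WHERE name = ?', ['Riya'])
```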
{"title":"Enhancing the relevance of information retrieval by querying the database in natural form","authors":"A. Shingala, P. Virparia","doi":"10.1109/ISSP.2013.6526944","DOIUrl":"https://doi.org/10.1109/ISSP.2013.6526944","url":null,"abstract":"Effectiveness of Information Retrieval from Structured Database through Natural Language provides high utility value and ease of use. The challenge in Information Retrieval is to derive the users intended from limited number of query words (sometimes just one word) to extract the relevant information from structured database. It calls for correct interpretation, disambiguation and context resolution in natural language query processing. These difficulties can be reduced by limiting the domain of applications. In this paper we present our work in designing and developing Student Interface System by querying user in Natural Language (English). Our model consists of (i) semantically parsed the user inputted sentence (ii) transforming it into intermediate query form and finally (iii) convert it into structured query language. The system is developed in JAVA using JDBC for Student Information System. The same approach and methodology can be used in developing queries using vernacular languages.","PeriodicalId":354719,"journal":{"name":"2013 International Conference on Intelligent Systems and Signal Processing (ISSP)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114643850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FIR filter design approach for reduced hardware with order optimization and coefficient quantization
Pub Date: 2013-03-01 | DOI: 10.1109/ISSP.2013.6526921
D. Agarwal, K. S. Reddy, S. K. Sahoo
Finite impulse response (FIR) filters are used extensively in mobile phones and televisions, and offer desirable properties such as guaranteed stability and exact linear phase. This paper presents a design approach that reduces the FIR filter order, leading to an optimized hardware implementation. The proposed approach begins by designing the FIR filter with the given specifications using the equiripple method. The order thus obtained is then reduced iteratively while keeping the frequency response within specification. The coefficients of the reduced filter are then quantized successively with fewer and fewer bits by an iterative algorithm, down to the level where the frequency response still remains within the original requirements. The proposed filter, the over-specified optimized filter [6] and a conventional filter are implemented in Verilog. The synthesis results show that the proposed FIR filter uses 28% and 57% less hardware than the optimized and conventional implementations, respectively.
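A minimal software sketch of the two iterative loops described above, using SciPy's equiripple (Remez) design: first shrink the filter length while a ripple/attenuation check still passes, then shrink the coefficient word length while the same check passes. The band edges, ripple limits and the simple spec test are placeholders, not the paper's exact criteria.

```python
import numpy as np
from scipy.signal import remez, freqz

def meets_spec(taps, f_pass=0.2, f_stop=0.3, rip_db=1.0, att_db=40.0, fs=2.0):
    """Crude spec check: passband ripple and stopband attenuation (placeholder values)."""
    w, h = freqz(taps, worN=4096, fs=fs)
    mag = 20 * np.log10(np.maximum(np.abs(h), 1e-12))
    pass_ok = np.all(np.abs(mag[w <= f_pass]) <= rip_db)
    stop_ok = np.all(mag[w >= f_stop] <= -att_db)
    return pass_ok and stop_ok

def design_reduced_quantized(numtaps0=81, f_pass=0.2, f_stop=0.3):
    # Step 1: iteratively lower the filter length while the response stays within spec.
    taps = remez(numtaps0, [0, f_pass, f_stop, 1.0], [1, 0], fs=2.0)
    numtaps = numtaps0
    while numtaps > 5:
        cand = remez(numtaps - 2, [0, f_pass, f_stop, 1.0], [1, 0], fs=2.0)
        if not meets_spec(cand, f_pass, f_stop):
            break
        taps, numtaps = cand, numtaps - 2

    # Step 2: iteratively shrink the coefficient word length (fixed-point quantization).
    bits, q_taps = 16, taps
    while bits > 4:
        scale = 2 ** (bits - 1)
        cand = np.round(taps * scale) / scale
        if not meets_spec(cand, f_pass, f_stop):
            break
        q_taps, bits = cand, bits - 1
    return q_taps, numtaps, bits
```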
{"title":"FIR filter design approach for reduced hardware with order optimization and coefficient quantization","authors":"D. Agarwal, K. S. Reddy, S. K. Sahoo","doi":"10.1109/ISSP.2013.6526921","DOIUrl":"https://doi.org/10.1109/ISSP.2013.6526921","url":null,"abstract":"Finite impulse response (FIR) filters are extensively used in mobiles, TVs and offer several good properties like guaranteed stability and exact linear phase. This paper presents a design approach that reduces the FIR filter order leading to optimized hardware implementation. The proposed approach begins by designing the FIR filter with the given specifications using the equiripple method. The order thus obtained is further reduced iteratively, but keeping the frequency response within specification. The coefficients of reduced filter are then quantized successively with lesser number of bits by an iterative algorithm to a level where its frequency response still remains within the original requirements. The proposed filter, the over specified optimized filter [6] and normal filters are implemented in verilog. The synthesis result shows that the proposed FIR filter uses 28% and 57% less hardware in comparison to the optimized implementation and normal implementation.","PeriodicalId":354719,"journal":{"name":"2013 International Conference on Intelligent Systems and Signal Processing (ISSP)","volume":"49 20","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120819915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigations on instantaneous frequency variations of RR time series in intrinsic mode functions of congestive heart failure subjects
Pub Date: 2013-03-01 | DOI: 10.1109/ISSP.2013.6526894
P. Gupta, K. K. Sharma, S. Joshi, A. Dube
Congestive heart failure is a major cause of concern among cardiovascular disorders and is attributed to an imbalance between the sympathetic and parasympathetic nervous systems. In this paper we propose an algorithm for detecting the congestive heart failure condition using the variability of the instantaneous frequency of intrinsic mode functions, obtained by applying the Hilbert-Huang Transform to the RR time series of different subjects. It is observed that the instantaneous frequency of the intrinsic mode functions is a key feature that is altered quite significantly by imbalances in the autonomic nervous system. It is also observed, through simulation results in MATLAB, that higher-order intrinsic mode functions of the RR time series exhibit lower variations of instantaneous frequency than lower-order intrinsic mode functions. On the basis of this variability of the instantaneous frequency of the higher-order intrinsic mode functions, we are able to detect the congestive heart failure condition in different subjects.
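The abstract reports MATLAB simulations; the sketch below shows the same chain of operations in Python, assuming the third-party PyEMD package for empirical mode decomposition and the Hilbert transform from SciPy. The variability measure (standard deviation of instantaneous frequency per IMF) and any detection threshold built on it are placeholders.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD  # assumed third-party package providing empirical mode decomposition

def imf_if_variability(rr_series, fs):
    """Per-IMF variability of instantaneous frequency for an RR time series."""
    imfs = EMD().emd(np.asarray(rr_series, dtype=float))
    variability = []
    for imf in imfs:
        analytic = hilbert(imf)                           # analytic signal of the IMF
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) * fs / (2.0 * np.pi)   # instantaneous frequency in Hz
        variability.append(np.std(inst_freq))
    return variability  # one value per IMF; lower values are expected for higher-order IMFs
```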
{"title":"Investigations on instantaneous frequency variations of RR time series in intrinsic mode functions of congestive heart failure subjects","authors":"P. Gupta, K. K. Sharma, S. Joshi, A. Dube","doi":"10.1109/ISSP.2013.6526894","DOIUrl":"https://doi.org/10.1109/ISSP.2013.6526894","url":null,"abstract":"The congestive heart failure is a major cause of concern among all types of cardiovascular problems and is attributed to imbalances in sympathetic and parasympathetic nervous systems. In this paper we propose an algorithm for detection of congestive heart failure condition using the variability of instantaneous frequency of intrinsic mode functions obtained using Hilbert-Hunag Transform of RR time series in different subjects. It is observed that the instantaneous frequency of intrinsic mode functions is a key feature that is altered quite significantly with imbalances in autonomic nervous system. It is also observed through simulation results using MATLAB that higher order intrinsic mode functions of RR time series exhibit lower variations of the instantaneous frequency compared to lower order intrinsic mode functions. On the basis of this variability of instantaneous frequency of higher order intrinsic mode functions we are able to detect congestive heart failure condition in different subjects.","PeriodicalId":354719,"journal":{"name":"2013 International Conference on Intelligent Systems and Signal Processing (ISSP)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121306583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal placement of PMUs using differential evolution
Pub Date: 2013-03-01 | DOI: 10.1109/ISSP.2013.6526867
B. Vedik, A. Chandel
In this paper a differential evolution methodology is proposed to place the minimum number of phasor measurement units (PMUs) at strategic locations in a power system such that the system is completely observable. An integer programming formulation is used to ensure observability of the system configuration, while the differential evolution method determines the optimal number and positions of the PMUs. The proposed method has been applied to a seven-bus system, the IEEE 14-bus system and the IEEE 57-bus system for optimal placement of PMUs, with and without consideration of zero-injection bus pseudo-measurements. The results have also been validated against other methods, namely integer programming and particle swarm optimization.
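The essential structure, independent of the paper's exact formulation, is a binary placement vector whose fitness is the PMU count plus a penalty for any bus left unobserved by the usual connectivity rule (a bus is observed if it or a neighbour hosts a PMU). The sketch below uses SciPy's differential_evolution on a continuous relaxation that is thresholded to binary; the 7-bus topology and the penalty weight are illustrative assumptions, not the test system from the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical 7-bus topology (illustrative only).
lines = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (1, 5)]
n_bus = 7
A = np.eye(n_bus, dtype=int)            # bus-observability (connectivity) matrix
for i, j in lines:
    A[i, j] = A[j, i] = 1

def fitness(x):
    """Number of PMUs plus a heavy penalty for every unobserved bus."""
    placement = (x > 0.5).astype(int)    # continuous DE vector -> binary placement
    observed = A @ placement             # bus observed if a PMU sits on it or on a neighbour
    unobserved = np.sum(observed == 0)
    return placement.sum() + 100 * unobserved

result = differential_evolution(fitness, bounds=[(0, 1)] * n_bus, seed=0, maxiter=200)
best = (result.x > 0.5).astype(int)
print("PMU buses:", np.nonzero(best)[0], "count:", int(best.sum()))
```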
{"title":"Optimal placement of PMUs using differential evolution","authors":"B. Vedik, A. Chandel","doi":"10.1109/ISSP.2013.6526867","DOIUrl":"https://doi.org/10.1109/ISSP.2013.6526867","url":null,"abstract":"In the present paper differential evolution methodology has been proposed to place minimum number of phasor measurement units (PMUs) at strategic locations of the power system such that the system is completely observable. Concept of integer programming formulation has been utilized for ensuring the observability of the system configuration and to determine the optimal number and position of the PMUs differential evolution method is used in the present work. The proposed method has been applied on the seven bus system, IEEE 14-bus system and IEEE 57-bus system for optimal placement of PMUs with and without the consideration of zero injection bus pseudo-measurements. Results thus obtained have also been validated with other methods viz. integer programming method and particle swarm optimization.","PeriodicalId":354719,"journal":{"name":"2013 International Conference on Intelligent Systems and Signal Processing (ISSP)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123536075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparative analysis using fast discrete Curvelet transform via wrapping and discrete Contourlet transform for feature extraction and recognition
Pub Date: 2013-03-01 | DOI: 10.1109/ISSP.2013.6526893
N. Chitaliya, S. Patel, A. Trivedi, C. Rao
In this paper, a comparative analysis of feature extraction and recognition based on the fast discrete Curvelet transform via wrapping and the discrete Contourlet transform, using neural network and Euclidean distance classifiers, is presented. Preprocessing is applied to each image in the dataset. Each image in the training dataset is decomposed using the fast discrete Curvelet transform and the discrete Contourlet transform. The low- and high-frequency Curvelet and Contourlet coefficients at different orientations and scales are obtained, and these frequency coefficients are used as a feature vector for further processing. Principal Component Analysis (PCA) is used to reduce the dimensionality of the feature vector, and the reduced feature vector is used to train the classifier. The test databases are projected onto the Curvelet-PCA and Contourlet-PCA subspaces to retrieve reduced coefficients, which are matched against the feature vectors of the training dataset using a neural network classifier. The results are compared with those of a Euclidean distance classifier for both methods.
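The back half of this pipeline (PCA reduction followed by the two classifiers) is easy to sketch; the Curvelet or Contourlet coefficient vectors are assumed to be precomputed, since those transforms are not in standard Python libraries. Classifier settings and the number of PCA components are illustrative choices.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def train_and_compare(train_feats, train_labels, test_feats, test_labels, n_comp=50):
    """PCA-reduced transform features evaluated with a neural network and a Euclidean 1-NN rule.

    train_feats / test_feats: flattened Curvelet (or Contourlet) coefficient vectors,
    computed elsewhere; the transforms themselves are not shown here.
    """
    pca = PCA(n_components=n_comp).fit(train_feats)
    tr, te = pca.transform(train_feats), pca.transform(test_feats)

    # Neural network classifier on the reduced features.
    nn = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
    nn_acc = nn.fit(tr, train_labels).score(te, test_labels)

    # Euclidean distance classifier: the nearest training vector decides the class.
    d = np.linalg.norm(te[:, None, :] - tr[None, :, :], axis=2)
    eu_acc = np.mean(np.asarray(train_labels)[d.argmin(axis=1)] == np.asarray(test_labels))
    return nn_acc, eu_acc
```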
{"title":"Comparative analysis using fast discrete Curvelet transform via wrapping and discrete Contourlet transform for feature extraction and recognition","authors":"N. Chitaliya, S. Patel, A. Trivedi, C. Rao","doi":"10.1109/ISSP.2013.6526893","DOIUrl":"https://doi.org/10.1109/ISSP.2013.6526893","url":null,"abstract":"In this paper, comparative analysis for feature extraction and recognition based on fast discrete Curvelet transform via wrapping and discrete Contourlet transform using Neural Network and Euclidean distance classifier is proposed. The pre processing is applied on the each image of dataset. Each image from the Training Dataset is decomposed using the fast discrete Curvelet transform and discrete Contourlet transform. The Curvelet coefficients as well as Contourlet coefficients of low frequency & high frequency in different orientation and scales are obtained. The frequency coefficients are used as a feature vector for further process. The PCA (Principal component analysis) is used to reduce the dimensionality of the feature vector. Finally the reduced feature vector is used to train the Classifier. The test databases are projected on Curvelet-PCA and Contourlet-PCA subspace to retrieve reduced coefficients. These coefficients are used to match the feature vector coefficients of training dataset using Neural Network Classifier. The results are compared with the results of Euclidean distance classifier for both the methods.","PeriodicalId":354719,"journal":{"name":"2013 International Conference on Intelligent Systems and Signal Processing (ISSP)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126985505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Indirect method to measure software quality using CK-OO suite
Pub Date: 2013-03-01 | DOI: 10.1109/ISSP.2013.6526872
S. Srivastava, R. Kumar
In this paper, we consider empirical evidence in support of a set of object-oriented software metrics. In particular, we look at the object-oriented design metrics of Chidamber and Kemerer (CK) and their applicability in different application domains. Many early quality models followed an approach in which a set of factors that influence quality, and the relationships between those factors, were defined with little scope for measurement, yet measurement plays an important role in every phase of the software development process. This work therefore emphasizes quantitative measurement of quality attributes such as reusability, maintainability, testability, reliability and efficiency. With the widespread use of object-oriented technologies, CK metrics have proved to be very useful, so we use them to measure these quality attributes. The quality attributes are affected by the values of the CK metrics, and we derive linearly related equations from the CK metrics to measure them. Different concepts of software quality characteristics are also reviewed and discussed. We briefly describe the metrics and present our empirical findings, arising from our analysis of systems taken from a number of different application domains. Our investigations lead us to conclude that a subset of the metrics can be of great value to software developers, maintainers and project managers. We have also conducted an empirical study in the object-oriented language C++.
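To make the "linearly related equations" concrete, the sketch below evaluates one quality attribute as a linear combination of the six CK metrics. The coefficient values and the intercept are invented placeholders for illustration; the paper's fitted equations are not reproduced here.

```python
# Hypothetical linear model mapping CK metrics to a quality-attribute score.
# The weights below are illustrative placeholders, not the paper's derived coefficients.
CK_METRICS = ("WMC", "DIT", "NOC", "CBO", "RFC", "LCOM")

MAINTAINABILITY_WEIGHTS = {
    "WMC": -0.30, "DIT": -0.10, "NOC": 0.05,
    "CBO": -0.25, "RFC": -0.20, "LCOM": -0.10,
}

def maintainability_score(ck_values, weights=MAINTAINABILITY_WEIGHTS, intercept=100.0):
    """Evaluate a linear quality-attribute equation of the form q = b0 + sum(b_i * m_i)."""
    return intercept + sum(weights[m] * ck_values[m] for m in CK_METRICS)

example_class = {"WMC": 12, "DIT": 3, "NOC": 1, "CBO": 8, "RFC": 25, "LCOM": 4}
print(maintainability_score(example_class))   # higher score = easier to maintain (illustrative)
```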
{"title":"Indirect method to measure software quality using CK-OO suite","authors":"S. Srivastava, R. Kumar","doi":"10.1109/ISSP.2013.6526872","DOIUrl":"https://doi.org/10.1109/ISSP.2013.6526872","url":null,"abstract":"In this paper, we consider experiential evidences in support of a set of object-oriented software metrics. In particular, we look at the object oriented design metrics of Chidamber and Kemerer, and their applicability in different application domains. Many of the early quality models have followed an approach, in which a set of factors that influence quality and relationships between different quality factors, are defined, with little scope of measurement. But the measurement plays an important role in every phase of software development process. The work, therefore, emphasizes on quantitative measurement of different quality attributes such as reusability, maintainability, testability, reliability and efficiency. With the widespread use of Object Oriented Technologies, CK metrics have proved to be very useful. So we have used CK metrics for measurement of these qualities attributes. The quality attributes are affected by values of CK metrics. We have derived linearly related equations from CK metrics to measure these quality attributes. Different concepts about software quality characteristics are reviewed and discussed in the Dissertation. We briefly describe the metrics, and present our empirical findings, arising from our analysis of systems taken from a number of different application domains. Our investigations have led us to conclude that a subset of the metrics can be of great value to software developers, maintainers and project managers. We have also taken an empirical study in Object Oriented language C++.","PeriodicalId":354719,"journal":{"name":"2013 International Conference on Intelligent Systems and Signal Processing (ISSP)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128786035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High performance hardware implementation of AES using minimal resources
Pub Date: 2013-03-01 | DOI: 10.1109/ISSP.2013.6526931
P. S. Abhijith, MC Srivastava, A. P. Mishra, M. Goswami, Babu R. Singh
The increasing need for data protection in computer networks has led to the development of several cryptographic algorithms, since sending data securely over a transmission link is critically important in many applications. Hardware implementations of cryptographic algorithms are more physically secure than software implementations, since outside attackers cannot modify them. To achieve higher performance in today's heavily loaded communication networks, hardware implementation is also a wise choice in terms of speed and reliability. This paper presents a hardware implementation of the Advanced Encryption Standard (AES) algorithm on a Xilinx Virtex-5 Field Programmable Gate Array (FPGA). To achieve higher speed and smaller area, the SubBytes, inverse SubBytes, MixColumns and inverse MixColumns operations are designed as look-up tables (LUTs) and read-only memories (ROMs). This approach gives a throughput of 3.74 Gbps while utilizing only 1% of the total slices of the xc5vlx110t-3-ff1136 target device.
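The LUT/ROM approach stores precomputed results instead of evaluating the finite-field arithmetic on the fly. As a behavioral sketch (in Python, not the paper's Verilog), the code below generates the 256-entry SubBytes table from the AES definition, multiplicative inverse in GF(2^8) followed by the affine transform, and then applies SubBytes as a pure lookup; the hardware design itself is not reproduced.

```python
def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with the AES reduction polynomial."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B          # reduce by x^8 + x^4 + x^3 + x + 1
        b >>= 1
    return p

def gf_inv(a):
    """Multiplicative inverse in GF(2^8); 0 maps to 0 by convention."""
    if a == 0:
        return 0
    return next(b for b in range(256) if gf_mul(a, b) == 1)

def rotl8(x, n):
    return ((x << n) | (x >> (8 - n))) & 0xFF

# The 256-entry S-box a hardware LUT/ROM would store.
SBOX = []
for byte in range(256):
    inv = gf_inv(byte)
    SBOX.append(inv ^ rotl8(inv, 1) ^ rotl8(inv, 2) ^ rotl8(inv, 3) ^ rotl8(inv, 4) ^ 0x63)

def sub_bytes(state):
    """SubBytes as a pure table lookup, mirroring the ROM-based hardware approach."""
    return [SBOX[b] for b in state]

assert SBOX[0x00] == 0x63 and SBOX[0x53] == 0xED   # known S-box entries from FIPS-197
```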
{"title":"High performance hardware implementation of AES using minimal resources","authors":"P. S. Abhijith, MC Srivastava, A. P. Mishra, M. Goswami, Babu R. Singh","doi":"10.1109/ISSP.2013.6526931","DOIUrl":"https://doi.org/10.1109/ISSP.2013.6526931","url":null,"abstract":"Increasing need of data protection in computer networks led to the development of several cryptographic algorithms hence sending data securely over a transmission link is critically important in many applications. Hardware implementation of cryptographic algorithms are physically secure than software implementations since outside attackers cannot modify them. In order to achieve higher performance in today's heavily loaded communication networks, hardware implementation is a wise choice in terms of better speed and reliability. This paper presents the hardware implementation of Advanced Encryption Standard (AES) algorithm using Xilinx-virtex 5 Field Programmable Gate Array (FPGA). In order to achieve higher speed and lesser area, Sub Byte operation, Inverse Sub Byte operation, Mix Column operation and Inverse Mix Column operations are designed as Look Up Tables (LUTs) and Read Only Memories (ROMs). This approach gives a throughput of 3.74Gbps utilizing only 1% of total slices in xc5vlx110t-3-ff1136 target device.","PeriodicalId":354719,"journal":{"name":"2013 International Conference on Intelligent Systems and Signal Processing (ISSP)","volume":"21 126 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123642289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Signature matching with automated cheque system
Pub Date: 2013-03-01 | DOI: 10.1109/ISSP.2013.6526895
K. Mane, V. G. Pujari
A signature is a behavioral trait of an individual and forms a special class of handwriting in which legible letters or words may not be exhibited. The purpose of this paper is to design a new system that makes signature verification size- and angle-invariant for a cheque-processing system. The invariance is achieved by scaling and rotational manipulations of the target image: the number of crests, troughs and curves remains the same irrespective of the size and orientation of the image. The ratio between consecutive crests and troughs thereby remains the same and hence can be used to determine the genuineness of a signature. This system can be used in finance and business for automatic signature verification. It also includes verification of the account number and amount on the cheque using Optical Character Recognition (OCR), and determines whether the cheque should be cleared or bounced.
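A minimal sketch of the crest/trough ratio idea, assuming the signature has already been reduced to a 1-D profile (for example a vertical-projection or upper-contour curve): extrema are located with SciPy's peak finder and ratios of consecutive extremum heights are compared, so uniform scaling of the signature leaves the feature unchanged. The profile extraction, tolerance and threshold are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.signal import find_peaks

def crest_trough_ratios(profile):
    """Ratios between consecutive crest/trough heights of a 1-D signature profile."""
    profile = np.asarray(profile, dtype=float)
    crests, _ = find_peaks(profile)
    troughs, _ = find_peaks(-profile)
    extrema = np.sort(np.concatenate([crests, troughs]))
    heights = profile[extrema]
    # Scale-invariant feature: each extremum height relative to the previous one.
    return heights[1:] / np.maximum(np.abs(heights[:-1]), 1e-9)

def is_genuine(reference_profile, test_profile, tol=0.15):
    """Compare the two ratio sequences; `tol` is an illustrative acceptance threshold."""
    r1, r2 = crest_trough_ratios(reference_profile), crest_trough_ratios(test_profile)
    n = min(len(r1), len(r2))
    if n == 0:
        return False
    return bool(np.mean(np.abs(r1[:n] - r2[:n])) < tol)
```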
{"title":"Signature matching with automated cheque system","authors":"K. Mane, V. G. Pujari","doi":"10.1109/ISSP.2013.6526895","DOIUrl":"https://doi.org/10.1109/ISSP.2013.6526895","url":null,"abstract":"Signature is a behavioral trait of an individual and forms a special class of handwriting in which legible letters or words may not be exhibited. The purpose of this paper is to design a new system to make the verification of signatures size and angle invariant for cheque system. The invariance can be achieved by scaling and rotational manipulations on the target image. That is the number of crests, toughs and curves remains the same irrespective of the size and orientation of the image. The ratio between consecutive crests and troughs there by remain the same and hence can be used to determine the genuineness of a signature. This system will be used in financial and business to automatic signature verification. It also includes the verification of the account number and amount on the cheque using OCR(Optical Character Recognition) and finds out if the cheque is cleared or bounced.","PeriodicalId":354719,"journal":{"name":"2013 International Conference on Intelligent Systems and Signal Processing (ISSP)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126492775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}