Iris feature extraction using gabor filter
Pub Date: 2009-12-11 · DOI: 10.1109/ICET.2009.5353166
Saad Minhas, M. Javed
Biometric technology uses human characteristics for reliable identification. Iris recognition is a biometric technology that uses the iris for human identification. The human iris contains highly discriminating features and therefore supports accurate authentication of persons. Different methods have been used in the past to extract these discriminating iris features. In this work, the Gabor filter is applied to iris images in two different ways. First, it is applied to the entire image at once and unique features are extracted from the image. Second, it is used to capture local information from the image, which is then combined into global features. A comparison of results is presented using filter banks of different sizes, containing 15, 20, 25, 30 and 35 filters. A number of experiments are performed on the CASIA version 1 iris database. By comparing the output feature vectors using the Hamming distance, it is found that the best accuracy of 99.16% is achieved after capturing the local information from the iris images.
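The abstract includes no code; below is a minimal sketch of the two ingredients it describes, a bank of Gabor filters applied to a pre-normalized iris image and a Hamming-distance comparison of the resulting binary codes. The kernel parameters (sigma, wavelength, 15 orientations) and the sign-based binarization are assumptions, not the paper's settings.

```python
import numpy as np
import cv2

def gabor_bank(n_filters=15, ksize=21):
    """Bank of Gabor kernels at evenly spaced orientations (parameters assumed)."""
    thetas = np.linspace(0, np.pi, n_filters, endpoint=False)
    return [cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=t,
                               lambd=10.0, gamma=0.5) for t in thetas]

def iris_code(norm_iris, bank):
    """Filter a normalized iris strip with every kernel and binarize each
    response about its mean, giving one bit per pixel per filter."""
    bits = []
    for kernel in bank:
        resp = cv2.filter2D(norm_iris.astype(np.float32), cv2.CV_32F, kernel)
        bits.append((resp > resp.mean()).ravel())
    return np.concatenate(bits)

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits; lower means the codes are more alike."""
    return float(np.mean(code_a != code_b))
```

A verification decision would then threshold the distance between an enrolled and a probe code, e.g. `hamming_distance(iris_code(a, bank), iris_code(b, bank)) < 0.35`; the threshold value is again an assumption.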
{"title":"Iris feature extraction using gabor filter","authors":"Saad Minhas, M. Javed","doi":"10.1109/ICET.2009.5353166","DOIUrl":"https://doi.org/10.1109/ICET.2009.5353166","url":null,"abstract":"Biometric technology uses human characteristics for their reliable identification. Iris recognition is a biometric technology that utilizes iris for human identification. The human iris contains very discriminating features and hence provides the accurate authentication of persons. To extract the discriminating iris features, different methods have been used in the past. In this work, gabor filter is applied on iris images in two different ways. Firstly, it is applied on the entire image at once and unique features are extracted from the image. Secondly, it is used to capture local information from the image, which is then combined to create global features. A comparison of results is presented using different number of filter banks containing 15, 20, 25, 30 and 35 filters. A number of experiments are performed using CASIA version 1 iris database. By comparing the output feature vectors using hamming distance, it is found that the best accuracy of 99.16% is achieved after capturing the local information from the iris images.","PeriodicalId":307661,"journal":{"name":"2009 International Conference on Emerging Technologies","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131792536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An application of VAR and Almon Polynomial Distributed Lag models to insurance stocks: Evidence from KSE
Pub Date: 2009-12-11 · DOI: 10.1109/ICET.2009.5353173
M. A. Siddiqui
In time series data, a regressand may respond to regressors with a time lag. This study applies the dynamic Almon Polynomial Distributed-Lag (PDL) methodology to the stocks of 13 selected insurance companies, using daily data for the period from 1996 to 2008. Recognizing the importance of causality in economics and finance, the study focuses on the causal relationship between investment, growth in returns and market uncertainty. The study also employs VAR, a non-structural approach among the a-theoretic models. The coefficients of the distributed lag are constrained to lie on a third-degree polynomial, with satisfactory test results for the near and far end points of the lag distribution. Generating the series of the risk variable through a GARCH(p, q) model is a further contribution of this study. The Almon PDL model may also be considered an alternative to lagged regression models, since the PDL avoids the estimation problems associated with autoregressive models. This study is, in a way, an attempt to invite researchers and practitioners to make wider use of these important dynamic models in economics, business and finance. The results reveal mixed causality among the three variables, and the Almon PDL results support the theory of adaptive expectations.
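To make the polynomial constraint concrete, here is a small illustration (not the paper's estimation code) of the Almon transformation: restricting the lag weights to a cubic in the lag index turns a regression on p + 1 lags of x into an OLS regression on only four constructed series. The lag length p = 8 is an assumption; the cubic degree follows the abstract.

```python
import numpy as np

def almon_design(x, p=8, degree=3):
    """Constructed regressors z_j[t] = sum_{i=0..p} i**j * x[t-i]."""
    T = len(x)
    Z = np.zeros((T - p, degree + 1))
    for j in range(degree + 1):
        for i in range(p + 1):
            Z[:, j] += (i ** j) * x[p - i:T - i]
    return Z

def almon_fit(y, x, p=8, degree=3):
    """OLS on the constructed regressors, then map back to the implied lag weights."""
    Z = np.column_stack([np.ones(len(x) - p), almon_design(x, p, degree)])
    coef, *_ = np.linalg.lstsq(Z, y[p:], rcond=None)
    a = coef[1:]                                   # polynomial coefficients a_0..a_3
    betas = [sum(a[j] * i ** j for j in range(degree + 1)) for i in range(p + 1)]
    return coef[0], np.array(betas)                # intercept and lag distribution
```

The restriction reduces multicollinearity among the lagged regressors, which is the usual motivation for preferring the Almon PDL over an unrestricted distributed lag.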
{"title":"An application of VAR and Almon Polynomial Distributed Lag models to insurance stocks: Evidence from KSE","authors":"M. A. Siddiqui","doi":"10.1109/ICET.2009.5353173","DOIUrl":"https://doi.org/10.1109/ICET.2009.5353173","url":null,"abstract":"In the time series data, a regressand may respond to regressors with a time lag. This study employs dynamic methodology of Almon Polynomial Distributed-Lag (PDL) model as an application to the stocks of 13 selected insurance companies, using daily data for the period from 1996 to 2008. Realizing the importance of causality in economics and finance, this study focuses on the causal relationship between investment, growth in returns and market uncertainty. The study also employs VAR which is of non-structural approaches amongst the a-theoretic models. In this study I have constrained the coefficients on the distributed lag to lie on a third degree polynomial with the satisfactory test results of near and the far end points of the lag distribution. Generating the series of risk variable through GARCH (p, q) is also an academic contribution of this study. The Almon PDL model may also be considered as an alternative to the lagged regression models. For the PDL avoids the estimation problems associated with the autoregressive models. This study is in a way an attempt to invite researchers and practitioners for the maximum application of these very important dynamic models in economics, business and finance. The results of this study reveal mixed causality among the three variables. The Almon Polynomial Distributed Lag results support the theory of adaptive expectations.","PeriodicalId":307661,"journal":{"name":"2009 International Conference on Emerging Technologies","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131349054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Application of Niblack's method on images
Pub Date: 2009-12-11 · DOI: 10.1109/ICET.2009.5353159
S. Farid, F. Ahmed
Image segmentation is a major step in image analysis and processing, and is performed through several methods. In this work, Niblack's method of segmentation, one of the local thresholding techniques, is studied further. Among thresholding techniques for segmenting text documents, Niblack's method gives the most acceptable results. Here the same method is applied to general images, keeping one of its parameters, the weight k, constant while varying the other (the window size) from image to image. The output image is better segmented, but the background is noisy. Improvements in the resulting images are demonstrated by applying the morphological operations of opening and closing. Opening and closing are combinations of the two fundamental morphological operations, dilation and erosion: dilation thickens objects in a binary image by adding pixels to their boundaries, while erosion shrinks them.
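As a rough illustration of the pipeline the abstract describes, Niblack's local threshold T(x, y) = m(x, y) + k·s(x, y) followed by morphological opening and closing, the sketch below uses OpenCV; the window size, the weight k and the structuring-element size are assumed values, not those of the paper.

```python
import numpy as np
import cv2

def niblack_threshold(gray, window=25, k=-0.2):
    """Binarize with T = local mean + k * local std over a window x window patch."""
    img = gray.astype(np.float64)
    mean = cv2.boxFilter(img, ddepth=-1, ksize=(window, window))
    sq_mean = cv2.boxFilter(img * img, ddepth=-1, ksize=(window, window))
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0))
    return ((img > mean + k * std) * 255).astype(np.uint8)

def clean_binary(binary, size=3):
    """Opening removes small bright specks; closing fills small dark gaps."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (size, size))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
```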
{"title":"Application of Niblack's method on images","authors":"S. Farid, F. Ahmed","doi":"10.1109/ICET.2009.5353159","DOIUrl":"https://doi.org/10.1109/ICET.2009.5353159","url":null,"abstract":"Image segmentation is a major step in image analysis and processing. Segmentation is performed through several methods. In this work Niblack's method of segmentation is further studied. It is one of the local thresholding techniques for segmentation. The output of Niblack's method is significant and has most acceptable result out of all thresholding techniques in segmenting text documents. In this work the same method is applied on images keeping one of the variables i.e. weight k of Niblack's method constant while changing the other (window size) from images to images. The output image is better segmented but the background is noisy. Improvements in the resultant images are demonstrated by applying the morphological operations of opening and closing. Opening and closing are combination of two fundamental morphological operations dilation and erosion. Dilation thickens objects in a binary image by adding pixels to the boundaries of the objects, while erosion shrinks objects in a binary image.","PeriodicalId":307661,"journal":{"name":"2009 International Conference on Emerging Technologies","volume":"87 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133172907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A realistic view of 3D models of artifacts using photogrammetric approach
Pub Date: 2009-12-11 · DOI: 10.1109/ICET.2009.5353206
S. Mussadiq, F. Ahmad
Active research is going on in both computer vision and computer graphics to build more accurate and more realistic 3D models of objects. Efforts are being made to obtain a realistic view of an object by texturing it with real images, so that it feels like the real object. A method is proposed for accurate and realistic 3D modeling that uses real ortho-images of the object for texturing. In the proposed method, the model of the object is first created in CAD software from manual measurements of its geometry. A sufficient number of ortho-photographs are then taken to cover the whole object. Once the 3D model is reconstructed, it is texture-mapped with the real images of the object. The model can then be animated to view the object from any desired viewpoint. The proposed technique is suitable for modeling an object and its environment.
{"title":"A realistic view of 3D models of artifacts using photogrammetric approach","authors":"S. Mussadiq, F. Ahmad","doi":"10.1109/ICET.2009.5353206","DOIUrl":"https://doi.org/10.1109/ICET.2009.5353206","url":null,"abstract":"Active research has been going on in both computer vision and computer graphics to have more accurate and more realistic 3D models of an object. Efforts are being made to have a realistic view of an object by texturing it with real images of the object, to have a feel as if it is a real object. A method has been proposed that also makes use of real ortho-mages of the object for texturing. A method has been proposed for an accurate and realistic 3D modeling. In the proposed method first the model of an object will be created using CAD software, by taking manual calculations of its geometry. Then sufficient number of ortho photographs will be taken such that they could cover the whole object. Once the 3D model of an object is reconstructed, the object will be textured map with the real images/photos of the object taken. Then this model can be animated to view the object from any desired viewpoint. The proposed technique is suitable for modeling of an object and its environment.","PeriodicalId":307661,"journal":{"name":"2009 International Conference on Emerging Technologies","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125125555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FPGA implementation of a low power, processor-independent and reusable System-on-Chip platform
Pub Date: 2009-12-11 · DOI: 10.1109/ICET.2009.5353150
Ehsan ul Haq, Muhammad Kazim Hafeez, Muhammad Salman Khan, Shoaib Sial, Arshad Riazuddin
In order to achieve low cost and reduced time-to-market, ASIC and embedded system designers have always sought a basic platform which, once built and verified, can easily be reconfigured and reused. They are also challenged by compatibility issues between their designs and different processors. In this paper, we present a System-on-Chip (SoC) platform architecture which, once built, can be adapted to different processors with minimal effort. Using a bus architecture that allows easy addition and removal of modules, the proposed SoC can be reconfigured and reused as a platform for various projects. We have also included in the chip those modules that are the building blocks of almost all ASIC and embedded applications. Finally, implementation parameters of this SoC on a Xilinx FPGA are reported.
{"title":"FPGA implementation of a low power, processor-independent and reusable System-on-Chip platform","authors":"Ehsan ul Haq, Muhammad Kazim Hafeez, Muhammad Salman Khan, Shoaib Sial, Arshad Riazuddin","doi":"10.1109/ICET.2009.5353150","DOIUrl":"https://doi.org/10.1109/ICET.2009.5353150","url":null,"abstract":"In order to achieve low cost and reduced time to market goals ASIC and Embedded system designers have always struggled to come up with a basic platform, which once built and verified can easily be reconfigured and reused. Moreover they are also been challenged with compatibility issues of their designs with different processors. In this paper, we have presented a System-on-Chip (SoC) platform architecture, which once built can be modified for different processors with minimal effort. Using a bus architecture that allows easy addition and removal of various modules, our proposed SoC can be reconfigured and reused as a platform for various projects. Moreover, we have also included those modules in our chip which are the building blocks of almost all ASIC and embedded applications. Finally, implementation parameters of this SoC on Xilinx FPGA are reported.","PeriodicalId":307661,"journal":{"name":"2009 International Conference on Emerging Technologies","volume":" 44","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120830653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated detection of infected area in lungs
Pub Date: 2009-12-11 · DOI: 10.1109/ICET.2009.5353169
Shabana Habib, Owais Adnan, Nafees-ur-Rahman
Interpretation of medical images is often difficult and time consuming, even for experienced physicians. Medical imaging plays an important role in detecting infected areas in the lungs. Significant efforts within the field of radiation oncology have recently centered on the ability to automatically detect lung infection in a breathing patient in real time during radiation treatment, using digital X-ray technology and image processing. The motivation is to improve radiation treatments and possibly increase survival rates. Diagnosis from X-rays can serve as an initial step in detecting infected areas in the lungs. This paper describes an automated system for the pulmonary region in X-ray images: the computer scans and marks suspicious-looking areas, and radiologists can then focus on those areas and decide whether further evaluation is needed. To improve the diagnostic accuracy of the system, we introduce a pre-processing stage involving intensity adjustment and conversion of the X-ray image to gray scale, with a Gaussian filter used to remove false structures. We then process the digitized image, extract the lungs from the X-ray image, and apply a flood-fill algorithm to the lung region to localize the suspected infected area. As a result, the infected area of the lung is detected.
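A rough sketch of the pre-processing chain the abstract lists: grayscale conversion, intensity adjustment, Gaussian smoothing, then a flood fill to isolate a lung region. The specific operations used here, histogram equalization for the intensity adjustment, Otsu thresholding before the fill, and the seed point, are assumptions rather than the authors' choices.

```python
import numpy as np
import cv2

def preprocess_xray(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # grayscale conversion
    adjusted = cv2.equalizeHist(gray)                # intensity adjustment (assumed form)
    return cv2.GaussianBlur(adjusted, (5, 5), 0)     # Gaussian filter suppresses false structures

def lung_region(smoothed, seed=(60, 180)):
    """Threshold the smoothed X-ray and flood-fill from a seed assumed to lie
    inside a lung field, returning a binary mask of the candidate region."""
    _, binary = cv2.threshold(smoothed, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    mask = np.zeros((binary.shape[0] + 2, binary.shape[1] + 2), np.uint8)
    cv2.floodFill(binary, mask, seed, 128)           # mark the connected region at the seed
    return ((binary == 128) * 255).astype(np.uint8)
```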
{"title":"Automated detection of infected area in lungs","authors":"Shabana Habib, Owais Adnan, Nafees-ur-Rahman","doi":"10.1109/ICET.2009.5353169","DOIUrl":"https://doi.org/10.1109/ICET.2009.5353169","url":null,"abstract":"Interpretation of medical image is often difficult and time consuming, even for experienced physicians. Medical imaging plays an important role in detecting infected area in lungs. Significant efforts within the field of radiation oncology have recently been centered on the ability to automatically detect lung infection in a breathing patient in real-time during radiation treatment using digital x-ray technology and image processing. The motivation of such a goal is to improve radiation treatments, possibly leading to an increase in survival rates. Diagnosis of X-rays can be used as an initial step in detecting infected area in lungs. This paper describes a pulmonary region in X-ray images. We developed a automated system for the X-ray images. The computer scans and marks suspicious looking areas in the image. Radiologists can then focus on those areas and decide if further evaluation is needed. To improve the diagnosis accuracy of this system we introduced a pre-processing stage which involves adjustment of the intensity and conversion to gray scale of X-ray image. Gaussian filter is used to remove the false structure. Then we process the digitized image and extract the lungs from X-ray image. Then we apply blood fill algorithm for lung region, localization of suspected infected area. As a result infected area of lung is detected.","PeriodicalId":307661,"journal":{"name":"2009 International Conference on Emerging Technologies","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127196753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrated Mobility Model (IMM) for VANETs simulation and its impact
Pub Date: 2009-12-11 · DOI: 10.1109/ICET.2009.5353127
M. Alam, M. Sher, S.A. Husain
Mobility models represent real-world scenarios for vehicular ad hoc networks (VANETs) and play a vital role in the performance evaluation of routing protocols. Research now focuses increasingly on the development of realistic mobility models for VANETs. A number of mobility models have been presented and their impact on the performance of routing protocols has been tested. In this paper we introduce a new mobility model, the Integrated Mobility Model (IMM), for VANETs. The IMM integrates the Manhattan Mobility Model, the Freeway Mobility Model, the Stop Sign Model, the Traffic Sign Model and some further characteristics. In addition, we evaluate the routing protocols AODV, DSR and OLSR using the Integrated Mobility Model as well as the Manhattan and Freeway mobility models, and compare the results.
{"title":"Integrated Mobility Model (IMM) for VANETs simulation and its impact","authors":"M. Alam, M. Sher, S.A. Husain","doi":"10.1109/ICET.2009.5353127","DOIUrl":"https://doi.org/10.1109/ICET.2009.5353127","url":null,"abstract":"Mobility models represent real world scenarios for vehicular ad hoc networks (VANETs) and play a vital role in the performance evaluation of routing protocols. More research focus is now on the development of realistic mobility models for vehicular ad hoc networks. A number of mobility models have been presented and their impact on the performance on the routing protocols has been tested. In this paper we have introduced a new mobility model, Integrated Mobility Model (IMM), for vehicular ad hoc networks (VANETs). Integrated Mobility Model (IMM) is an integration of Manhattan Mobility Model, Freeway Mobility Model, Stop Sign Model and Traffic Sign Model and some other characteristics. In addition, we evaluated routing protocols AODV, DSR and OLSR using our Integrated Mobility Model and also Manhattan mobility model and Freeway mobility model and compared the results.","PeriodicalId":307661,"journal":{"name":"2009 International Conference on Emerging Technologies","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126387443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient implementation of Gaussian elimination method to recover generator polynomials of convolutional codes
Pub Date: 2009-12-11 · DOI: 10.1109/ICET.2009.5353183
M. Atif, A. Rauf
One of the most important objectives in wireless communication is to transmit information free of errors and to detect the data correctly. To avoid errors in the communication channel, error correction techniques, also called channel coding, are used; convolutional encoding is the foremost among them. In wireless communication systems, signal strength decreases logarithmically and results in fading. This fading causes random errors or, in the case of deep fades, burst errors. Burst errors are converted to random errors by interleaving, and channel coding is then used to combat the random errors. In convolutional codes, information bits are encoded using primitive polynomials implemented in the form of shift registers. In this paper a method is proposed to detect the generator polynomial and the code rate of convolutionally encoded data once it is received. The information is encoded with convolutional (n, k, m) codes and the generator polynomial is then detected using the Gaussian elimination method; here n denotes the coded output bits (parity and information), k the information bits, and m the length of the shift registers. In the Gaussian elimination method, variables are eliminated step by step; this elimination differs from the usual one in that it is performed over GF(2). The detection algorithm can be used efficiently to match a convolutionally encoded reference stream to one generated by the above-mentioned convolutional encoder, and also to verify the generator polynomial of an encoded output stream before feeding it to the complex decoder, avoiding time-consuming and exhaustive debugging.
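As an illustration of the core operation (not the paper's implementation), the sketch below performs Gaussian elimination over GF(2): rows are bit vectors, row addition is XOR, and pivoting proceeds column by column exactly as in ordinary elimination. How the rows are assembled from windows of the received encoded stream follows the paper and is not reproduced here.

```python
import numpy as np

def gf2_eliminate(A):
    """Row-reduce a binary matrix (entries 0/1) over GF(2).
    Returns the reduced matrix and the list of pivot columns."""
    A = (A.copy() % 2).astype(np.uint8)
    rows, cols = A.shape
    pivots, r = [], 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i, c]), None)
        if pivot is None:
            continue                         # no 1 in this column at or below row r
        A[[r, pivot]] = A[[pivot, r]]        # swap the pivot row into place
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] ^= A[r]                 # XOR is addition over GF(2)
        pivots.append(c)
        r += 1
    return A, pivots
```

Zero rows and non-pivot columns after reduction expose the linear dependencies in the stacked data; in the recovery setting, such dependencies are what constrain the candidate generator polynomials.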
{"title":"Efficient implementation of Gaussian elimination method to recover generator polynomials of convolutional codes","authors":"M. Atif, A. Rauf","doi":"10.1109/ICET.2009.5353183","DOIUrl":"https://doi.org/10.1109/ICET.2009.5353183","url":null,"abstract":"One of the most important objectives in wireless communication is to transmit the information free of errors and to detect the data correctly. With a view to avoid occurrence of errors in communication channel, error correction techniques, also called channel coding, are used. Convolution encoding technique is the forerunner amongst those employed. In wireless communication systems, signal strength decreases logarithmically and results in fading. This fading causes random errors or burst errors (in case of deep fades). The burst errors are converted to random errors by interleaving techniques and then channel coding is used to combat the random errors. In convolutional codes, information bits are encoded by using primitive polynomials implemented in the form of shift registers. In this paper a method is proposed to detect the generator polynomial and the code rate of the convolution encoded data, once received. The information is encoded by Convolution (n, k, m) codes and then its generator polynomial is detected by using the Gaussian Elimination Method. Here n shows the data bit (parity and information), k represents the information bits and m shows the length of the registers. In Gaussian elimination method the variables are removed step by step. This elimination is different from the normal one in a sense that it is implemented over GF (2). This detection algorithm can be utilized efficiently to match convolutionally encoded reference stream to the one generated by above-mentioned convolutional encoder. This can also be utilized to verify the generator polynomial of the encoded output stream before feeding it to the complex decoder to avoid time-consuming and exhaustive debugging.","PeriodicalId":307661,"journal":{"name":"2009 International Conference on Emerging Technologies","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116700899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identification and analysis of performance metrics for real time operating system
Pub Date: 2009-12-11 · DOI: 10.1109/ICET.2009.5353177
Z. Khan, Khalid Hussain, Zafrullah Khan, Sana Ahmed Mir
RTOS performance analysis is critical during the design and integration of embedded software to guarantee that application time constraints will be met at run time. To select an appropriate operating system for an embedded system in a specific application, the OS services need to be analyzed. These OS services are characterized by parameters that together form a set of performance metrics. The performance metrics selected include context switching time, preemption time, interrupt latency and semaphore shuffling time. In this research these performance metrics are analyzed in order to select the right OS for an embedded system for a specific application.
{"title":"Identification and analysis of performance metrics for real time operating system","authors":"Z. Khan, Khalid Hussain, Zafrullah Khan, Sana Ahmed Mir","doi":"10.1109/ICET.2009.5353177","DOIUrl":"https://doi.org/10.1109/ICET.2009.5353177","url":null,"abstract":"RTOS performance analysis is critical during the design and integration of embedded software to guarantee that application time constraints will be met at run time. To select an appropriate Operating System for an embedded system for a specific application, OS services needs to be analyzed. These OS services are identified by parameters to form Performance Metrics. The Performance Metrics selected include Context switching time, Preemption time, Interrupt Latency, Semaphore Shuffling time. In this research the Performance Metrics is analyzed in order to select right OS for the an embedded system for a specific application.","PeriodicalId":307661,"journal":{"name":"2009 International Conference on Emerging Technologies","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129247540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A fingerprint verification system using minutiae and wavelet based features
Pub Date: 2009-12-11 · DOI: 10.1109/ICET.2009.5353157
Umair Mateen Khan, S. Khan, N. Ejaz, Riaz ur Rehman
The minutiae-based approach is the most widely used for fingerprint matching. Minutiae can be extracted either directly from the gray-scale image or from a thinned image. During matching, finding an exact match depends on the best-matched minutiae pairs from both images. For the matching stage, different kinds of features are derived from the extracted minutiae, and the structure of some features provides rotation and translation invariance. The minutiae-based approach also has drawbacks: it requires lengthy preprocessing operations for minutiae extraction and can still produce false minutiae. Previously, post-processing has been used to overcome this problem, but it also eliminates valid minutiae along with false ones. Ultimately, the strength of a matching algorithm depends on the strength of the features extracted from the fingerprint. In our research we present a new approach in which wavelet-based features are fused with minutiae-based features for matching. We find that, among the algorithms we studied, the proposed method has a significant effect on overall performance. Experimental results show that using these features makes the matching process much more accurate, even in the presence of false minutiae.
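A small illustration of the fusion idea (the details are assumptions, not the authors' method): a wavelet-based global descriptor, here sub-band energies from a 2-D wavelet decomposition, is concatenated with a minutiae-derived feature vector supplied by a separate extractor, and fingerprint pairs are scored on the fused vector.

```python
import numpy as np
import pywt

def wavelet_features(gray, wavelet="db4", level=3):
    """Energy of each 2-D wavelet sub-band as a compact global descriptor."""
    coeffs = pywt.wavedec2(gray.astype(np.float32), wavelet, level=level)
    bands = [coeffs[0]] + [band for detail in coeffs[1:] for band in detail]
    return np.array([float(np.sum(b ** 2)) / b.size for b in bands])

def fuse(minutiae_vec, wavelet_vec, w=0.5):
    """Normalize each part and concatenate; minutiae_vec is assumed to come
    from an external minutiae extractor (not shown here)."""
    norm = lambda v: v / (np.linalg.norm(v) + 1e-9)
    return np.concatenate([w * norm(np.asarray(minutiae_vec, float)),
                           (1 - w) * norm(np.asarray(wavelet_vec, float))])

def similarity(feat_a, feat_b):
    """Cosine similarity of two fused vectors; higher means a likelier match."""
    denom = np.linalg.norm(feat_a) * np.linalg.norm(feat_b) + 1e-9
    return float(np.dot(feat_a, feat_b) / denom)
```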
{"title":"A fingerprint verification system using minutiae and wavelet based features","authors":"Umair Mateen Khan, S. Khan, N. Ejaz, Riaz ur Rehman","doi":"10.1109/ICET.2009.5353157","DOIUrl":"https://doi.org/10.1109/ICET.2009.5353157","url":null,"abstract":"Minutiae based approach is most widely used for fingerprint matching. Minutiae can be extracted either directly from gray-scaled image or from a thinned image. During matching, finding an exact match depends on the best matched minutiae pairs from both images. For matching stage, different kinds of features are extracted from extracted minutiae. The structure of some features allows us to have rotation and translation invariance. Minutiae based approach also has some drawbacks because it requires very lengthy preprocessing operations for minutiae extraction and still can result in false minutiae. Previously, to overcome this problem some kind of post-processing is used, which also eliminates valid minutiae along with false ones. So eventually, we can say that the strength of matching algorithm depends on the strength of extracted features from fingerprint. In our research, we have presented a new approach which uses wavelet based features which are fused with minutiae based features for matching purpose. In particular, we find that among the algorithms we studied, our proposed work have significant effects on overall performance. Experiment results show that using these features have made the matching process much more accurate even in the presence of false minutiae.","PeriodicalId":307661,"journal":{"name":"2009 International Conference on Emerging Technologies","volume":"156 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132450777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}