Pub Date: 2012-10-01 | DOI: 10.1109/IPTA.2012.6469562
F. Derraz, L. Peyrodie, A. Taleb-Ahmed, G. Forzy
We present a new unsupervised segmentation method based on an active contours model and a local region texture descriptor. The proposed descriptor intrinsically describes the geometry of textural regions using the shape operator defined in the Beltrami framework. The local texture descriptor is incorporated into the active contours using the Cauchy-Schwarz distance. Texture is discriminated by maximizing the distance between probability density functions, which distinguishes textural objects of interest from the background. We propose a fast split Bregman implementation of our segmentation algorithm based on the dual formulation of the Total Variation norm. Finally, we show results on some challenging images to illustrate the segmentations that are possible.
{"title":"Texture segmentation using globally active contours model and Cauchy-Schwarz distance","authors":"F. Derraz, L. Peyrodie, A. Taleb-Ahmed, G. Forzy","doi":"10.1109/IPTA.2012.6469562","DOIUrl":"https://doi.org/10.1109/IPTA.2012.6469562","url":null,"abstract":"We present a new unsupervised segmentation based active contours model and local region texture descriptor. The proposed local region texture descriptor intrinsically describes the geometry of textural regions using the shape operator defined in Beltrami framework. The local texture descriptor is incorporated in the active contours using the Cauchy-Schwarz distance. The texture is discriminated by maximizing distance between the probability density functions which leads to distinguish textural objects of interest and background. We propose a fast Bregman split implementation of our segmentation algorithm based on the dual formulation of the Total Variation norm. Finally, we show results on some challenging images to illustrate segmentations that are possible.","PeriodicalId":267290,"journal":{"name":"2012 3rd International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114542337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
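The Cauchy-Schwarz distance between probability density functions mentioned in this abstract can be sketched for discrete histograms as follows. This is a generic illustration of the measure only; the function name and the histogram inputs are our own, not the paper's code.

```python
import math

def cauchy_schwarz_divergence(p, q):
    """Cauchy-Schwarz divergence between two discrete PDFs (histograms):
    D_CS(p, q) = -log( <p, q> / (||p|| * ||q||) ).
    It is symmetric and equals zero iff p and q are proportional."""
    inner = sum(a * b for a, b in zip(p, q))
    norm_p = math.sqrt(sum(a * a for a in p))
    norm_q = math.sqrt(sum(b * b for b in q))
    return -math.log(inner / (norm_p * norm_q))
```

Maximizing this quantity between the inside-contour and outside-contour texture histograms pushes the contour toward a boundary separating the two textures.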
Pub Date: 2012-10-01 | DOI: 10.1109/IPTA.2012.6469509
Yannick Dennemont, Guillaume Bouyer, S. Otmane, M. Mallem
This work studies, implements and evaluates a gesture recognition module based on discrete Hidden Markov Models. The module is implemented in Matlab and used from Virtools. It can be used with different inputs and therefore serves different recognition purposes. We focus on 3D positions, the information common to our devices, as inputs for gesture recognition. Experiments are carried out with an infra-red tracked flystick. The recognition rate exceeds 90% with a personalized learning base; otherwise, the results are above 70% in an evaluation with 8 users on a real-time mini-game, roughly 80% for simple gestures and 60% for complex ones.
{"title":"A discrete Hidden Markov models recognition module for temporal series: Application to real-time 3D hand gestures","authors":"Yannick Dennemont, Guillaume Bouyer, S. Otmane, M. Mallem","doi":"10.1109/IPTA.2012.6469509","DOIUrl":"https://doi.org/10.1109/IPTA.2012.6469509","url":null,"abstract":"This work studies, implements and evaluates a gestures recognition module based on discrete Hidden Markov Models. The module is implemented on Matlab and used from Virtools. It can be used with different inputs therefore serves different recognition purposes. We focus on the 3D positions, our devices common information, as inputs for gesture recognition. Experiments are realized with an infra-red tracked flystick. Finally, the recognition rate is more than 90% with a personalized learning base. Otherwise, the results are beyond 70%, for an evaluation of 8 users on a real time mini-game. The rates are basically 80% for simple gestures and 60% for complex ones.","PeriodicalId":267290,"journal":{"name":"2012 3rd International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129105058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
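The discrete-HMM recognition described above rests on scoring an observation sequence against each gesture model with the forward algorithm and picking the best scorer. A minimal sketch, with toy model parameters of our own choosing:

```python
import math

def forward_log_likelihood(obs, pi, A, B):
    """Forward algorithm for a discrete HMM: log P(obs | model).
    pi: initial state probabilities, A: state transition matrix,
    B: emission matrix (B[state][symbol])."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [B[j][o] * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
    return math.log(sum(alpha))

def classify(obs, models):
    """Pick the gesture model (name -> (pi, A, B)) that best explains obs."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))
```

A real gesture module would first quantize 3D positions into a discrete symbol alphabet and train the per-gesture parameters with Baum-Welch.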
Pub Date: 2012-10-01 | DOI: 10.1109/IPTA.2012.6469554
Arkadiusz Pawlik
We present a range of image and video analysis techniques that we have developed in connection with license plate recognition. Our methods focus on two areas: efficient image preprocessing to improve the detection rate on low-quality input, and combining detection results from multiple frames to improve the accuracy of the recognized license plates. To evaluate our algorithms, we have implemented a complete ANPR system that detects and reads license plates. The system can process up to 110 frames per second on a single CPU core and scales well to at least 4 cores. The recognition rate varies with the quality of the video stream (amount of motion blur, resolution), but approaches 100% for clear, sharp license plate input data. The software is currently marketed commercially as CarID1. Some of our methods are more general and may have applications outside the ANPR domain.
{"title":"High performance automatic number plate recognition in video streams","authors":"Arkadiusz Pawlik","doi":"10.1109/IPTA.2012.6469554","DOIUrl":"https://doi.org/10.1109/IPTA.2012.6469554","url":null,"abstract":"We present a range of image and video analysis techniques that we have developed in connection with license plate recognition. Our methods focus on two areas - efficient image preprocessing to improve low-quality detection rate and combining the detection results from multiple frames to improve the accuracy of the recognized license plates. To evaluate our algorithms, we have implemented a complete ANPR system that detects and reads license plates. The system can process up to 110 frames per second on single CPU core and scales well to at least 4 cores. The recognition rate varies depending on the quality of video streams (amount of motion blur, resolution), but approaches 100% for clear, sharp license plate input data. The software is currently marketed commercially as CarID1. Some of our methods are more general and may have applications outside of the ANPR domain.","PeriodicalId":267290,"journal":{"name":"2012 3rd International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"200 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125559339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
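The multi-frame combination step described above can be illustrated, under the simplifying assumption of already-aligned, equal-length reads, by a per-character majority vote; the function below is our sketch, not the CarID implementation:

```python
from collections import Counter

def fuse_plate_reads(reads):
    """Fuse per-frame OCR reads of the same plate by per-position
    majority vote. Assumes the reads are already aligned and of equal
    length; a real system must also handle misdetections, alignment,
    and per-character confidence scores."""
    return "".join(Counter(column).most_common(1)[0][0]
                   for column in zip(*reads))
```

Voting across frames suppresses the occasional per-frame confusions (e.g. B/8, I/1) that a single-frame OCR cannot resolve.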
Pub Date: 2012-10-01 | DOI: 10.1109/IPTA.2012.6469555
R. Samet, S. E. Amrahov, Ali Hikmet Ziroglu
Image segmentation is the process of partitioning an image into meaningful regions that are ready for analysis. Segmentation of rock thin section images is not a trivial task due to the unpredictable structures and features of minerals. In this paper, we propose a Fuzzy Rule-Based Image Segmentation technique for rock thin section images. The proposed technique takes RGB images of rock thin sections as input and outputs images segmented into minerals. To demonstrate the advantage of the proposed technique, the rock thin section images were also segmented with the well-known Fuzzy C-Means technique. Both techniques were applied to many different rock thin section images and the results were compared. The results show that the proposed segmentation technique is more accurate than the known one.
{"title":"Fuzzy Rule-Based Image Segmentation technique for rock thin section images","authors":"R. Samet, S. E. Amrahov, Ali Hikmet Ziroglu","doi":"10.1109/IPTA.2012.6469555","DOIUrl":"https://doi.org/10.1109/IPTA.2012.6469555","url":null,"abstract":"Image segmentation is a process of partitioning the images into meaningful regions that are ready to analyze. Segmentation of rock thin section images is not trivial task due to the unpredictable structures and features of minerals. In this paper, we propose Fuzzy Rule-Based Image Segmentation technique to segment rock thin section images. Proposed technique uses RGB images of rock thin sections as input and gives segmented into minerals images as output. In order to show an advantage of proposed technique the rock thin section images were also segmented by known Fuzzy C-Means technique. Both techniques were applied to many different rock thin section images. The obtained results of proposed Fuzzy Rule-Based Image Segmentation and Fuzzy C-Means techniques were compared. Implementation results showed that proposed image segmentation technique has better accuracy than known ones.","PeriodicalId":267290,"journal":{"name":"2012 3rd International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"292 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114720497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
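The Fuzzy C-Means baseline used for comparison computes soft membership degrees of each pixel to the cluster centers. A minimal sketch of the standard membership formula (the function name and the 1-D intensity inputs are ours):

```python
def fcm_memberships(x, centers, m=2.0):
    """Fuzzy C-Means membership degrees of a sample x to each cluster
    center, with fuzzifier m > 1. The memberships sum to 1; a sample
    coinciding with a center gets crisp membership."""
    d = [abs(x - c) for c in centers]
    if 0.0 in d:
        return [1.0 if di == 0.0 else 0.0 for di in d]
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((d[i] / d[j]) ** p for j in range(len(centers)))
            for i in range(len(centers))]
```

The full FCM algorithm alternates this membership update with a weighted recomputation of the centers until convergence; a rule-based technique instead encodes domain knowledge (here, mineral appearance) as explicit fuzzy rules.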
Pub Date: 2012-10-01 | DOI: 10.1109/IPTA.2012.6469508
S. Martinis
This paper describes the workflow of an automatic near-real-time oil spill detection approach using single-polarized, high-resolution X-band Synthetic Aperture Radar satellite data. Dark formations on the water surface are classified in a completely unsupervised way using an automatic tile-based thresholding procedure. The derived global threshold value initializes a hybrid multi-contextual Markov image model which integrates scale-dependent and spatial contextual information on irregular hierarchical graph structures into the segment-based labeling of slick-covered and slick-free water surfaces. Experiments on TerraSAR-X ScanSAR data acquired during the large-scale oil pollution in the Gulf of Mexico in May 2010 confirm the effectiveness of the proposed method in terms of accuracy and computational effort.
{"title":"Automatic oil spill detection in TerraSAR-X data using multi-contextual Markov modeling on irregular graphs","authors":"S. Martinis","doi":"10.1109/IPTA.2012.6469508","DOIUrl":"https://doi.org/10.1109/IPTA.2012.6469508","url":null,"abstract":"This paper describes the workflow of an automatic near-real time oil spill detection approach using single-polarized high resolution X-Band Synthetic Aperture Radar satellite data. Dark formations on the water surface are classified in a completely unsupervised way using an automatic tile-based thresholding procedure. The derived global threshold value is used for the initialization of a hybrid multi-contextual Markov image model which integrates scale-dependent and spatial contextual information on irregular hierarchical graph structures into the segment-based labeling process of slick-covered and slick-free water surfaces. Experimental investigations performed on TerraSAR-X ScanSAR data acquired during large-scale oil pollutions in the Gulf of Mexico in May 2010 confirm the effectiveness of the proposed method with respect to accuracy and computational effort.","PeriodicalId":267290,"journal":{"name":"2012 3rd International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126962804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
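The tile-based thresholding step needs a histogram threshold for each bimodal tile; Otsu's method is the standard choice for that sub-problem. The sketch below illustrates Otsu's threshold only and does not reproduce the paper's tile selection or Markov refinement:

```python
def otsu_threshold(hist):
    """Otsu's method on a grayscale histogram (list of bin counts):
    return the bin index maximizing between-class variance, separating
    the dark (e.g. slick) and bright (open water) modes."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = sum_b = 0.0
    best_t, best_var = 0, -1.0
    for t, h in enumerate(hist):
        w_b += h                       # background weight so far
        if w_b == 0:
            continue
        w_f = total - w_b              # foreground weight
        if w_f == 0:
            break
        sum_b += t * h
        mu_b = sum_b / w_b
        mu_f = (sum_all - sum_b) / w_f
        var_between = w_b * w_f * (mu_b - mu_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```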
Pub Date: 2012-10-01 | DOI: 10.1109/IPTA.2012.6469541
F. Vernier, Renaud Fallourd, J. Friedt, Yajing Yan, E. Trouvé, J. Nicolas, L. Moreau
Most image processing techniques were first proposed and developed on small images and progressively applied to larger and larger data sets resulting from new sensors and application requirements. In geosciences, digital cameras and remote sensing images can be used to monitor glaciers and to measure their surface velocity by different techniques. However, the image size and the number of acquisitions to be processed in a time series become a critical issue for deriving displacement fields with the conventional correlation technique. In this paper, efficient correlation software is used to compute the motion of a serac fall from optical images and the motion of Alpine glaciers from Synthetic Aperture Radar (SAR) images. The optical images are acquired by a digital camera installed near the Argentière glacier (Chamonix, France); the SAR images are acquired by the high-resolution TerraSAR-X satellite over the Mont-Blanc area. The results illustrate the potential of this software for monitoring glacier flow with camera images acquired every 2 h and with TerraSAR-X scenes covering 30 × 50 km².
{"title":"Glacier flow monitoring by digital camera and space-borne SAR images","authors":"F. Vernier, Renaud Fallourd, J. Friedt, Yajing Yan, E. Trouvé, J. Nicolas, L. Moreau","doi":"10.1109/IPTA.2012.6469541","DOIUrl":"https://doi.org/10.1109/IPTA.2012.6469541","url":null,"abstract":"Most of the image processing techniques have been first proposed and developed on small size images and progressively applied to larger and larger data sets resulting from new sensors and application requirements. In geosciences, digital cameras and remote sensing images can be used to monitor glaciers and to measure their surface velocity by different techniques. However, the image size and the number of acquisitions to be processed to analyze time series become a critical issue to derive displacement fields by the conventional correlation technique. In this paper, an efficient correlation software is used to compute from optical images the motion of a serac fall and from Synthetic Aperture Radar (SAR) images the motion of Alpine glaciers. The optical images are acquired by a digital camera installed near the Argentière glacier (Chamonix, France) and the SAR images are acquired by the high resolution TerraSAR-X satellite over the Mont-Blanc area. The results illustrate the potential of this software to monitor the glacier flow with camera images acquired every 2 h and with the size of the TerraSAR-X scenes covering 30 × 50 km2.","PeriodicalId":267290,"journal":{"name":"2012 3rd International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"181 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124525511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
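The conventional correlation technique referred to above amounts to locating a reference patch in a later image and reading off the offset as the displacement. A brute-force sum-of-squared-differences sketch (real glacier processing uses normalized correlation, subpixel interpolation, and far larger windows):

```python
def match_offset(template, image):
    """Brute-force template matching on 2-D lists: return the (dy, dx)
    position where the template best matches the image, i.e. where the
    sum of squared differences (SSD) is minimal."""
    th, tw = len(template), len(template[0])
    best = None
    for dy in range(len(image) - th + 1):
        for dx in range(len(image[0]) - tw + 1):
            ssd = sum((image[dy + i][dx + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if best is None or ssd < best[0]:
                best = (ssd, dy, dx)
    return best[1], best[2]
```

Subtracting the patch's position in the reference image from the matched position yields the displacement vector for that patch.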
Pub Date: 2012-10-01 | DOI: 10.1109/IPTA.2012.6469556
K. A. Saadi, Khalil Zebbiche, M. Laadjel, M. Morsli
Fingerprints are becoming popular in automated systems and for IT system user authentication. They are unique to each person and allow the instant establishment of personal identity in real-time applications, so enhancing their security in terms of fidelity and integrity becomes paramount. Since fingerprint images are usually compressed with Wavelet Scalar Quantization (WSQ) before they are transmitted over networks, in this paper we apply a fragile watermarking algorithm operating directly in the compressed domain to protect the evidentiary integrity of the WSQ bitstream. This work is motivated by the results obtained by previous video watermarking methods working in the variable length codeword (VLC) domain to provide real-time detection. The principle of the method is to map codewords outside the used codespace: the watermark is embedded into the stream as forced bit errors. The developed algorithm achieves high capacity and preserves the file size of the WSQ bitstream while maintaining high perceptual quality.
{"title":"Real time watermarking to authenticate the WSQ bitstream","authors":"K. A. Saadi, Khalil Zebbiche, M. Laadjel, M. Morsli","doi":"10.1109/IPTA.2012.6469556","DOIUrl":"https://doi.org/10.1109/IPTA.2012.6469556","url":null,"abstract":"Fingerprints are becoming popular in automated systems and for IT system user authentication. They are unique to each person and are designed to allow instant establishment personal identity in real time application. Enhancing their security in terms of fidelity and integrity becomes paramount. Since fingerprint images are usually compressed using Wavelet-packet Scalar Quantization (WSQ) before they are transmitted over networks, in this paper, we apply a fragile watermarking algorithm operating directly in compressed domain for protecting the evidentiary integrity of the WSQ bitstream. This work is motivated by the results obtained in previous video watermarking methods working in variable length codeword (VLC) domain to provide real time detection. The principle of the method is based on mapping the codewords to the outside of the used codespace, the watermark is embedded into stream as forced bit errors. The developed algorithm achieves high capacity and preserves the file size of WSQ bitstream while maintaining high perceptible quality.","PeriodicalId":267290,"journal":{"name":"2012 3rd International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"75 15","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114005467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
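The idea of embedding the watermark as forced errors that map codewords outside the used codespace can be illustrated on a toy stream in which valid codewords are even integers and odd values lie outside the codespace. This is only an analogy of the principle, not the WSQ or VLC bitstream format:

```python
def embed(stream, bits):
    """Embed one watermark bit per codeword: a 1-bit forces the codeword
    out of the (even-valued) codespace by adding 1; a 0-bit leaves it."""
    return [c + b for c, b in zip(stream, bits)]

def detect(marked):
    """Detector: out-of-codespace (odd) codewords signal 1-bits.
    Returns the recovered watermark and the restored original stream,
    so the scheme is both fragile and reversible in this toy model."""
    bits = [c % 2 for c in marked]
    restored = [c - b for c, b in zip(marked, bits)]
    return bits, restored
```

Because each mark is a single forced "error", any tampering that re-encodes the stream destroys the watermark, which is exactly the fragile behavior wanted for integrity checking.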
Pub Date: 2012-10-01 | DOI: 10.1109/IPTA.2012.6469551
E. Goceri, M. Z. Unlu, C. Guzelis, O. Dicle
Fast and accurate liver segmentation is a challenging task in medical image analysis. Liver segmentation is an important step in computer-assisted diagnosis, pre-evaluation of liver transplantation and therapy planning for liver tumors. Magnetic resonance imaging has several advantages, such as freedom from ionizing radiation and good contrast visualization of soft tissue, and innovations in recent technology and image acquisition techniques have made it a major tool in modern medicine. However, the use of magnetic resonance images for liver segmentation has lagged behind applications in the central nervous and musculoskeletal systems. The reasons are the irregular shape, size and position of the liver, contrast agent effects and the similarity of the gray values of neighboring organs. In this study, we therefore present a fully automatic liver segmentation method that uses an approximation of level set based contour evolution on T2-weighted magnetic resonance data sets. The method avoids solving partial differential equations and applies only integer operations in a two-cycle segmentation algorithm. Efficiency is achieved by applying the algorithm to all slices with a constant number of iterations and performing the contour evolution without any user-defined initial contour. The results are evaluated with four different similarity measures and show that the automatic segmentation approach gives successful results.
{"title":"An automatic level set based liver segmentation from MRI data sets","authors":"E. Goceri, M. Z. Unlu, C. Guzelis, O. Dicle","doi":"10.1109/IPTA.2012.6469551","DOIUrl":"https://doi.org/10.1109/IPTA.2012.6469551","url":null,"abstract":"A fast and accurate liver segmentation method is a challenging work in medical image analysis area. Liver segmentation is an important process for computer-assisted diagnosis, pre-evaluation of liver transplantation and therapy planning of liver tumors. There are several advantages of magnetic resonance imaging such as free form ionizing radiation and good contrast visualization of soft tissue. Also, innovations in recent technology and image acquisition techniques have made magnetic resonance imaging a major tool in modern medicine. However, the use of magnetic resonance images for liver segmentation has been slow when we compare applications with the central nervous systems and musculoskeletal. The reasons are irregular shape, size and position of the liver, contrast agent effects and similarities of the gray values of neighbor organs. Therefore, in this study, we present a fully automatic liver segmentation method by using an approximation of the level set based contour evolution from T2 weighted magnetic resonance data sets. The method avoids solving partial differential equations and applies only integer operations with a two-cycle segmentation algorithm. The efficiency of the proposed approach is achieved by applying the algorithm to all slices with a constant number of iteration and performing the contour evolution without any user defined initial contour. The obtained results are evaluated with four different similarity measures and they show that the automatic segmentation approach gives successful results.","PeriodicalId":267290,"journal":{"name":"2012 3rd International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121295351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
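The PDE-free contour evolution described above updates the region using only comparisons on pixel values. The sketch below shows a Chan-Vese-style region-competition sweep in 1-D, which captures the spirit of such update rules but is not the paper's two-cycle algorithm (list-based boundary tracking and smoothing cycles are omitted):

```python
def evolve_labels(img, inside):
    """One region-competition sweep without any PDE: each pixel joins
    the region (inside/outside) whose current mean intensity it is
    closer to. `img` is a 1-D intensity list, `inside` a set of indices;
    both regions are assumed non-empty."""
    c_in = sum(img[i] for i in inside) / len(inside)
    outside = [i for i in range(len(img)) if i not in inside]
    c_out = sum(img[i] for i in outside) / len(outside)
    return {i for i in range(len(img))
            if abs(img[i] - c_in) < abs(img[i] - c_out)}
```

Iterating this sweep until the region stops changing mimics contour convergence without solving a level set PDE.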
Pub Date: 2012-10-01 | DOI: 10.1109/IPTA.2012.6469573
Gizem Akti, Dionysis Goularas
This paper presents a method for converting images into sound. First, frequency components are extracted from the original image: the image is divided into windows representing consecutive time periods, and the dominant frequencies of each window, obtained by Fourier analysis (STFT), are mapped to corresponding sound frequencies. This procedure is applied twice, producing two series of sound frequency components: the first originates from the brightness of the image, the second from the dominant RGB layer. The connection between the visual impression of the image and the psychoacoustic effect of the sound mapping is made by using different musical scales according to the dominant color of the image. The results show that the melody extracted from this analysis produces a distinct psychoacoustic impression, as reported by several volunteers. Although the volunteers could not always make the association between image and sound, they could hardly believe that the music was produced by an algorithmic procedure.
{"title":"Frequency component extraction from color images for specific sound transformation and analysis","authors":"Gizem Akti, Dionysis Goularas","doi":"10.1109/IPTA.2012.6469573","DOIUrl":"https://doi.org/10.1109/IPTA.2012.6469573","url":null,"abstract":"This paper presents a method allowing the conversion of images into sound. Initially, a frequency component extraction is realized from the original image. At this stage, the image is divided into windows in order to represent consecutive different time periods using STFT. Then, the dominant frequencies of each window are mapped into corresponding sound frequencies through Fourier analysis. This procedure is applied twice and two series of sound frequency components are produced: The first is originated from the brightness of the image, the second from the dominant RGB layer. The connection between the visual impression of the image and the psychoacoustic effect of the sound mapping is done by using different musical scales according to the dominant color of the image. The results revealed that the melody extracted from this analysis produces a certain psychoacoustic impression, as it has reported by several volunteers. Despite the fact that volunteers could not always do the association between image and sound, they could hardly believe that the music was produced by an algorithmic procedure.","PeriodicalId":267290,"journal":{"name":"2012 3rd International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125510566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
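The step of constraining the output to a musical scale can be sketched as snapping a frequency to the nearest note of a chosen scale. The MIDI numbering, the example scale, and the function name below are our assumptions, not the paper's code:

```python
import math

def map_to_scale(freq_hz, scale_midi):
    """Snap a frequency (Hz) to the nearest note of a musical scale
    given as MIDI note numbers, and return that note's frequency.
    Uses the equal-temperament relation midi = 69 + 12*log2(f/440)."""
    midi = 69 + 12 * math.log2(freq_hz / 440.0)
    best = min(scale_midi, key=lambda n: abs(n - midi))
    return 440.0 * 2 ** ((best - 69) / 12)
```

Choosing a different `scale_midi` per dominant color is how the method ties the visual impression to a distinct musical mood.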
Pub Date: 2012-10-01 | DOI: 10.1109/IPTA.2012.6469520
J. Abdul-Jabbar, Zena N. Abdulkader
In this paper, a new identification method for iris recognition is presented. Of the four main steps of iris recognition, the traditional segmentation and normalization steps are retained. A non-traditional feature extraction step is applied, in which a new two-dimensional (2-D) elliptical-support Haar wavelet filter bank is used to capture the iris characteristics. The idea is based on a new geometrical image transform, the 2-D elliptical-support wavelet transform (2-D ESWT). A five-level 2-D elliptical-support wavelet decomposition forms a reduced, fixed-length quantized feature vector with improved performance. The Hamming distance is then applied as a final step for iris matching. Experimental results show that the proposed method is reliable and fast, achieving a good recognition rate with a reduced feature vector length and thus a less complex implementation.
{"title":"Iris recognition using 2-D elliptical-support wavelet filter bank","authors":"J. Abdul-Jabbar, Zena N. Abdulkader","doi":"10.1109/IPTA.2012.6469520","DOIUrl":"https://doi.org/10.1109/IPTA.2012.6469520","url":null,"abstract":"In this paper, a new identification method for iris recognition is presented. Among the four main steps of iris recognition, traditional segmentation and normalization steps are utilized in the proposed method. A non-traditional step for feature extraction is applied where a new bank of two-dimensional (2-D) elliptical-support wavelet Haar filter bank is used to capture the iris characteristics. The idea is based on a new geometrical image transform called 2-D elliptical-support wavelet transform (2-D ESWT). A five-level 2-D elliptical-support wavelet decomposition is needed to form a reduced fixed length quantized feature vector with improved performance. The efficient approach of Hamming distance is then applied as a final step for iris matching. Experimental results show that the proposed method is reliable with rapid recognition, since it achieves good recognition rate with reduced feature vector length. Thus, a less complex-implementation can be obtained for this identification method.","PeriodicalId":267290,"journal":{"name":"2012 3rd International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122489557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
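The two building blocks named above, Haar wavelet decomposition and Hamming-distance matching, can be sketched in 1-D: one Haar analysis step plus a normalized Hamming distance between binary feature codes. This is a simplified illustration, not the 2-D elliptical-support transform itself:

```python
def haar_step(signal):
    """One level of the 1-D Haar wavelet transform: pairwise averages
    (approximation) and pairwise half-differences (detail).
    Assumes the signal length is even."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, det

def hamming(code_a, code_b):
    """Normalized Hamming distance between two equal-length binary codes:
    the fraction of positions that disagree (0 = identical)."""
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)
```

In an iris pipeline, the decomposition is applied recursively (five levels in the paper), the retained coefficients are quantized to bits, and two irises match when their codes' Hamming distance falls below a threshold.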