A Markov Random Field description of fuzzy color segmentation
Pub Date : 2010-07-07  DOI: 10.1109/IPTA.2010.5586796
Angela D'Angelo, J. Dugelay
Image segmentation is a fundamental task in many computer vision applications. In this paper, we describe a new unsupervised color image segmentation algorithm that exploits the color characteristics of the image. The system is based on a color quantization of the image in the Lab color space using the popular eleven culture colors, in order to avoid the well-known problem of oversegmentation. To partially overcome the problem of highlights and shadows in the image, one of the main aspects affecting the performance of color segmentation systems, the proposed approach uses a fuzzy classifier trained on an ad-hoc designed dataset. A Markov Random Field description of the full algorithm is also provided, which helps to remove residual errors through an iterative strategy. The experimental results show the good performance of the proposed approach, which is comparable to state-of-the-art systems even though it is based only on the color information of the image.
{"title":"A Markov Random Field description of fuzzy color segmentation","authors":"Angela D'Angelo, J. Dugelay","doi":"10.1109/IPTA.2010.5586796","DOIUrl":"https://doi.org/10.1109/IPTA.2010.5586796","url":null,"abstract":"Image segmentation is a fundamental task in many computer vision applications. In this paper, we describe a new unsupervised color image segmentation algorithm, which exploits the color characteristics of the image. The introduced system is based on a color quantization of the image in the Lab color space using the popular eleven culture colors in order to avoid the well known problem of oversegmentation. To partially overcome the problem of highlight and shadows in the image, which is one of the main aspect affecting the performance of color segmentation systems, the proposed approach uses a fuzzy classifier trained on an ad-hoc designed dataset. A Markov Random Field description of the full algorithm is moreover provided which helps to remove resilient errors trough the use of an iterative strategy. The experimantal results show the good performance of the proposed approach which is comparable to state of the art systems even if based only on the color information of the image.","PeriodicalId":236574,"journal":{"name":"2010 2nd International Conference on Image Processing Theory, Tools and Applications","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114633141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The computation of the Bhattacharyya distance between histograms without histograms
Pub Date : 2010-07-07  DOI: 10.1109/IPTA.2010.5586745
Séverine Dubuisson
In this paper we present a new method for fast histogram computation and its extension to bin-to-bin histogram distance computation. The idea consists in using the spatial differences between images, or between regions of images (a current one and a reference one), and encoding them into a specific data structure: a tree. The Bhattacharyya distance between two histograms is then computed using an incremental approach that avoids building histograms: we only need the histograms of the reference image, and the spatial differences between the reference and the current image, to compute this distance through an updating process. We compare our approach with the well-known Integral Histogram and obtain better results in terms of processing time while reducing the memory footprint. We show theoretically and experimentally the superiority of our approach in many cases. Finally, we demonstrate its advantages on a real visual tracking application in a particle filter framework, by improving the computation time of the correction step.
{"title":"The computation of the Bhattacharyya distance between histograms without histograms","authors":"Séverine Dubuisson","doi":"10.1109/IPTA.2010.5586745","DOIUrl":"https://doi.org/10.1109/IPTA.2010.5586745","url":null,"abstract":"In this paper we present a new method for fast histogram computing and its extension to bin to bin histogram distance computing. The idea consists in using the information of spatial differences between images, or between regions of images (a current and a reference one), and encoding it into a specific data structure: a tree. The Bhattacharyya distance between two histograms is then computed using an incremental approach that avoid histogram: we just need histograms of the reference image, and spatial differences between the reference and the current image to compute this distance using an updating process. We compare our approach with the well-known Integral Histogram one, and obtain better results in terms of processing time while reducing the memory footprint. We show theoretically and with experimental results the superiority of our approach in many cases. Finally, we demonstrate the advantages of our approach on a real visual tracking application using a particle filter framework by improving its correction step computation time.","PeriodicalId":236574,"journal":{"name":"2010 2nd International Conference on Image Processing Theory, Tools and Applications","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117167444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image database categorization using robust modeling of finite Generalized Dirichlet mixture
Pub Date : 2010-07-07  DOI: 10.1109/IPTA.2010.5586778
M. Ismail, H. Frigui
We propose a novel image database categorization approach using a possibilistic clustering algorithm. The proposed algorithm is based on robust data modeling using the finite Generalized Dirichlet (GD) mixture and generates two types of membership degrees. The first is a posterior probability that indicates the degree to which a point fits the estimated distribution. The second represents the degree of "typicality" and is used to identify and discard noise points. The algorithm minimizes a single objective function to optimize the GD mixture parameters and the possibilistic membership values. This optimization is performed iteratively by dynamically updating the density mixture parameters and the membership values at each iteration. The performance of the proposed algorithm is illustrated by using it to categorize a collection of 500 color images. The results are compared with those obtained by the Fuzzy C-means algorithm.
{"title":"Image database categorization using robust modeling of finite Generalized Dirichlet mixture","authors":"M. Ismail, H. Frigui","doi":"10.1109/IPTA.2010.5586778","DOIUrl":"https://doi.org/10.1109/IPTA.2010.5586778","url":null,"abstract":"We propose a novel image database categorization approach using a possibilistic clustering algorithm. The proposed algorithm is based on a robust data modeling using the Generalized Dirichlet (GD) finite mixture and generates two types of membership degrees. The first one is a posterior probability that indicates the degree to which the point fits the estimated distribution. The second membership represents the degree of “typicality” and is used to indentify and discard noise points. The algorithm minimizes one objective function to optimize GD mixture parameters and possibilistic membership values. This optimization is done iteratively by dynamically updating the density mixture parameters and the membership values in each iteration. The performance of the proposed algorithm is illustrated by using it to categorize a collection of 500 color images. The results are compared with those obtained by the Fuzzy C-means algorithm.","PeriodicalId":236574,"journal":{"name":"2010 2nd International Conference on Image Processing Theory, Tools and Applications","volume":"132 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128485902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Empirical mode decomposition based visual enhancement of underwater images
Pub Date : 2010-07-07  DOI: 10.1109/IPTA.2010.5586758
A. Çelebi, S. Ertürk
Most underwater vehicles are nowadays equipped with vision sensors. However, underwater images captured with optical cameras can be of poor quality due to underwater lighting conditions. In such cases it is necessary to apply image enhancement methods to underwater images in order to improve visual quality as well as interpretability. In this paper, an Empirical Mode Decomposition (EMD) based image enhancement algorithm is applied to underwater images for this purpose. EMD has been shown in the literature to be particularly suitable for non-linear and non-stationary signals, and therefore proves very useful in real-life applications. In the presented approach, each R, G and B channel of the color underwater image is first decomposed separately into Intrinsic Mode Functions (IMFs) using EMD. Then, the enhanced image is constructed by combining the IMFs of each channel with different weights, so as to obtain a new image with increased visual quality. It is shown that the proposed approach provides superior results compared to conventional image enhancement methods such as contrast stretching.
{"title":"Empirical mode decomposition based visual enhancement of underwater images","authors":"A. Çelebi, S. Ertürk","doi":"10.1109/IPTA.2010.5586758","DOIUrl":"https://doi.org/10.1109/IPTA.2010.5586758","url":null,"abstract":"Most underwater vehicles are nowadays equipped with vision sensors. However, underwater images captured using optic cameras can be of poor quality due to lighting conditions underwater. In such cases it is necessary to apply image enhancement methods to underwater images in order to enhance visual quality as well as interpretability. In this paper, an Empirical Mode Decomposition (EMD) based image enhancement algorithm is applied to underwater images for this purpose. EMD has been shown to be particularly suitable for non-linear and non-stationary signals in the literature, and therefore provides very useful in real life applications. In the approach presented in this paper, initially each R, G and B channel of the color underwater image is separately decomposed into Intrinsic Mode Functions (IMFs) using EMD. Then, the enhanced image is constructed by combining the IMFs of each channel with different weights, so as to obtain a new image with increased visual quality. It is shown that the proposed approach provides superior results compared to conventional image enhancement methods such as contrast stretching.","PeriodicalId":236574,"journal":{"name":"2010 2nd International Conference on Image Processing Theory, Tools and Applications","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132662353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Watermarking ancient documents schema using wavelet packets and convolutional code
Pub Date : 2010-07-07  DOI: 10.1109/IPTA.2010.5586787
M. Maatouk, Majd Bellaj, N. Amara
Ancient documents are of major importance in the history of every people and every nation, and they contain information that many people need. It is therefore necessary to preserve these documents in order to build a digital library in the service of the public. Digitizing these documents permits simultaneous access to the same document and makes reproduction possible for documents that often exist in a single copy. This task is considered an important step in the research domain; indeed, much research has been devoted to the processing, compression, segmentation and indexing of such documents. Nevertheless, in digital form there is a threat of hacking, storing, copying, modifying and finally distributing these documents illegally without any loss of quality. We thus face the problem of losing intellectual property because of the lack of methods for protecting the data. To prevent such fraud, watermarking is a promising method for protecting these images. In this context, our work is essentially part of the protection of ancient documents. In this paper, we propose a method for watermarking ancient documents. This method is based on the Wavelet Packet Transform (WPT); it provides good robustness against different signal processing attacks (noise, filtering and compression) while keeping the embedded signature invisible.
{"title":"Watermarking ancient documents schema using wavelet packets and convolutional code","authors":"M. Maatouk, Majd Bellaj, N. Amara","doi":"10.1109/IPTA.2010.5586787","DOIUrl":"https://doi.org/10.1109/IPTA.2010.5586787","url":null,"abstract":"The ancient documents have a major importance in the history of every people and every nation. These documents involve important information that many people need. As a consequence, it is necessary to preserve these documents in order to build a numerical library in the service of the public. Therefore, the necessity of digitizing these documents permits the simultaneous access to the same documents and provides the possibility of the reproduction of these documents existing most of the time in just one example. This task is considered as an important step in the research domain. In fact, many researches have been invested for the processing, compression, segmentation and indexation of these documents. Nevertheless, with a numerical form, there is a threat of hacking, stocking, copying, modifying and finally diffusing these documents in an illegal way without losing their quality. As a consequence, we face the problem of losing the intellectual property because of the lack of methods that concern the protection of data. In order to prevent these frauds, watermarking represents a promising method to protect these images. In this context, our work makes part of protecting ancient documents essentially. In this paper, we have proposed the method of watermarking ancient documents. This method is based on the Wavelet Packet Transform (WPT) and it provides a good robustness which can face different attacks like signal processing (noise, filter and compression) and noticeable signature invisibility.","PeriodicalId":236574,"journal":{"name":"2010 2nd International Conference on Image Processing Theory, Tools and Applications","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133991109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mojette reconstruction from noisy projections
Pub Date : 2010-07-07  DOI: 10.1109/IPTA.2010.5586740
B. Recur, P. Desbarats, J. Domenger
Apart from the usual methods based on the Radon theorem, the Mojette transform offers a specific algorithm, called Corner Based Inversion (CBI), to reconstruct an image from its projections. Contrary to other transforms, it has two interesting properties. First, the acquisition follows the discrete geometry of the image and avoids the well-known irregular sampling problem. Second, it updates projection values during the reconstruction, so that the sinogram contains only data for pixels not yet reconstructed. Unfortunately, the CBI algorithm is noise sensitive, and reconstruction from corrupted data fails. In this paper, we develop a new noise-robust CBI algorithm based on data redundancy and on noise modelling in the projections. This algorithm is applied in discrete tomography from a Radon acquisition. Reconstructed image results are discussed and applications in conventional tomography are detailed.
{"title":"Mojette reconstruction from noisy projections","authors":"B. Recur, P. Desbarats, J. Domenger","doi":"10.1109/IPTA.2010.5586740","DOIUrl":"https://doi.org/10.1109/IPTA.2010.5586740","url":null,"abstract":"Apart from the usual methods based on the Radon theorem, the Mojette transform proposes a specific algorithm called Corner Based Inversion (CBI) to reconstruct an image from its projections. Contrary to other transforms, it offers two interesting properties. First, the acquisition follows discrete image geometry and resolves the well-known irregular sampling problem. Second, it updates projection values during the reconstruction such that the sinogram contains only data for not yet reconstructed pixels. Unfortunately, the CBI algorithm is noise sensitive and reconstruction from corrupted data fails. In this paper, we develop a new noise-robust CBI algorithm based on data redundancy and noise modelling in the projections. This algorithm is applied in discrete tomography from a Radon acquisition. Reconstructed image results are discussed and applications in usual tomography are detailed.","PeriodicalId":236574,"journal":{"name":"2010 2nd International Conference on Image Processing Theory, Tools and Applications","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115643037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An efficient vision system to measure granule velocity and mass flow distribution in fertiliser centrifugal spreading
Pub Date : 2010-07-07  DOI: 10.1109/IPTA.2010.5586738
S. Villette, C. Gée, E. Piron, R. Martin, D. Miclet, M. Paindavoine
This article reports a new approach for measuring the velocity and the mass flow distribution of granules in the vicinity of a spinning disc, in order to improve fertiliser spreading in agriculture. In this approach, the acquisition system consists of a digital camera placed above the disc so that its viewing axis coincides with the disc axle. This provides useful geometrical properties for developing simple and efficient image processing. A specific Hough transform is implemented to extract the relevant data (the polar coordinates of granule trajectories with respect to the disc centre) from the granule streaks found in motion-blurred images. The Hough space directly provides the mean radius of the polar coordinates of the trajectories, from which the mean outlet velocity is deduced. The Hough space also provides the angular distribution of the trajectories, from which an estimate of the mass flow distribution is deduced. Results are compared with those obtained with reference methods.
{"title":"An efficient vision system to measure granule velocity and mass flow distribution in fertiliser centrifugal spreading","authors":"S. Villette, C. Gée, E. Piron, R. Martin, D. Miclet, M. Paindavoine","doi":"10.1109/IPTA.2010.5586738","DOIUrl":"https://doi.org/10.1109/IPTA.2010.5586738","url":null,"abstract":"This article reports a new approach to measure the velocity and the mass flow distribution of granules in the vicinity of a spinning disc in order to improve fertiliser spreading in agriculture. In this approach, the acquisition system consists of a digital camera placed above the disc so that its view axis corresponds to the disc axle. This provides useful geometrical properties to develop a simple and efficient image processing. A specific Hough transform is implemented to extract relevant data (polar coordinates of granule trajectory with respect to the disc centre) from granule streaks deduced from “motion-blurred images”. The Hough space directly provides the mean-radius of the polar coordinates of the trajectories from which the mean value of the outlet velocity is deduced. The Hough space also provides the angular distribution of the trajectories from which an estimation of the mass flow distribution is deduced. Results are compared with those obtained with reference methods.","PeriodicalId":236574,"journal":{"name":"2010 2nd International Conference on Image Processing Theory, Tools and Applications","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117270549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An adaptive algorithm for phase retrieval from high intensity images
Pub Date : 2010-07-07  DOI: 10.1109/IPTA.2010.5586791
Gilad Avidor, E. Gur
In this paper, we present an adaptive Gerchberg-Saxton algorithm for phase retrieval. One of the drawbacks of the original Gerchberg-Saxton algorithm is the poor results it yields for very bright images. We demonstrate how a dynamic phase retrieval approach can improve the correlation between the required image and the reconstructed image by up to 10 percent. The paper gives an explicit explanation of the principle behind the algorithm and presents experimental results that support the dynamic approach.
{"title":"An adaptive algorithm for phase retrieval from high intensity images","authors":"Gilad Avidor, E. Gur","doi":"10.1109/IPTA.2010.5586791","DOIUrl":"https://doi.org/10.1109/IPTA.2010.5586791","url":null,"abstract":"In this paper, we present an adaptive Gerchberg-Saxton algorithm for phase retrieval. One of the drawbacks of the original Gerchberg-Saxton algorithm is the poor results it yields for very bright images. In this paper we demonstrate how a dynamic phase retrieval approach can improve the correlation between the required image and the reconstructed image by up to 10 percent. The paper gives explicit explanations to the principle behind the algorithm and shows experimental results to support the dynamic approach.","PeriodicalId":236574,"journal":{"name":"2010 2nd International Conference on Image Processing Theory, Tools and Applications","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115851793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new descriptor for textured image segmentation based on fuzzy type-2 clustering approach
Pub Date : 2010-07-07  DOI: 10.1109/IPTA.2010.5586746
Lotfi Tlig, M. Sayadi, Farhat Fnaeich
In this paper we present a novel segmentation approach that combines fuzzy clustering and feature extraction. The proposed method forms a new descriptor that combines a set of texture sub-features, derived from the Grating Cell Operator (GCO) responses of an optimized Gabor filter bank, with Local Binary Pattern (LBP) outputs. The new feature vector offers two advantages. First, it considers only the optimized filters. Second, it characterizes both micro- and macro-textures. In addition, an extended version of a type-2 fuzzy c-means clustering algorithm is proposed, the extension being the integration of spatial information into the membership function (MF). The performance of this method is demonstrated by several experiments on natural textures.
{"title":"A new descriptor for textured image segmentation based on fuzzy type-2 clustering approach","authors":"Lotfi Tlig, M. Sayadi, Farhat Fnaeich","doi":"10.1109/IPTA.2010.5586746","DOIUrl":"https://doi.org/10.1109/IPTA.2010.5586746","url":null,"abstract":"In this paper we present a novel segmentation approach that performs fuzzy clustering and feature extraction. The proposed method consists in forming a new descriptor combining a set of texture sub-features derived from the Grating Cell Operator (GCO) responses of an optimized Gabor filter bank, and Local Binary Pattern (LBP) outputs. The new feature vector offers two advantages. First, it only considers the optimized filters. Second, it aims to characterize both micro and macro textures. In addition, an extended version of a type 2 fuzzy c-means clustering algorithm is proposed. The extension is based on the integration of spatial information in the membership function (MF). The performance of this method is demonstrated by several experiments on natural textures.","PeriodicalId":236574,"journal":{"name":"2010 2nd International Conference on Image Processing Theory, Tools and Applications","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116146409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive feature selection for heterogeneous image databases
Pub Date : 2010-07-07  DOI: 10.1109/IPTA.2010.5586751
R. Kachouri, K. Djemal, H. Maaref
Discriminative classification based on various visual characteristics has become a standard technique for image recognition in heterogeneous databases. The problem encountered, however, is the choice of the most relevant features for the content of the considered image database. To this end, feature selection methods are used to remove the effect of outlier features; they thus reduce the cost of feature extraction and improve classification accuracy. We propose, in this paper, an original feature selection method that we call Adaptive Feature Selection (AFS). The proposed method combines the Filter and Wrapper approaches. From an extracted feature set, AFS trains multiple Support Vector Machine (SVM) classifiers. Based on Fisher Linear Discrimination (FLD), it then automatically removes redundant and irrelevant features according to their discrimination power. Extensive experiments are performed on the heterogeneous COREL image database using a large number of features. A comparison with an existing selection method is also provided. The results prove the efficiency and robustness of the proposed AFS method.
{"title":"Adaptive feature selection for heterogeneous image databases","authors":"R. Kachouri, K. Djemal, H. Maaref","doi":"10.1109/IPTA.2010.5586751","DOIUrl":"https://doi.org/10.1109/IPTA.2010.5586751","url":null,"abstract":"Various visual characteristics based discriminative classification has become a standard technique for image recognition tasks in heterogeneous databases. Nevertheless, the encountered problem is the choice of the most relevant features depending on the considered image database content. In this aim, feature selection methods are used to remove the effect of the outlier features. Therefore, they allow to reduce the cost of extracting features and improve the classification accuracy. We propose, in this paper, an original feature selection method, that we call Adaptive Feature Selection (AFS). Proposed method combines Filter and Wrapper approaches. From an extracted feature set, AFS ensures a multiple learning of Support Vector Machine classifiers (SVM). Based on Fisher Linear Discrimination (FLD), it removes then redundant and irrelevant features automatically depending on their corresponding discrimination power. Using a large number of features, extensive experiments are performed on the heterogeneous COREL image database. A comparison with existing selection method is also provided. Results prove the efficiency and the robustness of the proposed AFS method.","PeriodicalId":236574,"journal":{"name":"2010 2nd International Conference on Image Processing Theory, Tools and Applications","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125792585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}