"Efficient Evaluation of SVM Classifiers Using Error Space Encoding," Nisarg Raval, Rashmi Vilas Tonge, C. V. Jawahar. 2014 22nd International Conference on Pattern Recognition (ICPR). doi:10.1109/ICPR.2014.755

Many computer vision tasks require efficient evaluation of Support Vector Machine (SVM) classifiers on large image databases. We propose a novel Error Space Encoding (ESE) scheme for SVM evaluation that exploits the large number of classifiers already evaluated on a similar dataset. We model the problem as an encoding of a novel classifier (the query) in terms of existing classifiers (the query logs). With sufficiently large query logs, we show that ESE performs far better than existing encoding schemes. With this method we retrieve nearly 100% correct top-k images from a dataset of 1 million images spanning 1000 categories. We also demonstrate the application of our method to relevance feedback and query expansion, and show that it achieves the same accuracy 90 times faster than exhaustive SVM evaluation.
"Inferring Plane Orientation from a Single Motion Blurred Image," M. P. Rao, A. Rajagopalan, G. Seetharaman. 2014 22nd International Conference on Pattern Recognition (ICPR). doi:10.1109/ICPR.2014.364

We present a scheme for recovering the orientation of a planar scene from a single translationally motion-blurred image. By leveraging the homography relationship among image coordinates of 3D points lying on a plane, and by exploiting natural correspondences among the extremities of the blur kernels derived from the motion-blurred observation, the proposed method can accurately infer the normal of the planar surface. We validate our approach on synthetic as well as real planar scenes.
"Super-resolution Reconstruction for Binocular 3D Data," Wei-Tsung Hsiao, Jin-Jang Leou, H. Hsiao. 2014 22nd International Conference on Pattern Recognition (ICPR). doi:10.1109/ICPR.2014.721

In this study, a super-resolution reconstruction approach for binocular 3D data is proposed. The aim is to obtain a high-resolution (HR) disparity map from a low-resolution (LR) binocular image pair via super-resolution reconstruction. The proposed approach consists of five stages: initial disparity map estimation using local aggregation, disparity plane model computation, global energy cost minimization, HR disparity map composition by region-based fusion (selection), and fused HR disparity map refinement. Experimental results show that, in terms of PSNR and bad pixel rate (BPR), the final HR disparity maps produced by the proposed approach are better than those of four comparison approaches.
"Information Divergence Based Saliency Detection with a Global Center-Surround Mechanism," Ibrahim M. H. Rahman, C. Hollitt, Mengjie Zhang. 2014 22nd International Conference on Pattern Recognition (ICPR). doi:10.1109/ICPR.2014.590

In this paper, a novel technique for saliency detection called Global Information Divergence is proposed. The technique is based on the divergence in information between two regions. Patches are first extracted at multiple scales from the input image, and their dimensionality is reduced using Principal Component Analysis. The information divergence is then evaluated between the reduced-dimensionality patches of a center region and a surround region. Our technique uses a global method for defining the center patch and the surround patches collectively. The technique is tested on four competitive and complex datasets for both saliency detection and segmentation. The results show good performance in both saliency-map quality and speed compared with 16 state-of-the-art techniques.
"Computer Assisted Analysis System of Electroencephalogram for Diagnosing Epilepsy," Malik Anas Ahmad, N. Khan, W. Majeed. 2014 22nd International Conference on Pattern Recognition (ICPR). doi:10.1109/ICPR.2014.583

Automation of electroencephalogram (EEG) analysis can significantly help the neurologist in diagnosing epilepsy. In recent years, much work has been done on computer-assisted analysis for detecting epileptic activity in an EEG, yet there remains a significant need to make these systems more convenient and informative for the neurologist. After briefly discussing existing work, we suggest an approach that can make these systems more helpful, detailed, and precise. In our approach, each epoch of each channel is handled separately for each type of epileptic pattern. Feature extraction starts by applying a multilevel Discrete Wavelet Transform (DWT) to each non-overlapping 1 s epoch. Principal Component Analysis (PCA) is then applied to reduce the effect of redundant and noisy data, and a Support Vector Machine (SVM) classifies each epoch as epileptic or non-epileptic. The system also lets the user mark any mistakes they encounter; these corrective markings are saved as training examples. The rationale behind retraining is that, when several examples share the same attributes but carry different labels, the classifier converges to the most populous label, so on retraining it improves its classification and adapts to the user. Finally, we discuss the results obtained so far. Owing to limitations in the available data, we can only report classification performance for generalised absence seizures. The reported accuracy is obtained on a diverse dataset of 21 patients from Punjab Institute of Mental Health (PIMH) and 21 patients from Children's Hospital Boston (CHB), which differ in channel count and sampling frequency; this demonstrates the robustness of our algorithm.
"Pseudo-Marginal Bayesian Multiple-Class Multiple-Kernel Learning for Neuroimaging Data," Andrew D. O'Harney, A. Marquand, K. Rubia, K. Chantiluke, Anna B. Smith, Ana Cubillo, C. Blain, M. Filippone. 2014 22nd International Conference on Pattern Recognition (ICPR). doi:10.1109/ICPR.2014.549

In clinical neuroimaging applications where subjects belong to one of multiple classes of disease states and multiple imaging sources are available, the aim is to achieve accurate classification while assessing the importance of the sources in the classification task. This work proposes fully Bayesian multiple-class multiple-kernel learning based on Gaussian Processes, which offers flexible classification and a sound quantification of uncertainty in parameter estimates and predictions. Exact inference of parameters and accurate quantification of uncertainty in Gaussian Process models, however, pose a computationally challenging problem. This paper applies advanced inference techniques based on Markov chain Monte Carlo and unbiased estimates of the marginal likelihood, and demonstrates that they carry out inference accurately and efficiently on synthetic data and real clinical neuroimaging data. These results are important as they further progress toward computationally feasible fully Bayesian models for a wide range of real-world applications.
"3D Face Reconstruction via Feature Point Depth Estimation and Shape Deformation," Quan Xiao, Lihua Han, Peizhong Liu. 2014 22nd International Conference on Pattern Recognition (ICPR). doi:10.1109/ICPR.2014.392

Since a human face can be represented by a few feature points (FPs) with little redundant information, and approximated by a linear combination of a small number of prototypical faces, we propose a two-step 3D face reconstruction approach comprising FP depth estimation and shape deformation. The proposed approach can reconstruct a realistic 3D face from a single 2D frontal face image. In the first step, a coupled dictionary learning method based on sparse representation is employed to explore the underlying mappings between 2D and 3D training FPs, from which the depths of the FPs are estimated. In the second step, a novel shape deformation method reconstructs the 3D face by combining a small number of the most relevant faces deformed according to the estimated FPs. The approach captures the distributions of 2D and 3D faces and the underlying mappings between them well, because faces are represented by low-dimensional FPs and their distributions are described by sparse representations. Moreover, the approach is flexible, since each step can be modified independently. Extensive experiments on the BJUT_3D database validate the effectiveness of the proposed approach.
"Representation Learning for Contextual Object and Region Detection in Remote Sensing," Orhan Firat, G. Can, F. Yarman-Vural. 2014 22nd International Conference on Pattern Recognition (ICPR). doi:10.1109/ICPR.2014.637

The performance of object recognition and classification on remote sensing imagery is highly dependent on the quality of extracted features, the amount of labelled data, and the priors defined for contextual models. In this study, we examine representation learning opportunities for remote sensing. First, we address the localization of contextual cues for complex object detection using disentangling factors learned from a small amount of labelled data; the complex object, which consists of several sub-parts, is then represented within a Conditional Random Field (CRF) framework. Second, end-to-end target detection using convolutional sparse auto-encoders (CSA) trained on a large amount of unlabelled data is analysed. The proposed methodologies are tested on the complex airfield detection problem using CRFs, and on the recognition of dispersal areas, park areas, taxi routes, and airplanes using CSA; the method is also tested on the detection of dry docks in harbours. The performance of the proposed method is compared with standard feature engineering methods and found competitive with currently used rule-based and supervised methods.
"Compact Signature-Based Compressed Video Matching Using Dominant Color Profiles (DCP)," Saddam Bekhet, Amr Ahmed. 2014 22nd International Conference on Pattern Recognition (ICPR). doi:10.1109/ICPR.2014.674

This paper presents a novel technique for efficient and generic matching of compressed video shots through compact signatures extracted directly, without decompression. The compact signature is based on the Dominant Color Profile (DCP), a sequence of dominant colors extracted and arranged as a sequence of spikes, in analogy to the human retinal representation of a scene. The proposed signature represents a given video shot with ~490 integer values, facilitating real-time retrieval of a maximum set of matching videos. The technique works directly on MPEG-compressed videos, without full decompression, as it uses the DC-image as the basis for extracting color features. The DC-image has a highly reduced size while retaining most visual aspects, and provides high performance compared to the full I-frame. Experiments on various standard datasets show the promising performance of the proposed technique, in both accuracy and computational efficiency.
"Learning Flexible Binary Code for Linear Projection Based Hashing with Random Forest," Shuze Du, Wei Zhang, Shifeng Chen, Y. Wen. 2014 22nd International Conference on Pattern Recognition (ICPR). doi:10.1109/ICPR.2014.464

Existing linear projection based hashing methods have made much progress in finding the approximate nearest neighbors of a given query. They perform well with short codes, but their code length depends on the original data dimension, so their performance cannot be further improved by using more bits for low-dimensional data. In addition, for high-dimensional data, producing each bit by a sign function is not a good choice. In this paper, we propose a novel random forest based approach to cope with these shortcomings. The bits are obtained by recording the path a point traverses in each tree of the forest. We then propose a new metric to calculate the similarity between any two codes. Experimental results on two large benchmark datasets show that our approach outperforms its counterparts, demonstrating its superiority over existing state-of-the-art hashing methods for descriptor retrieval.