Detection of Intracranial Hypertension using Deep Learning
Benjamin Quachtran, Robert Hamilton, Fabien Scalzo
Pub Date: 2016-12-01  Epub Date: 2017-04-24  DOI: 10.1109/ICPR.2016.7900010  Pages: 2491-2496
Intracranial hypertension, a disorder characterized by elevated pressure in the brain, is typically monitored in neurointensive care and diagnosed only after the elevation has occurred. This reactive approach leaves patients at higher risk of additional complications when an elevation is missed. The detection of intracranial hypertension has been the subject of many recent studies that attempt to characterize the causes of hypertension, in particular by examining waveform morphology. We investigate the use of deep learning, a hierarchical form of machine learning, to model the relationship between hypertension and waveform morphology, enabling accurate detection of the presence of hypertension. Data from 60 patients, each with intracranial pressure recorded over a half-hour span, were used to evaluate the model. We divided each patient's recording into average normalized beats over 30-second segments, assigning each beat a label of high (i.e., greater than 15 mmHg) or low intracranial pressure. The model was tested on its ability to predict the presence of elevated intracranial pressure and was found to be 92.05 ± 2.25% accurate in detecting intracranial hypertension on our dataset.
{"title":"Detection of Intracranial Hypertension using Deep Learning.","authors":"Benjamin Quachtran, Robert Hamilton, Fabien Scalzo","doi":"10.1109/ICPR.2016.7900010","DOIUrl":"https://doi.org/10.1109/ICPR.2016.7900010","url":null,"abstract":"<p><p>Intracranial Hypertension, a disorder characterized by elevated pressure in the brain, is typically monitored in neurointensive care and diagnosed only after elevation has occurred. This reaction-based method of treatment leaves patients at higher risk of additional complications in case of misdetection. The detection of intracranial hypertension has been the subject of many recent studies in an attempt to accurately characterize the causes of hypertension, specifically examining waveform morphology. We investigate the use of Deep Learning, a hierarchical form of machine learning, to model the relationship between hypertension and waveform morphology, giving us the ability to accurately detect presence hypertension. Data from 60 patients, showing intracranial pressure levels over a half hour time span, was used to evaluate the model. We divided each patient's recording into average normalized beats over 30 sec segments, assigning each beat a label of high (i.e. greater than 15 mmHg) or low intracranial pressure. The model was tested to predict the presence of elevated intracranial pressure. The algorithm was found to be 92.05± 2.25% accurate in detecting intracranial hypertension on our dataset.</p>","PeriodicalId":74516,"journal":{"name":"Proceedings of the ... IAPR International Conference on Pattern Recognition. International Conference on Pattern Recognition","volume":"2016 ","pages":"2491-2496"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/ICPR.2016.7900010","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35377867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian approach to learn Bayesian networks using data and constraints
Xiao-Guang Gao, Yu Yang, Zhi-gao Guo, Daqing Chen
Pub Date: 2016-01-01  DOI: 10.1109/ICPR.2016.7900204  Pages: 3667-3672
{"title":"Bayesian approach to learn Bayesian networks using data and constraints","authors":"Xiao-Guang Gao, Yu Yang, Zhi-gao Guo, Daqing Chen","doi":"10.1109/ICPR.2016.7900204","DOIUrl":"https://doi.org/10.1109/ICPR.2016.7900204","url":null,"abstract":"","PeriodicalId":74516,"journal":{"name":"Proceedings of the ... IAPR International Conference on Pattern Recognition. International Conference on Pattern Recognition","volume":"38 1","pages":"3667-3672"},"PeriodicalIF":0.0,"publicationDate":"2016-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87275154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unsupervised Surveillance Video Retrieval Based on Human Action and Appearance
D. Gómez, H. Kjellström
Pub Date: 2014-08-24  DOI: 10.1109/ICPR.2014.792  Pages: 4630-4635
Forensic video analysis is the offline analysis of video aimed at understanding what happened in a scene in the past. Two of its key tasks are the recognition of specific actions, e.g., walking or fighting, and the search for specific persons, also referred to as re-identification. Although these tasks have traditionally been performed manually in forensic investigations, the growing number of cameras and volume of recorded video creates a need for automated analysis. In this paper we propose an unsupervised retrieval system for surveillance videos based on human action and appearance. Given a query window, the system retrieves people performing the same action as the one in the query, the same person performing any action, or the same person performing the same action. We use an adaptive search algorithm that focuses the analysis on relevant frames based on the inter-frame difference of foreground masks. Then, for each analyzed frame, a pedestrian detector extracts windows containing each pedestrian in the scene. For each detection, we use optical flow features to represent its action and color features to represent its appearance. These features are used to compute the probability that the detection matches the query according to the specified criterion. The algorithm is fully unsupervised: no training is performed, and no constraints are imposed on the appearance, actions, or number of actions that appear in the test video. The proposed algorithm is tested on a surveillance video with different people performing different actions, providing satisfactory retrieval performance.
{"title":"Unsupervised Surveillance Video Retrieval Based on Human Action and Appearance","authors":"D. Gómez, H. Kjellström","doi":"10.1109/ICPR.2014.792","DOIUrl":"https://doi.org/10.1109/ICPR.2014.792","url":null,"abstract":"Forensic video analysis is the offline analysis of video aimed at understanding what happened in a scene in the past. Two of its key tasks are the recognition of specific actions, e.g., walking or fighting, and the search for specific persons, also referred to as re-identification. Although these tasks have traditionally been performed manually in forensic investigations, the current growing number of cameras and recorded video leads to the need for automated analysis. In this paper we propose an unsupervised retrieval system for surveillance videos based on human action and appearance. Given a query window, the system retrieves people performing the same action as the one in the query, the same person performing any action, or the same person performing the same action. We use an adaptive search algorithm that focuses the analysis on relevant frames based on the inter-frame difference of foreground masks. Then, for each analyzed frame, a pedestrian detector is used to extract windows containing each pedestrian in the scene. For each detection, we use optical flow features to represent its action and color features to represent its appearance. These extracted features are used to compute the probability that the detection matches the query according to the specified criterion. The algorithm is fully unsupervised, i.e., no training or constraints on the appearance, actions or number of actions that will appear in the test video are made. The proposed algorithm is tested on a surveillance video with different people performing different actions, providing satisfactory retrieval performance.","PeriodicalId":74516,"journal":{"name":"Proceedings of the ... IAPR International Conference on Pattern Recognition. International Conference on Pattern Recognition","volume":"210 1","pages":"4630-4635"},"PeriodicalIF":0.0,"publicationDate":"2014-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76262271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning to Rank the Severity of Unrepaired Cleft Lip Nasal Deformity on 3D Mesh Data
Jia Wu, Raymond Tse, Linda G Shapiro
Pub Date: 2014-08-01  DOI: 10.1109/ICPR.2014.88  Pages: 460-464
Cleft lip is a birth defect that results in deformity of the upper lip and nose. Its severity varies widely, and the results of treatment are influenced by the initial deformity. Objective assessment of severity would help guide prognosis and treatment; however, most assessments are subjective. The purpose of this study is to develop and test quantitative, computer-based methods of measuring cleft lip severity. In this paper, a grid-patch-based measurement of symmetry is introduced, with which a computer program learns to rank the severity of cleft lip on 3D meshes of human infant faces. Three computer-based methods of defining the midfacial reference plane were compared to two manual methods. Four different symmetry features were calculated from these reference planes and evaluated. The results show that the rankings predicted by the proposed features were highly correlated with the expert-provided ranking orders used as ground truth.
{"title":"Learning to Rank the Severity of Unrepaired Cleft Lip Nasal Deformity on 3D Mesh Data.","authors":"Jia Wu, Raymond Tse, Linda G Shapiro","doi":"10.1109/ICPR.2014.88","DOIUrl":"https://doi.org/10.1109/ICPR.2014.88","url":null,"abstract":"<p><p>Cleft lip is a birth defect that results in deformity of the upper lip and nose. Its severity is widely variable and the results of treatment are influenced by the initial deformity. Objective assessment of severity would help to guide prognosis and treatment. However, most assessments are subjective. The purpose of this study is to develop and test quantitative computer-based methods of measuring cleft lip severity. In this paper, a grid-patch based measurement of symmetry is introduced, with which a computer program learns to rank the severity of cleft lip on 3D meshes of human infant faces. Three computer-based methods to define the midfacial reference plane were compared to two manual methods. Four different symmetry features were calculated based upon these reference planes, and evaluated. The result shows that the rankings predicted by the proposed features were highly correlated with the ranking orders provided by experts that were used as the ground truth.</p>","PeriodicalId":74516,"journal":{"name":"Proceedings of the ... IAPR International Conference on Pattern Recognition. International Conference on Pattern Recognition","volume":"2014 ","pages":"460-464"},"PeriodicalIF":0.0,"publicationDate":"2014-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/ICPR.2014.88","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"32949304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A New Approach of Arc Skeletonization for Tree-Like Objects Using Minimum Cost Path
Dakai Jin, Krishna S Iyer, Eric A Hoffman, Punam K Saha
Pub Date: 2014-08-01  DOI: 10.1109/ICPR.2014.172  Pages: 942-947
Traditional arc skeletonization algorithms based on the principle of Blum's transform often produce unwanted spurious branches due to boundary irregularities, digital effects, and other artifacts. This paper presents a new, robust approach to extracting arc skeletons for three-dimensional (3-D) elongated fuzzy objects that avoids spurious branches without requiring post-pruning. Starting from a root voxel, the method iteratively expands the skeleton, in each iteration adding a new branch that connects the farthest voxel to the current skeleton via a minimum-cost geodesic path. The path-cost function is formulated using a novel local significance measure defined by the fuzzy distance transform field, which forces the path to stick to the centerline of the object. The algorithm terminates when the dilated skeletal branches fill the entire object volume or the current farthest voxel fails to generate a meaningful branch. The accuracy of the algorithm has been evaluated using computer-generated blurred and noisy phantoms with known skeletons. Performance in terms of false and missing skeletal branches, as defined by a human expert, has been examined using in vivo CT imaging of human intrathoracic airways. Results from both experiments establish the superiority of the new method over a widely used conventional method in terms of medialness accuracy and robustness with respect to true and false skeletal branches.
{"title":"A New Approach of Arc Skeletonization for Tree-Like Objects Using Minimum Cost Path.","authors":"Dakai Jin, Krishna S Iyer, Eric A Hoffman, Punam K Saha","doi":"10.1109/ICPR.2014.172","DOIUrl":"https://doi.org/10.1109/ICPR.2014.172","url":null,"abstract":"<p><p>Traditional arc skeletonization algorithms using the principle of Blum's transform, often, produce unwanted spurious branches due to boundary irregularities and digital effects on objects and other artifacts. This paper presents a new robust approach of extracting arc skeletons for three-dimensional (3-D) elongated fuzzy objects, which avoids spurious branches without requiring post-pruning. Starting from a root voxel, the method iteratively expands the skeleton by adding a new branch in each iteration that connects the farthest voxel to the current skeleton using a minimum-cost geodesic path. The path-cost function is formulated using a novel measure of local significance factor defined by fuzzy distance transform field, which forces the path to stick to the centerline of the object. The algorithm terminates when dilated skeletal branches fill the entire object volume or the current farthest voxel fails to generate a meaningful branch. Accuracy of the algorithm has been evaluated using computer-generated blurred and noisy phantoms with known skeletons. Performance of the method in terms of false and missing skeletal branches, as defined by human expert, has been examined using <i>in vivo</i> CT imaging of human intrathoracic airways. Experimental results from both experiments have established the superiority of the new method as compared to a widely used conventional method in terms of accuracy of medialness as well as robustness of true and false skeletal branches.</p>","PeriodicalId":74516,"journal":{"name":"Proceedings of the ... IAPR International Conference on Pattern Recognition. International Conference on Pattern Recognition","volume":"2014 ","pages":"942-947"},"PeriodicalIF":0.0,"publicationDate":"2014-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/ICPR.2014.172","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33003518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Watershed Merge Tree Classification for Electron Microscopy Image Segmentation
Ting Liu, Elizabeth Jurrus, Mojtaba Seyedhosseini, Mark Ellisman, Tolga Tasdizen
Pub Date: 2012-11-01  Pages: 133-137
Automated segmentation of electron microscopy (EM) images is a challenging problem. In this paper, we present a novel method that utilizes a hierarchical structure and boundary classification for 2D neuron segmentation. With a membrane detection probability map, a watershed merge tree is built for the representation of hierarchical region merging from the watershed algorithm. A boundary classifier is learned with non-local image features to predict each potential merge in the tree, upon which merge decisions are made with consistency constraints to acquire the final segmentation. Independent of classifiers and decision strategies, our approach proposes a general framework for efficient hierarchical segmentation with statistical learning. We demonstrate that our method leads to a substantial improvement in segmentation accuracy.
{"title":"Watershed Merge Tree Classification for Electron Microscopy Image Segmentation.","authors":"Ting Liu, Elizabeth Jurrus, Mojtaba Seyedhosseini, Mark Ellisman, Tolga Tasdizen","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Automated segmentation of electron microscopy (EM) images is a challenging problem. In this paper, we present a novel method that utilizes a hierarchical structure and boundary classification for 2D neuron segmentation. With a membrane detection probability map, a watershed merge tree is built for the representation of hierarchical region merging from the watershed algorithm. A boundary classifier is learned with non-local image features to predict each potential merge in the tree, upon which merge decisions are made with consistency constraints to acquire the final segmentation. Independent of classifiers and decision strategies, our approach proposes a general framework for efficient hierarchical segmentation with statistical learning. We demonstrate that our method leads to a substantial improvement in segmentation accuracy.</p>","PeriodicalId":74516,"journal":{"name":"Proceedings of the ... IAPR International Conference on Pattern Recognition. International Conference on Pattern Recognition","volume":"2012 ","pages":"133-137"},"PeriodicalIF":0.0,"publicationDate":"2012-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4256108/pdf/nihms606909.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"32889499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D shape isometric correspondence by spectral assignment
Xiang Pan, Linda Shapiro
Pub Date: 2012-01-01  Pages: 2210-2213
Finding correspondences between two 3D shapes is common both in computer vision and computer graphics. In this paper, we propose a general framework that shows how to build correspondences by utilizing the isometric property. We show that the problem of finding such correspondences can be reduced to the problem of spectral assignment, which can be solved by finding the principal eigenvector of the pairwise correspondence matrix. The proposed framework consists of four main steps. First, it obtains initial candidate pairs by performing a preliminary matching using local shape features. Second, it constructs a pairwise correspondence matrix using geodesic distance and these initial pairs. Next, the principal eigenvector of the matrix is computed. Finally, the final correspondence is obtained from the maximal elements of the principal eigenvector. In our experiments, we show that the proposed method is robust under a variety of poses. Furthermore, our results show a great improvement over the best related method in the literature.
{"title":"3D shape isometric correspondence by spectral assignment.","authors":"Xiang Pan, Linda Shapiro","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Finding correspondences between two 3D shapes is common both in computer vision and computer graphics. In this paper, we propose a general framework that shows how to build correspondences by utilizing the isometric property. We show that the problem of finding such correspondences can be reduced to the problem of spectral assignment, which can be solved by finding the principal eigenvector of the pairwise correspondence matrix. The proposed framework consists of four main steps. First, it obtains initial candidate pairs by performing a preliminary matching using local shape features. Second, it constructs a pairwise correspondence matrix using geodesic distance and these initial pairs. Next, the principal eigenvector of the matrix is computed. Finally, the final correspondence is obtained from the maximal elements of the principal eigenvector. In our experiments, we show that the proposed method is robust under a variety of poses. Furthermore, our results show a great improvement over the best related method in the literature.</p>","PeriodicalId":74516,"journal":{"name":"Proceedings of the ... IAPR International Conference on Pattern Recognition. International Conference on Pattern Recognition","volume":"2012 ","pages":"2210-2213"},"PeriodicalIF":0.0,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4166483/pdf/nihms432519.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"32685854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Edge Based Binarization for Video Text Images
Zhou Zhiwei, Liu Linlin, T. C. Lim
Pub Date: 2010-08-23  DOI: 10.1109/ICPR.2010.41  Pages: 133-136
This paper introduces an edge-based binarization method for video text images, especially images with complex backgrounds or low contrast. The method first detects the contour of the text, uses a local thresholding method to decide the inner side of the contour, and then fills the contour to form characters that are recognizable to OCR software. Experimental results show that our method is especially effective on complex-background and low-contrast images.
{"title":"Edge Based Binarization for Video Text Images","authors":"Zhou Zhiwei, Liu Linlin, T. C. Lim","doi":"10.1109/ICPR.2010.41","DOIUrl":"https://doi.org/10.1109/ICPR.2010.41","url":null,"abstract":"This paper introduces a binarization method based on edge for video text images, especially for images with complex background or low contrast. The binarization method first detects the contour of the text, and utilizes a local thresholding method to decide the inner side of the contour, and then fills up the contour to form characters that are recognizable to OCR software. Experiment results show that our method is especially effective on complex background and low contrast images.","PeriodicalId":74516,"journal":{"name":"Proceedings of the ... IAPR International Conference on Pattern Recognition. International Conference on Pattern Recognition","volume":"25 1","pages":"133-136"},"PeriodicalIF":0.0,"publicationDate":"2010-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76030533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Locally Deformable Shape Model to Improve 3D Level Set based Esophagus Segmentation
Sila Kurugol, Necmiye Ozay, Jennifer G Dy, Gregory C Sharp, Dana H Brooks
Pub Date: 2010-08-23  DOI: 10.1109/ICPR.2010.962  Pages: 3955-3958
In this paper we propose a supervised 3D segmentation algorithm to locate the esophagus in thoracic CT scans using a variational framework. To address challenges due to low contrast, several priors are learned from a training set of segmented images. Our algorithm first estimates the centerline based on a spatial model learned at a few manually marked anatomical reference points. Then an implicit shape model is learned by subtracting the centerline and applying PCA to these shapes. To allow local variations in the shapes, we propose to use nonlinear smooth local deformations. Finally, the esophageal wall is located within a 3D level set framework by optimizing a cost function including terms for appearance, the shape model, smoothness constraints and an air/contrast model.
{"title":"Locally Deformable Shape Model to Improve 3D Level Set based Esophagus Segmentation.","authors":"Sila Kurugol, Necmiye Ozay, Jennifer G Dy, Gregory C Sharp, Dana H Brooks","doi":"10.1109/ICPR.2010.962","DOIUrl":"https://doi.org/10.1109/ICPR.2010.962","url":null,"abstract":"<p><p>In this paper we propose a supervised 3D segmentation algorithm to locate the esophagus in thoracic CT scans using a variational framework. To address challenges due to low contrast, several priors are learned from a training set of segmented images. Our algorithm first estimates the centerline based on a spatial model learned at a few manually marked anatomical reference points. Then an implicit shape model is learned by subtracting the centerline and applying PCA to these shapes. To allow local variations in the shapes, we propose to use nonlinear smooth local deformations. Finally, the esophageal wall is located within a 3D level set framework by optimizing a cost function including terms for appearance, the shape model, smoothness constraints and an air/contrast model.</p>","PeriodicalId":74516,"journal":{"name":"Proceedings of the ... IAPR International Conference on Pattern Recognition. International Conference on Pattern Recognition","volume":" ","pages":"3955-3958"},"PeriodicalIF":0.0,"publicationDate":"2010-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/ICPR.2010.962","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"29985684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Content Adaptive Hash Lookups for Near-Duplicate Image Search by Full or Partial Image Queries
Oztan Harmanci, Ismail Haritaoglu
Pub Date: 2010-08-23  DOI: 10.1109/ICPR.2010.391  Pages: 1582-1585
In this paper we present a scalable, high-performance near-duplicate image search method. The proposed algorithm follows the common paradigm of computing local features around repeatable scale-invariant interest points but, unlike existing methods, uses much shorter hashes (40 bits). Leveraging the shortness of the hashes, a novel high-performance search algorithm is introduced that analyzes the reliability of each bit of a hash and performs content-adaptive hash lookups, adaptively adjusting the "range" of each hash bit based on its reliability. Matched features are post-processed to determine the final match results. We show experimentally that the algorithm can detect cropped, resized, print-scanned, and re-encoded images, as well as pieces of images, among thousands of images. The proposed algorithm can search for a 200x200 image piece in a database of 2,250 images of size 2400x4000 in 0.020 seconds on a 2.5 GHz Intel Core 2.
{"title":"Content Adaptive Hash Lookups for Near-Duplicate Image Search by Full or Partial Image Queries","authors":"Harmanci Oztan, R. HaritaogluIsmail","doi":"10.1109/ICPR.2010.391","DOIUrl":"https://doi.org/10.1109/ICPR.2010.391","url":null,"abstract":"In this paper we present a scalable and high performance near-duplicate image search method. The proposed algorithm follows the common paradigm of computing local features around repeatable scale invariant interest points. Unlike existing methods, much shorter hashes are used (40 bits). By leveraging on the shortness of the hashes, a novel high performance search algorithm is introduced which analyzes the reliability of each bit of a hash and performs content adaptive hash lookups by adaptively adjusting the \"range\" of each hash bit based on reliability. Matched features are post-processed to determine the final match results. We experimentally show that the algorithm can detect cropped, resized, print-scanned and re-encoded images and pieces from images among thousands of images. The proposed algorithm can search for a 200x200 piece of image in a database of 2,250 images with size 2400x4000 in 0.020 seconds on 2.5GHz Intel Core 2.","PeriodicalId":74516,"journal":{"name":"Proceedings of the ... IAPR International Conference on Pattern Recognition. International Conference on Pattern Recognition","volume":"73 1","pages":"1582-1585"},"PeriodicalIF":0.0,"publicationDate":"2010-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77289320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}