3D priors for scene learning from a single view
Pub Date : 2008-06-23, DOI: 10.1109/CVPRW.2008.4563034
D. Rother, K. A. Patwardhan, I. Aganj, G. Sapiro
A framework for scene learning from a single still video camera is presented in this work. In particular, the camera transformation and the direction of the shadows are learned using information extracted from pedestrians walking in the scene. The proposed approach poses scene learning as a likelihood maximization problem, efficiently solved via factorization and dynamic programming, and amenable to an online implementation. We introduce a 3D prior to model the pedestrian's appearance from any viewpoint, and learn it using a standard off-the-shelf consumer video camera and the Radon transform. This 3D prior, or "appearance model," is used to quantify the agreement between the tentative parameters and the actual video observations, taking into account not only the pixels occupied by the pedestrian, but also those occupied by his shadows and/or reflections. The presentation of the framework is complemented with an example of a casual video scene, showing the importance of the learned 3D pedestrian prior and the accuracy of the proposed approach.
{"title":"3D priors for scene learning from a single view","authors":"D. Rother, K. A. Patwardhan, I. Aganj, G. Sapiro","doi":"10.1109/CVPRW.2008.4563034","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563034","url":null,"abstract":"A framework for scene learning from a single still video camera is presented in this work. In particular, the camera transformation and the direction of the shadows are learned using information extracted from pedestrians walking in the scene. The proposed approach poses the scene learning estimation as a likelihood maximization problem, efficiently solved via factorization and dynamic programming, and amenable to an online implementation. We introduce a 3D prior to model the pedestrianpsilas appearance from any viewpoint, and learn it using a standard off-the-shelf consumer video camera and the Radon transform. This 3D prior or ldquoappearance modelrdquo is used to quantify the agreement between the tentative parameters and the actual video observations, taking into account not only the pixels occupied by the pedestrian, but also those occupied by the his shadows and/or reflections. The presentation of the framework is complemented with an example of a casual video scene showing the importance of the learned 3D pedestrian prior and the accuracy of the proposed approach.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130778601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A methodology for quality assessment in tensor images
Pub Date : 2008-06-23, DOI: 10.1109/CVPRW.2008.4562965
E. Muñoz-Moreno, S. Aja‐Fernández, M. Martín-Fernández
Since tensors have become more and more popular in image processing, assessing the quality of tensor images is necessary for evaluating the advanced processing algorithms that deal with this kind of data. In this paper, we present the methodology that should be followed to extend well-known image quality measures to tensor data. Two such measures, based on structural comparison, are adapted to tensor images, and their performance is shown by a set of examples. These experiments highlight the advantages of structure-based measures, as well as the need to consider all tensor components in the quality assessment.
{"title":"A methodology for quality assessment in tensor images","authors":"E. Muñoz-Moreno, S. Aja‐Fernández, M. Martín-Fernández","doi":"10.1109/CVPRW.2008.4562965","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4562965","url":null,"abstract":"Since tensor usage has become more and more popular in image processing, the assessment of the quality between tensor images is necessary for the evaluation of the advanced processing algorithms that deal with this kind of data. In this paper, we expose the methodology that should be followed to extend well-known image quality measures to tensor data. Two of these measures based on structural comparison are adapted to tensor images and their performance is shown by a set of examples. By means of these experiments the advantages of structural based measures will be highlighted, as well as the need for considering all the tensor components in the quality assessment.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132851702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Active sampling via tracking
Pub Date : 2008-06-23, DOI: 10.1109/CVPRW.2008.4563069
P. Roth, H. Bischof
Labeled training data is required to learn an object detector. Since unlabeled training data is often given as an image sequence, we propose a tracking-based approach to minimize the manual effort when learning an object detector. The main idea is to apply a tracker within an active on-line learning framework for selecting and labeling unlabeled samples. For that purpose, the current classifier is evaluated on a test image and the obtained detection result is verified by the tracker. In this way, the most valuable samples can be estimated and used for updating the classifier. Thus, the number of needed samples can be reduced and an incrementally better detector is obtained. To enable efficient learning (i.e., to achieve real-time performance) and to ensure robust tracking results, we apply on-line boosting for both learning and tracking. If the tracker can be initialized automatically, no user interaction is needed, and we have an autonomous learning/labeling system. In the experiments, the approach is evaluated in detail for learning a face detector. In addition, to show its generality, results for completely different objects are also presented.
{"title":"Active sampling via tracking","authors":"P. Roth, H. Bischof","doi":"10.1109/CVPRW.2008.4563069","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563069","url":null,"abstract":"To learn an object detector labeled training data is required. Since unlabeled training data is often given as an image sequence we propose a tracking-based approach to minimize the manual effort when learning an object detector. The main idea is to apply a tracker within an active on-line learning framework for selecting and labeling unlabeled samples. For that purpose the current classifier is evaluated on a test image and the obtained detection result is verified by the tracker. In this way the most valuable samples can be estimated and used for updating the classifier. Thus, the number of needed samples can be reduced and an incrementally better detector is obtained. To enable efficient learning (i.e., to have real-time performance) and to assure robust tracking results, we apply on-line boosting for both, learning and tracking. If the tracker can be initialized automatically no user interaction is needed and we have an autonomous learning/labeling system. In the experiments the approach is evaluated in detail for learning a face detector. In addition, to show the generality, also results for completely different objects are presented.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"51 Suppl 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132006732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive color classification for structured light systems
Pub Date : 2008-06-23, DOI: 10.1109/CVPRW.2008.4563048
P. Fechteler, P. Eisert
We present a system to capture highly accurate 3D models of faces from just one photo, without the need for specialized hardware: only a consumer-grade digital camera and a projector are required. The proposed 3D face scanner utilizes structured light techniques: a colored pattern is projected onto the face of interest while a photo is taken. Then, the 3D geometry is calculated from the distortions of the pattern detected in the face. This is performed by triangulating the pattern found in the captured image against the projected one.
{"title":"Adaptive color classification for structured light systems","authors":"P. Fechteler, P. Eisert","doi":"10.1109/CVPRW.2008.4563048","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563048","url":null,"abstract":"We present a system to capture high accuracy 3D models of faces by taking just one photo without the need of specialized hardware, just a consumer grade digital camera and beamer. The proposed 3D face scanner utilizes structured light techniques: A colored pattern is projected into the face of interest while a photo is taken. Then, the 3D geometry is calculated based on the distortions of the pattern detected in the face. This is performed by triangulating the pattern found in the captured image with the projected one.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130604870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Embedded contours extraction for high-speed scene dynamics based on a neuromorphic temporal contrast vision sensor
Pub Date : 2008-06-23, DOI: 10.1109/CVPRW.2008.4563153
A. Belbachir, M. Hofstätter, Nenad Milosevic, P. Schön
The paper presents a compact vision system for efficient contour extraction in high-speed applications. By exploiting the ultra-high temporal resolution and the sparse representation of the sensor's data in reacting to scene dynamics, the system fosters efficient embedded computer vision for ultra-high-speed applications. The results reported in this paper show the sensor output quality for a wide range of object velocities (5-40 m/s), and demonstrate that the object data volume is independent of the velocity and that the object quality remains steady. The influence of object velocity on high-performance embedded computer vision is also discussed.
{"title":"Embedded contours extraction for high-speed scene dynamics based on a neuromorphic temporal contrast vision sensor","authors":"A. Belbachir, M. Hofstätter, Nenad Milosevic, P. Schön","doi":"10.1109/CVPRW.2008.4563153","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563153","url":null,"abstract":"The paper presents a compact vision system for efficient contours extraction in high-speed applications. By exploiting the ultra high temporal resolution and the sparse representation of the sensorpsilas data in reacting to scene dynamics, the system fosters efficient embedded computer vision for ultra high-speed applications. The results reported in this paper show the sensor output quality for a wide range of object velocity (5-40 m/s), and demonstrate the object data volume independence from the velocity as well as the steadiness of the object quality. The influence of object velocity on high-performance embedded computer vision is also discussed.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130285923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gromov-Hausdorff distances in Euclidean spaces
Pub Date : 2008-06-23, DOI: 10.1109/CVPRW.2008.4563074
Facundo Mémoli
The purpose of this paper is to study the relationship between measures of dissimilarity between shapes in Euclidean space. We first concentrate on the pair Gromov-Hausdorff distance (GH) versus Hausdorff distance under the action of Euclidean isometries (EH). We (1) show that they are comparable in a precise sense that is not the linear behaviour one would expect, and (2) explain the source of this phenomenon via explicit constructions. Finally, (3) by conveniently modifying the expression for the GH distance, we recover the EH distance. This allows us to uncover a connection linking the problem of computing GH and EH with the family of Euclidean Distance Matrix completion problems. The second pair of dissimilarity notions we study is the so-called Lp-Gromov-Hausdorff distance versus the Earth Mover's distance under the action of Euclidean isometries. We obtain results about comparability in this situation as well.
{"title":"Gromov-Hausdorff distances in Euclidean spaces","authors":"Facundo Mémoli","doi":"10.1109/CVPRW.2008.4563074","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563074","url":null,"abstract":"The purpose of this paper is to study the relationship between measures of dissimilarity between shapes in Euclidean space. We first concentrate on the pair Gromov-Hausdorff distance (GH) versus Hausdorff distance under the action of Euclidean isometries (EH). Then, we (1) show they are comparable in a precise sense that is not the linear behaviour one would expect and (2) explain the source of this phenomenon via explicit constructions. Finally, (3) by conveniently modifying the expression for the GH distance, we recover the EH distance. This allows us to uncover a connection that links the problem of computing GH and EH and the family of Euclidean Distance Matrix completion problems. The second pair of dissimilarity notions we study is the so called Lp-Gromov-Hausdorff distance versus the Earth Moverpsilas distance under the action of Euclidean isometries. We obtain results about comparability in this situation as well.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130441785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Entropy-based active learning for object recognition
Pub Date : 2008-06-23, DOI: 10.1109/CVPRW.2008.4563068
Alex Holub, P. Perona, M. Burl
Most methods for learning object categories require large amounts of labeled training data. However, obtaining such data can be a difficult and time-consuming endeavor. We have developed a novel, entropy-based "active learning" approach which makes significant progress on this problem. The main idea is to sequentially acquire labeled data by presenting an oracle (the user) with unlabeled images that will be particularly informative when labeled. Active learning adaptively prioritizes the order in which the training examples are acquired, which, as shown by our experiments, can significantly reduce the overall number of training examples required to reach near-optimal performance. At first glance this may seem counter-intuitive: how can the algorithm know whether a group of unlabeled images will be informative, when, by definition, there is no label directly associated with any of the images? Our approach is based on choosing an image to label that maximizes the expected amount of information we gain about the set of unlabeled images. The technique is demonstrated in several contexts, including improving the efficiency of Web image-search queries and open-world visual learning by an autonomous agent. Experiments on a large set of 140 visual object categories taken directly from text-based Web image searches show that our technique can provide large improvements (up to a 10x reduction in the number of training examples needed) over baseline techniques.
{"title":"Entropy-based active learning for object recognition","authors":"Alex Holub, P. Perona, M. Burl","doi":"10.1109/CVPRW.2008.4563068","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563068","url":null,"abstract":"Most methods for learning object categories require large amounts of labeled training data. However, obtaining such data can be a difficult and time-consuming endeavor. We have developed a novel, entropy-based ldquoactive learningrdquo approach which makes significant progress towards this problem. The main idea is to sequentially acquire labeled data by presenting an oracle (the user) with unlabeled images that will be particularly informative when labeled. Active learning adaptively prioritizes the order in which the training examples are acquired, which, as shown by our experiments, can significantly reduce the overall number of training examples required to reach near-optimal performance. At first glance this may seem counter-intuitive: how can the algorithm know whether a group of unlabeled images will be informative, when, by definition, there is no label directly associated with any of the images? Our approach is based on choosing an image to label that maximizes the expected amount of information we gain about the set of unlabeled images. The technique is demonstrated in several contexts, including improving the efficiency of Web image-search queries and open-world visual learning by an autonomous agent. Experiments on a large set of 140 visual object categories taken directly from text-based Web image searches show that our technique can provide large improvements (up to 10 x reduction in the number of training examples needed) over baseline techniques.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126500225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A parallel color-based particle filter for object tracking
Pub Date : 2008-06-23, DOI: 10.1109/CVPRW.2008.4563148
Henry Medeiros, Johnny Park, A. Kak
Porting well-known computer vision algorithms to low-power, high-performance computing devices such as SIMD linear processor arrays can be a challenging task. One especially useful such algorithm is the color-based particle filter, which has been applied successfully by many research groups to the problem of tracking non-rigid objects. In this paper, we propose an implementation of the color-based particle filter suitable for SIMD processors. The main focus of our work is the parallel computation of the particle weights. This step is the major bottleneck of standard implementations of the color-based particle filter, since it requires knowledge of the histograms of the regions surrounding each hypothesized target position. We expect this approach to perform faster on an SIMD processor, even one running at much lower clock speeds, than an implementation on a standard desktop computer.
{"title":"A parallel color-based particle filter for object tracking","authors":"Henry Medeiros, Johnny Park, A. Kak","doi":"10.1109/CVPRW.2008.4563148","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563148","url":null,"abstract":"Porting well known computer vision algorithms to low power, high performance computing devices such as SIMD linear processor arrays can be a challenging task. One especially useful such algorithm is the color-based particle filter, which has been applied successfully by many research groups to the problem of tracking non-rigid objects. In this paper, we propose an implementation of the color-based particle filter suitable for SIMD processors. The main focus of our work is on the parallel computation of the particle weights. This step is the major bottleneck of standard implementations of the color-based particle filter since it requires the knowledge of the histograms of the regions surrounding each hypothesized target position. We expect this approach to perform faster in an SIMD processor than an implementation in a standard desktop computer even running at much lower clock speeds.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"193 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121191365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Codomain scale space and regularization for high angular resolution diffusion imaging
Pub Date : 2008-06-23, DOI: 10.1109/CVPRW.2008.4562967
L. Florack
Regularization is an important aspect of high angular resolution diffusion imaging (HARDI), since, unlike in classical diffusion tensor imaging (DTI), there is no a priori regularity of the raw data in the codomain, i.e., considered as a multispectral signal for fixed spatial position. HARDI preprocessing is therefore a crucial step prior to any subsequent analysis, and some insight into regularization paradigms and their interrelations is indispensable. In this paper we posit a codomain scale space regularization paradigm that has hitherto not been applied in the context of HARDI. Unlike previous (first and second order) schemes, it is based on infinite order regularization, yet can be fully operationalized. We furthermore establish a closed-form relation with first order Tikhonov regularization via the Laplace transform.
{"title":"Codomain scale space and regularization for high angular resolution diffusion imaging","authors":"L. Florack","doi":"10.1109/CVPRW.2008.4562967","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4562967","url":null,"abstract":"Regularization is an important aspect in high angular resolution diffusion imaging (HARDI), since, unlike with classical diffusion tensor imaging (DTI), there is no a priori regularity of raw data in the co-domain, i.e. considered as a multispectral signal for fixed spatial position. HARDI preprocessing is therefore a crucial step prior to any subsequent analysis, and some insight in regularization paradigms and their interrelations is compulsory. In this paper we posit a codomain scale space regularization paradigm that has hitherto not been applied in the context of HARDI. Unlike previous (first and second order) schemes it is based on infinite order regularization, yet can be fully operationalized. We furthermore establish a closed-form relation with first order Tikhonov regularization via the Laplace transform.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121212919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rotational flows for interpolation between sampled surfaces
Pub Date : 2008-06-23, DOI: 10.1109/CVPRW.2008.4563017
J. Levy, M. Foskey, S. Pizer
We introduce a locally defined shape-maintaining method for interpolating between corresponding oriented samples (vertices) from a pair of surfaces. We have applied this method to interpolate synthetic data sets in two and three dimensions and to interpolate medially represented shape models of anatomical objects in three dimensions. In the plane, each oriented vertex follows a circular arc as if it were rotating to its destination. In three dimensions, each oriented vertex moves along a helical path that combines in-plane rotation with translation along the axis of rotation. We show that our planar method provides shape-maintaining interpolations when the reference and target objects are similar. Moreover, the interpolations are size-maintaining when the reference and target objects are congruent. In three dimensions, similar objects are interpolated by an affine transformation. We use measurements of the fractional anisotropy of such global affine transformations to demonstrate that our method is generally more shape-preserving than the alternative of interpolating vertices along linear paths irrespective of changes in orientation. In both two and three dimensions we have experimental evidence that when non-shape-preserving deformations are applied to template shapes, the interpolation tends to be visually satisfying, with each intermediate object appearing to belong to the same class of objects as the endpoints.
{"title":"Rotational flows for interpolation between sampled surfaces","authors":"J. Levy, M. Foskey, S. Pizer","doi":"10.1109/CVPRW.2008.4563017","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563017","url":null,"abstract":"We introduce a locally defined shape-maintaining method for interpolating between corresponding oriented samples (vertices) from a pair of surfaces. We have applied this method to interpolate synthetic data sets in two and three dimensions and to interpolate medially represented shape models of anatomical objects in three dimensions. In the plane, each oriented vertex follows a circular arc as if it was rotating to its destination. In three dimensions, each oriented vertex moves along a helical path that combines in-plane rotation with translation along the axis of rotation. We show that our planar method provides shape-maintaining interpolations when the reference and target objects are similar. Moreover, the interpolations are size maintaining when the reference and target objects are congruent. In three dimensions, similar objects are interpolated by an affine transformation. We use measurements of the fractional anisotropy of such global affine transformations to demonstrate that our method is generally more-shape preserving than the alternative of interpolating vertices along linear paths irrespective of changes in orientation. In both two and three dimensions we have experimental evidence that when non-shape-preserving deformations are applied to template shapes, the interpolation tends to be visually satisfying with each intermediate object appearing to belong to the same class of objects as the end points.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121225081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}