We present a novel discriminative-generative hybrid approach in this paper, with emphasis on application to multiview object detection. Our method includes a novel generative model called the Random Attributed Relational Graph (RARG), which is able to capture the structural and appearance characteristics of parts extracted from objects. We develop new variational learning methods to compute an approximation of the detection likelihood ratio function. The variational likelihood ratio function can be shown to be a linear combination of the individual generative classifiers defined at the nodes and edges of the RARG. This insight inspires us to replace the generative classifiers at nodes and edges with discriminative classifiers, such as support vector machines, to further improve detection performance. Our experiments show the robustness of the hybrid approach: the combined detection method incorporating the SVM-based discriminative classifiers yields superior detection performance compared to prior work in multiview object detection.
{"title":"A Generative-Discriminative Hybrid Method for Multi-View Object Detection","authors":"Dongqing Zhang, Shih-Fu Chang","doi":"10.1109/CVPR.2006.27","DOIUrl":"https://doi.org/10.1109/CVPR.2006.27","url":null,"abstract":"We present a novel discriminative-generative hybrid approach in this paper, with emphasis on application in multiview object detection. Our method includes a novel generative model called Random Attributed Relational Graph (RARG) which is able to capture the structural and appearance characteristics of parts extracted from objects. We develop new variational learning methods to compute the approximation of the detection likelihood ratio function. The variaitonal likelihood ratio function can be shown to be a linear combination of the individual generative classifiers defined at nodes and edges of the RARG. Such insight inspires us to replace the generative classifiers at nodes and edges with discriminative classifiers, such as support vector machines, to further improve the detection performance. Our experiments have shown the robustness of the hybrid approach - the combined detection method incorporating the SVM-based discriminative classifiers yields superior detection performances compared to prior works in multiview object detection.","PeriodicalId":421737,"journal":{"name":"2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115082689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
It is a challenging task to accurately model the performance of a face recognition system and to predict its individual recognition results under various environments. This paper presents generic methods to model and predict face recognition performance based on analysis of similarity measurements. We first introduce the concept of "perfect recognition", which depends only on the intrinsic structure of a recognition system. A metric extracted from perfect recognition similarity scores (PRSS) allows modeling face recognition performance without empirical testing. This paper also presents an EM algorithm to predict the recognition rate of a query set. Furthermore, features are extracted from similarity scores to predict the recognition results of individual queries. The presented methods can select algorithm parameters offline, predict recognition performance online, and adjust face alignment online for better recognition. Experimental results show that the performance of recognition systems can be greatly improved using the presented methods.
{"title":"Performance Modeling and Prediction of Face Recognition Systems","authors":"Peng Wang, Q. Ji","doi":"10.1109/CVPR.2006.222","DOIUrl":"https://doi.org/10.1109/CVPR.2006.222","url":null,"abstract":"It is a challenging task to accurately model the performance of a face recognition system, and to predict its individual recognition results under various environments. This paper presents generic methods to model and predict the face recognition performance based on analysis of similarity measurement. We first introduce a concept of \"perfect recognition\", which only depends on the intrinsic structure of a recognition system. A metric extracted from perfect recognition similarity scores (PRSS) allows modeling the face recognition performance without empirical testing. This paper also presents an EM algorithm to predict the recognition rate of a query set. Furthermore, features are extracted from similarity scores to predict recognition results of individual queries. The presented methods can select algorithm parameters offline, predict recognition performance online, and adjust face alignment online for better recognition. Experimental results show that the performance of recognition systems can be greatly improved using presented methods.","PeriodicalId":421737,"journal":{"name":"2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116435029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a novel technique for the registration of 3D point clouds which makes very few assumptions: we avoid any manual rough alignment or the use of landmarks, displacement can be arbitrarily large, and the two point sets can have very little overlap. Crude alignment is achieved by estimation of the 3D rotation from two Extended Gaussian Images, even when the data sets inducing them have only partial overlap. The technique is based on the correlation of the two EGIs in the Fourier domain and makes use of the spherical and rotational harmonic transforms. For pairs with low overlap which fail a critical verification step, the rotational alignment can be obtained by the alignment of constellation images generated from the EGIs. Rotationally aligned sets are matched by correlation using the Fourier transform of volumetric functions. A fine alignment is acquired in the final step by running Iterative Closest Points for just a few iterations.
{"title":"Fully Automatic Registration of 3D Point Clouds","authors":"A. Makadia, Alexander Patterson, Kostas Daniilidis","doi":"10.1109/CVPR.2006.122","DOIUrl":"https://doi.org/10.1109/CVPR.2006.122","url":null,"abstract":"We propose a novel technique for the registration of 3D point clouds which makes very few assumptions: we avoid any manual rough alignment or the use of landmarks, displacement can be arbitrarily large, and the two point sets can have very little overlap. Crude alignment is achieved by estimation of the 3D-rotation from two Extended Gaussian Images even when the data sets inducing them have partial overlap. The technique is based on the correlation of the two EGIs in the Fourier domain and makes use of the spherical and rotational harmonic transforms. For pairs with low overlap which fail a critical verification step, the rotational alignment can be obtained by the alignment of constellation images generated from the EGIs. Rotationally aligned sets are matched by correlation using the Fourier transform of volumetric functions. A fine alignment is acquired in the final step by running Iterative Closest Points with just few iterations.","PeriodicalId":421737,"journal":{"name":"2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)","volume":"209 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123672940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we present a prototype video surveillance system that uses stationary-dynamic (or master-slave) camera assemblies to achieve wide-area surveillance and selective focus-of-attention. We address two critical issues in deploying such camera assemblies in real-world applications: off-line camera calibration and on-line selective focus-of-attention. Our contributions over existing techniques are twofold: (1) in terms of camera calibration, our technique calibrates all degrees-of-freedom (DOFs) of both stationary and dynamic cameras, using a closed-form solution that is both efficient and accurate, and (2) in terms of selective focus-of-attention, our technique correctly handles dynamic changes in the scene and varying object depths. This is a significant improvement over existing techniques that use an expensive and non-adaptable table-look-up process.
{"title":"Using Stationary-Dynamic Camera Assemblies for Wide-area Video Surveillance and Selective Attention","authors":"Ankur Jain, Dan Koppel, Kyle Kakligian, Yuan-fang Wang","doi":"10.1109/CVPR.2006.327","DOIUrl":"https://doi.org/10.1109/CVPR.2006.327","url":null,"abstract":"In this paper, we present a prototype video surveillance system that uses stationary-dynamic (or master-slave) camera assemblies to achieve wide-area surveillance and selective focus-of-attention. We address two critical issues in deploying such camera assemblies in real-world applications: off-line camera calibration and on-line selective focus-ofattention. Our contributions over existing techniques are twofold: (1) in terms of camera calibration, our technique calibrates all degrees-of-freedom (DOFs) of both stationary and dynamic cameras, using a closed-form solution that is both efficient and accurate, and (2) in terms of selective focus-of-attention, our technique correctly handles dynamic changes in the scene and varying object depths. This is a significant improvement over existing techniques that use an expensive and non-adaptable table-look-up process.","PeriodicalId":421737,"journal":{"name":"2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126173744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we investigate whether the 2.5D shape information delivered by a novel shape-from-shading algorithm can be used for illumination-insensitive face recognition. We present a robust and efficient facial shape-from-shading algorithm which uses principal geodesic analysis to model the variation in surface orientation across a face. We show how this algorithm can be used to recover accurate facial shape and albedo from real world images. Our second contribution is to use the recovered 2.5D shape information in a variety of recognition methods. We present a novel recognition strategy in which similarity is measured in the space of the principal geodesic parameters. We also use the recovered shape information to generate illumination normalised prototype images on which recognition can be performed. Finally we show that, from a single input image, we are able to generate the basis images employed by a number of well known illumination-insensitive recognition algorithms. We also demonstrate that the principal geodesics provide an efficient parameterisation of the space of harmonic basis images.
{"title":"Face Recognition using 2.5D Shape Information","authors":"W. Smith, E. Hancock","doi":"10.1109/CVPR.2006.117","DOIUrl":"https://doi.org/10.1109/CVPR.2006.117","url":null,"abstract":"In this paper we investigate whether the 2.5D shape information delivered by a novel shape-from-shading algorithm can be used for illumination insensitive face recognition. We present a robust and efficient facial shape-fromshading algorithm which uses principal geodesic analysis to model the variation in surface orientation across a face. We show how this algorithm can be used to recover accurate facial shape and albedo from real world images. Our second contribution is to use the recovered 2.5D shape information in a variety of recognition methods. We present a novel recognition strategy in which similarity is measured in the space of the principal geodesic parameters. We also use the recovered shape information to generate illumination normalised prototype images on which recognition can be performed. Finally we show that, from a single input image, we are able to generate the basis images employed by a number of well known illumination-insensitive recognition algorithms. We also demonstrate that the principal geodesics provide an efficient parameterisation of the space of harmonic basis images.","PeriodicalId":421737,"journal":{"name":"2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129702093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We address the problem of multiclass object detection. Our aims are to enable models for new categories to benefit from the detectors built previously for other categories, and for the complexity of the multiclass system to grow sublinearly with the number of categories. To this end we introduce a visual alphabet representation which can be learnt incrementally, and explicitly shares boundary fragments (contours) and spatial configurations (relation to centroid) across object categories. We develop a learning algorithm with the following novel contributions: (i) AdaBoost is adapted to learn jointly, based on shape features; (ii) a new learning schedule enables incremental additions of new categories; and (iii) the algorithm learns to detect objects (instead of categorizing images). Furthermore, we show that category similarities can be predicted from the alphabet. We obtain excellent experimental results on a variety of complex categories over several visual aspects. We show that the sharing of shape features not only reduces the number of features required per category, but also often improves recognition performance, as compared to individual detectors which are trained on a per-class basis.
{"title":"Incremental learning of object detectors using a visual shape alphabet","authors":"A. Opelt, A. Pinz, Andrew Zisserman","doi":"10.1109/CVPR.2006.153","DOIUrl":"https://doi.org/10.1109/CVPR.2006.153","url":null,"abstract":"We address the problem of multiclass object detection. Our aims are to enable models for new categories to benefit from the detectors built previously for other categories, and for the complexity of the multiclass system to grow sublinearly with the number of categories. To this end we introduce a visual alphabet representation which can be learnt incrementally, and explicitly shares boundary fragments (contours) and spatial configurations (relation to centroid) across object categories. We develop a learning algorithm with the following novel contributions: (i) AdaBoost is adapted to learn jointly, based on shape features; (ii) a new learning schedule enables incremental additions of new categories; and (iii) the algorithm learns to detect objects (instead of categorizing images). Furthermore, we show that category similarities can be predicted from the alphabet. We obtain excellent experimental results on a variety of complex categories over several visual aspects. We show that the sharing of shape features not only reduces the number of features required per category, but also often improves recognition performance, as compared to individual detectors which are trained on a per-class basis.","PeriodicalId":421737,"journal":{"name":"2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129917868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a new approach to modeling and classifying breast parenchymal tissue. Given a mammogram, we first discover the distribution of the different tissue densities in an unsupervised manner, and second, we use this tissue distribution to perform the classification. We achieve this using a classifier based on local descriptors and probabilistic Latent Semantic Analysis (pLSA), a generative model from the statistical text literature. We study the influence of different descriptors, such as texton and SIFT features, at the classification stage, showing that textons outperform SIFT in all cases. Moreover, we demonstrate that pLSA automatically extracts meaningful latent aspects, generating a compact tissue representation based on their densities that is useful for mammogram classification. We show the results of tissue classification over the MIAS and DDSM datasets, and we compare our method with previous approaches evaluated on these same datasets, showing the better performance of our proposal.
{"title":"Modeling and Classifying Breast Tissue Density in Mammograms","authors":"Anna Bosch, X. Muñoz, A. Oliver, J. Martí","doi":"10.1109/CVPR.2006.188","DOIUrl":"https://doi.org/10.1109/CVPR.2006.188","url":null,"abstract":"We present a new approach to model and classify breast parenchymal tissue. Given a mammogram, first, we will discover the distribution of the different tissue densities in an unsupervised manner, and second, we will use this tissue distribution to perform the classification. We achieve this using a classifier based on local descriptors and probabilistic Latent Semantic Analysis (pLSA), a generative model from the statistical text literature. We studied the influence of different descriptors like texture and SIFT features at the classification stage showing that textons outperform SIFT in all cases. Moreover we demonstrate that pLSA automatically extracts meaningful latent aspects generating a compact tissue representation based on their densities, useful for discriminating on mammogram classification. We show the results of tissue classification over the MIAS and DDSM datasets. We compare our method with approaches that classified these same datasets showing a better performance of our proposal.","PeriodicalId":421737,"journal":{"name":"2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129919509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper introduces a new representation for planar curves. From the well-known Dirichlet problem for a disk, the harmonic function embedded in a circular disk depends solely on specified boundary values and can be obtained from Poisson's integral formula. We derive a discrete version of Poisson's formula and assess its harmonic properties. Various shape signatures can be used as boundary values, whereas only the corresponding Fourier descriptors are needed for the framework. The proposed approach is similar to a scale space representation but exhibits greater generality by accommodating any type of shape signature. In addition, it is robust to noise and computationally efficient, and it is guaranteed to have a unique solution. In this paper, we demonstrate that the approach has strong potential for shape representation and matching applications.
{"title":"A Shape Representation for Planar Curves by Shape Signature Harmonic Embedding","authors":"Sang-Mook Lee, A. L. Abbott, Neil A. Clark, P. Araman","doi":"10.1109/CVPR.2006.40","DOIUrl":"https://doi.org/10.1109/CVPR.2006.40","url":null,"abstract":"This paper introduces a new representation for planar curves. From the well-known Dirichlet problem for a disk, the harmonic function embedded in a circular disk is solely dependent on specified boundary values and can be obtained from Poisson’s integral formula. We derive a discrete version of Poisson’s formula and assess its harmonic properties. Various shape signatures can be used as boundary values, whereas only the corresponding Fourier descriptors are needed for the framework. The proposed approach is similar to a scale space representation but exhibits greater generality by accommodating using any type of shape signature. In addition, it is robust to noise and computationally efficient, and it is guaranteed to have a unique solution. In this paper, we demonstrate that the approach has strong potential for shape representation and matching applications.","PeriodicalId":421737,"journal":{"name":"2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130056620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloth modeling and recognition is an important and challenging problem in both vision and graphics, arising in tasks such as dressed-human recognition and tracking, human sketching and portraiture. In this paper, we present a context-sensitive grammar in an And-Or graph representation which produces a large set of composite graphical templates to account for the wide variability of cloth configurations, such as T-shirts, jackets, etc. In a supervised learning phase, we ask an artist to draw sketches of a set of dressed people, and we decompose the sketches into categories of cloth and body components: collars, shoulders, cuffs, hands, pants, shoes, etc. Each component has a number of distinct sub-templates (sub-graphs). These sub-templates serve as leaf nodes in a large And-Or graph, where an And-node represents a decomposition of the graph into sub-configurations with Markov relations for context and constraints (soft or hard), and an Or-node is a switch for choosing one out of a set of alternative And-nodes (sub-configurations), similar to a node in a stochastic context-free grammar (SCFG). This representation integrates the SCFG for structural variability and the Markov (graphical) model for context. An algorithm which integrates bottom-up proposals and top-down information is proposed to infer the composite cloth template from the image.
{"title":"Composite Templates for Cloth Modeling and Sketching","authors":"Hong Chen, Zijian Xu, Ziqiang Liu, Song-Chun Zhu","doi":"10.1109/CVPR.2006.81","DOIUrl":"https://doi.org/10.1109/CVPR.2006.81","url":null,"abstract":"Cloth modeling and recognition is an important and challenging problem in both vision and graphics tasks, such as dressed human recognition and tracking, human sketch and portrait. In this paper, we present a context sensitive grammar in an And-Or graph representation which will produce a large set of composite graphical templates to account for the wide variabilities of cloth configurations, such as T-shirts, jackets, etc. In a supervised learning phase, we ask an artist to draw sketches on a set of dressed people, and we decompose the sketches into categories of cloth and body components: collars, shoulders, cuff, hands, pants, shoes etc. Each component has a number of distinct subtemplates (sub-graphs). These sub-templates serve as leafnodes in a big And-Or graph where an And-node represents a decomposition of the graph into sub-configurations with Markov relations for context and constraints (soft or hard), and an Or-node is a switch for choosing one out of a set of alternative And-nodes (sub-configurations) - similar to a node in stochastic context free grammar (SCFG). This representation integrates the SCFG for structural variability and the Markov (graphical) model for context. An algorithm which integrates the bottom-up proposals and the topdown information is proposed to infer the composite cloth template from the image.","PeriodicalId":421737,"journal":{"name":"2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132867556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In vision-based autonomous spacecraft docking, multiple views of the scene, captured with the same camera and scene geometry, are available under different lighting conditions. These "multiple-exposure" images must be processed to localize the visual features used to compute the pose of the target object. This paper describes a robust multi-channel edge detection algorithm that localizes the structure of the target object from the local gradient distribution computed over these multiple-exposure images. This approach reduces the effect of illumination variation, including the effect of shadow edges, compared with the use of a single image. Experiments demonstrate that this approach has a lower false detection rate than the average response of the Canny edge detector applied to the individual images separately.
{"title":"A Multi-Channel Algorithm for Edge Detection Under Varying Lighting","authors":"W. Xu, M. Jenkin, Y. Lespérance","doi":"10.1109/CVPR.2006.33","DOIUrl":"https://doi.org/10.1109/CVPR.2006.33","url":null,"abstract":"In vision-based autonomous spacecraft docking multiple views of scene structure captured with the same camera and scene geometry is available under different lighting conditions. These \"multiple-exposure\" images must be processed to localize visual features to compute the pose of the target object. This paper describes a robust multi-channel edge detection algorithm that localizes the structure of the target object from the local gradient distribution computed over these multiple-exposure images. This approach reduces the effect of the illumination variation including the effect of shadow edges over the use of a single image. Experiments demonstrate that this approach has a lower false detection rate than the average response of the Canny edge detector applied to the individual images separately.","PeriodicalId":421737,"journal":{"name":"2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130599524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}