A Dynamic Bayesian Network Model for Autonomous 3D Reconstruction from a Single Indoor Image
E. Delage, Honglak Lee, A. Ng
doi:10.1109/CVPR.2006.23

When we look at a picture, our prior knowledge about the world allows us to resolve some of the ambiguities that are inherent to monocular vision, and thereby infer 3D information about the scene. We also recognize different objects, decide on their orientations, and identify how they are connected to their environment. Focusing on the problem of autonomous 3D reconstruction of indoor scenes, in this paper we present a dynamic Bayesian network model capable of resolving some of these ambiguities and recovering 3D information for many images. Our model assumes a "floor-wall" geometry for the scene and is trained to recognize the floor-wall boundary in each column of the image. When the image is produced under perspective geometry, we show that this model can be used for 3D reconstruction from a single image. To our knowledge, this is the first monocular approach to automatically recover 3D reconstructions from single indoor images.

Polarization-based Surface Reconstruction via Patch Matching
G. Atkinson, E. Hancock
doi:10.1109/CVPR.2006.226

A new method for multiple-viewpoint 3D shape reconstruction is presented that relies on the polarization properties of surface reflection. The method is intended to complement existing stereo techniques by establishing correspondence for surfaces without salient features. The phase and degree of polarization from two views of an object are used to reconstruct surface patches. Local surface properties are then used to both align these patches and to compute a cost for that alignment. This cost is used as a basis to establish correspondence between the two views. The method is tested on an object library comprising shapes of varying complexity and material. An accuracy assessment is also presented where real world data are compared to ground truth.

Real Time Localization and 3D Reconstruction
E. Mouragnon, M. Lhuillier, M. Dhome, F. Dekeyser, P. Sayd
doi:10.1109/CVPR.2006.236

In this paper we describe a method that estimates the motion of a calibrated camera (mounted on an experimental vehicle) and the three-dimensional geometry of the environment, using only a video input. Interest points are tracked and matched between frames at video rate, robust estimates of the camera motion are computed in real time, and key-frames are selected for 3D reconstruction of the features. The algorithm is particularly well suited to reconstructing long image sequences thanks to the introduction of a fast, local bundle adjustment that ensures both good accuracy and consistency of the estimated camera poses along the sequence, while greatly reducing computational complexity compared to a global bundle adjustment. Experiments on real data evaluate the speed and robustness of the method on a sequence about one kilometer long; results are also compared to ground truth measured with a differential GPS.

Fusion of Summation Invariants in 3D Human Face Recognition
Wei-Yang Lin, Kin-Chung Wong, N. Boston, Y. Hu
doi:10.1109/CVPR.2006.124

A novel family of 2D and 3D geometrically invariant features, called summation invariants, is proposed for the recognition of the 3D surface of human faces. Focusing on a rectangular region surrounding the nose of a 3D facial depth map, a subset of the so-called semi-local summation invariant features is extracted. Then the similarity between a pair of 3D facial depth maps is computed to determine whether they belong to the same person. Out of many possible combinations of this set of features, we select, through careful experimentation, a subset that yields the best combined performance. Tested with the 3D facial data from the ongoing Face Recognition Grand Challenge v1.0 dataset, the proposed features exhibit significant performance improvement over the baseline algorithm distributed with the dataset.

Reconstructing Occluded Surfaces Using Synthetic Apertures: Stereo, Focus and Robust Measures
V. Vaish, M. Levoy, R. Szeliski, C. L. Zitnick, S. B. Kang
doi:10.1109/CVPR.2006.244

Most algorithms for 3D reconstruction from images use cost functions based on the sum of squared differences (SSD), which assume that the surfaces being reconstructed are visible to all cameras. This makes it difficult to reconstruct objects which are partially occluded. Recently, researchers working with large camera arrays have shown it is possible to "see through" occlusions using a technique called synthetic aperture focusing. This suggests that we can design alternative cost functions that are robust to occlusions using synthetic apertures. Our paper explores this design space. We compare classical shape from stereo with shape from synthetic aperture focus. We also describe two variants of multi-view stereo based on color medians and entropy that increase robustness to occlusions. We present an experimental comparison of these cost functions on complex light fields, measuring their accuracy against the amount of occlusion.

A Mean Field EM-algorithm for Coherent Occlusion Handling in MAP-Estimation Problems
R. Fransens, C. Strecha, L. Gool
doi:10.1109/CVPR.2006.31

This paper presents a generative-model-based approach to occlusions in vision problems that can be formulated as MAP-estimation problems. The approach is generic and targets applications in diverse domains such as model-based object recognition, depth-from-stereo and image registration. It relies on a probabilistic imaging model in which visible regions and occlusions are generated by two separate processes. The partitioning into visible and occluded regions is made explicit by the introduction of a hidden binary visibility map which, to account for the coherent nature of occlusions, is modelled as a Markov random field. Inference is made tractable by a mean field EM algorithm, which alternates between estimation of visibility and optimisation of model parameters. We demonstrate the effectiveness of the approach with two examples. First, in an N-view stereo experiment, we compute a dense depth map of a scene that is contaminated by multiple occluding objects. Second, in a 2D face recognition experiment, we identify people from partially occluded facial images.

Learning Semantic Patterns with Discriminant Localized Binary Projections
Shuicheng Yan, Tianqiang Yuan, Xiaoou Tang
doi:10.1109/CVPR.2006.173

In this paper, we present a novel approach to learning semantic localized patterns with binary projections in a supervised manner. The pursuit of these binary projections is reformulated into a problem of feature clustering, which optimizes the separability of different classes by taking the members within each cluster as the nonzero entries of a projection vector. An efficient greedy procedure is proposed to incrementally combine the sub-clusters while ensuring the cardinality constraints of the projections and the increase of the objective function. Compared with other algorithms for sparse representations, our proposed algorithm, referred to as Discriminant Localized Binary Projections (dlb), has the following characteristics: 1) dlb is supervised, hence is much more effective than unsupervised sparse algorithms like Non-negative Matrix Factorization (NMF) in terms of classification power; 2) similar to NMF, dlb can derive spatially localized sparse bases; furthermore, the sparsity of dlb is controllable, and an interesting result is that the bases have explicit semantics in human perception, like eyes and mouth; and 3) classification with dlb is extremely efficient, and only addition operations are required for dimensionality reduction. Extensive experimental results show significant improvements of dlb in sparsity and face recognition accuracy in comparison to the state-of-the-art algorithms for dimensionality reduction and sparse representations.

Shape Representation based on Integral Kernels: Application to Image Matching and Segmentation
Byung-Woo Hong, E. Prados, Stefano Soatto, L. Vese
doi:10.1109/CVPR.2006.277

This paper presents a shape representation and a variational framework for the construction of diffeomorphisms that establish "meaningful" correspondences between images, in that they preserve the local geometry of singularities such as region boundaries. At the same time, the shape representation allows enforcing shape information locally in determining such region boundaries. Our representation is based on a kernel descriptor that characterizes local shape. This shape descriptor is robust to noise and forms a scale-space in which an appropriate scale can be chosen depending on the size of features of interest in the scene. In order to preserve local shape during the matching procedure, we introduce a novel constraint to traditional energy-based approaches for estimating diffeomorphic deformations, and enforce it in a variational framework.

Covariance Tracking using Model Update Based on Lie Algebra
F. Porikli, Oncel Tuzel, P. Meer
doi:10.1109/CVPR.2006.94

We propose a simple and elegant algorithm to track nonrigid objects using a covariance-based object description and a Lie-algebra-based update mechanism. We represent an object window as the covariance matrix of features; we therefore capture the spatial and statistical properties, as well as their correlation, within the same representation. The covariance matrix enables efficient fusion of different types of features and modalities, and its dimensionality is small. We incorporate a model update algorithm using the Lie group structure of the positive definite matrices. The update mechanism effectively adapts to ongoing object deformations and appearance changes. The covariance tracking method makes no assumption about the measurement noise or the motion of the tracked objects, and provides the globally optimal solution. We show that it is capable of accurately detecting nonrigid, moving objects in non-stationary camera sequences while achieving a promising detection rate of 97.4 percent.

Tracking of Multiple, Partially Occluded Humans based on Static Body Part Detection
Bo Wu, R. Nevatia
doi:10.1109/CVPR.2006.312

Tracking of humans in video is important for many applications. A major source of difficulty is inter-human or scene occlusion. We present an approach based on representing humans as an assembly of four body parts, with the body parts detected in single frames, which makes the method insensitive to camera motion. The responses of the body part detectors and of a combined human detector provide the "observations" used for tracking. Trajectory initialization and termination are both fully automatic and rely on confidences computed from the detection responses. An object is tracked by data association if a corresponding detection response can be found; otherwise it is tracked by a mean-shift style tracker. Our method can track humans through both inter-object and scene occlusions. The system is evaluated on three sets of videos and compared with a previous method.