We introduce the term cosegmentation, which denotes the task of simultaneously segmenting the common parts of an image pair. A generative model for cosegmentation is presented. Inference in the model leads to minimizing an energy with an MRF term encoding spatial coherency and a global constraint which attempts to match the appearance histograms of the common parts. This energy has not been proposed previously, and its optimization is challenging and NP-hard. For this problem we present a novel optimization scheme, which we call trust region graph cuts. We demonstrate that this framework has the potential to improve a wide range of research: object-driven image retrieval, video tracking and segmentation, and interactive image editing. The power of the framework lies in its generality: the common part can be a rigid or non-rigid object (or scene), observed from different viewpoints, or even similar objects of the same class.
{"title":"Cosegmentation of Image Pairs by Histogram Matching - Incorporating a Global Constraint into MRFs","authors":"C. Rother, T. Minka, A. Blake, V. Kolmogorov","doi":"10.1109/CVPR.2006.91","DOIUrl":"https://doi.org/10.1109/CVPR.2006.91","url":null,"abstract":"We introduce the term cosegmentation which denotes the task of segmenting simultaneously the common parts of an image pair. A generative model for cosegmentation is presented. Inference in the model leads to minimizing an energy with an MRF term encoding spatial coherency and a global constraint which attempts to match the appearance histograms of the common parts. This energy has not been proposed previously and its optimization is challenging and NP-hard. For this problem a novel optimization scheme which we call trust region graph cuts is presented. We demonstrate that this framework has the potential to improve a wide range of research: Object driven image retrieval, video tracking and segmentation, and interactive image editing. The power of the framework lies in its generality, the common part can be a rigid/non-rigid object (or scene), observed from different viewpoints or even similar objects of the same class.","PeriodicalId":421737,"journal":{"name":"2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129177797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The RVM-based learning method for whole-body pose estimation proposed by Agarwal and Triggs is adapted to hand pose recovery. To help overcome the difficulties presented by the greater degree of self-occlusion and the wider range of poses exhibited in hand imagery, the adaptation introduces a method for combining multiple views. Comparisons of performance using single versus multiple views are reported for both synthesized and real imagery, and the effects of the number of image measurements and the number of training samples on performance are explored.
{"title":"Regression-based Hand Pose Estimation from Multiple Cameras","authors":"T. D. Campos, D. W. Murray","doi":"10.1109/CVPR.2006.252","DOIUrl":"https://doi.org/10.1109/CVPR.2006.252","url":null,"abstract":"The RVM-based learning method for whole body pose estimation proposed by Agarwal and Triggs is adapted to hand pose recovery. To help overcome the difficulties presented by the greater degree of self-occlusion and the wider range of poses exhibited in hand imagery, the adaptation proposes a method for combining multiple views. Comparisons of performance using single versus multiple views are reported for both synthesized and real imagery, and the effects of the number of image measurements and the number of training samples on performance are explored.","PeriodicalId":421737,"journal":{"name":"2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123884748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The photorealistic modeling of large-scale scenes, such as urban structures, requires a fusion of range sensing technology and traditional digital photography. This paper presents a system that integrates multiview geometry and automated 3D registration techniques for texture mapping 2D images onto 3D range data. The 3D range scans and the 2D photographs are respectively used to generate a pair of 3D models of the scene. The first model consists of a dense 3D point cloud, produced using a 3D-to-3D registration method that matches 3D lines in the range images. The second model consists of a sparse 3D point cloud, produced by applying a multiview geometry (structure-from-motion) algorithm directly to a sequence of 2D photographs. This paper introduces a novel algorithm for automatically recovering the rotation, scale, and translation that best align the dense and sparse models. This alignment is necessary to enable the photographs to be optimally texture mapped onto the dense model. The contribution of this work is that it merges the benefits of multiview geometry with automated registration of 3D range scans to produce photorealistic models with minimal human interaction. We present results from experiments in large-scale urban scenes.
{"title":"Multiview Geometry for Texture Mapping 2D Images Onto 3D Range Data","authors":"Lingyun Liu, G. Yu, G. Wolberg, Siavash Zokai","doi":"10.1109/CVPR.2006.204","DOIUrl":"https://doi.org/10.1109/CVPR.2006.204","url":null,"abstract":"The photorealistic modeling of large-scale scenes, such as urban structures, requires a fusion of range sensing technology and traditional digital photography. This paper presents a system that integrates multiview geometry and automated 3D registration techniques for texture mapping 2D images onto 3D range data. The 3D range scans and the 2D photographs are respectively used to generate a pair of 3D models of the scene. The first model consists of a dense 3D point cloud, produced by using a 3D-to-3D registration method that matches 3D lines in the range images. The second model consists of a sparse 3D point cloud, produced by applying a multiview geometry (structure-from-motion) algorithm directly on a sequence of 2D photographs. This paper introduces a novel algorithm for automatically recovering the rotation, scale, and translation that best aligns the dense and sparse models. This alignment is necessary to enable the photographs to be optimally texture mapped onto the dense model. The contribution of this work is that it merges the benefits of multiview geometry with automated registration of 3D range scans to produce photorealistic models with minimal human interaction. We present results from experiments in large-scale urban scenes.","PeriodicalId":421737,"journal":{"name":"2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)","volume":"68 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120873723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
While crowds of various subjects may offer application-specific cues to detect individuals, we demonstrate that for the general case, motion itself contains more information than previously exploited. This paper describes an unsupervised, data-driven Bayesian clustering algorithm whose primary goal is the detection of individual entities. We track simple image features and probabilistically group them into clusters representing independently moving entities. The number of clusters and the grouping of constituent features are determined without supervised learning or any subject-specific model. Instead, the new approach uses space-time proximity and trajectory coherence through image space as the only probabilistic criteria for clustering. An important contribution of this work is how these criteria are used to perform one-shot data association without iterating through combinatorial hypotheses of cluster assignments. Our proposed general detection algorithm can be augmented with subject-specific filtering, but is shown to already be effective at detecting individual entities in crowds of people, insects, and animals. This paper and the associated video examine the implementation and experiments of our motion clustering framework.
{"title":"Unsupervised Bayesian Detection of Independent Motion in Crowds","authors":"G. Brostow, R. Cipolla","doi":"10.1109/CVPR.2006.320","DOIUrl":"https://doi.org/10.1109/CVPR.2006.320","url":null,"abstract":"While crowds of various subjects may offer applicationspecific cues to detect individuals, we demonstrate that for the general case, motion itself contains more information than previously exploited. This paper describes an unsupervised data driven Bayesian clustering algorithm which has detection of individual entities as its primary goal. We track simple image features and probabilistically group them into clusters representing independently moving entities. The numbers of clusters and the grouping of constituent features are determined without supervised learning or any subject-specific model. The new approach is instead, that space-time proximity and trajectory coherence through image space are used as the only probabilistic criteria for clustering. An important contribution of this work is how these criteria are used to perform a one-shot data association without iterating through combinatorial hypotheses of cluster assignments. Our proposed general detection algorithm can be augmented with subject-specific filtering, but is shown to already be effective at detecting individual entities in crowds of people, insects, and animals. This paper and the associated video examine the implementation and experiments of our motion clustering framework.","PeriodicalId":421737,"journal":{"name":"2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116506599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
There are various situations where image data is binary: character recognition, results of image segmentation, etc. As a first contribution, we compare Gaussian-based principal component analysis (PCA), which is often used to model images, with "binary PCA", which models binary data more naturally using Bernoulli distributions. Furthermore, we address the problem of data alignment. Image data is often perturbed by global transformations such as shifts, rotations, and scalings. In such cases the data needs to be transformed to some canonical aligned form. As a second contribution, we extend binary PCA to the "transformation-invariant mixture of binary PCAs", which simultaneously corrects the data for a set of global transformations and learns the binary PCA model on the aligned data.
{"title":"Transformation invariant component analysis for binary images","authors":"Zoran Zivkovic, Jakob Verbeek","doi":"10.1109/CVPR.2006.316","DOIUrl":"https://doi.org/10.1109/CVPR.2006.316","url":null,"abstract":"There are various situations where image data is binary: character recognition, result of image segmentation etc. As a first contribution, we compare Gaussian based principal component analysis (PCA), which is often used to model images, and \"binary PCA\" which models the binary data more naturally using Bernoulli distributions. Furthermore, we address the problem of data alignment. Image data is often perturbed by some global transformations such as shifting, rotation, scaling etc. In such cases the data needs to be transformed to some canonical aligned form. As a second contribution, we extend the binary PCA to the \"transformation invariant mixture of binary PCAs\" which simultaneously corrects the data for a set of global transformations and learns the binary PCA model on the aligned data.","PeriodicalId":421737,"journal":{"name":"2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113967573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper is about mapping images to continuous output spaces using powerful Bayesian learning techniques. A sparse, semi-supervised Gaussian process regression model (S3GP) is introduced which learns a mapping using only partially labelled training data. We show that sparsity bestows efficiency on the S3GP, which requires minimal CPU utilization for real-time operation; that the predictions of uncertainty made by the S3GP are more accurate than those of other models, leading to considerable performance improvements when combined with a probabilistic filter; and that the ability to learn from semi-supervised data simplifies the process of collecting training data. The S3GP uses a mixture of different image features: this is also shown to improve the accuracy and consistency of the mapping. A major application of this work is its use as a gaze tracking system in which images of a human eye are mapped to screen coordinates; in this capacity our approach is efficient, accurate and versatile.
{"title":"Sparse and Semi-supervised Visual Mapping with the S^3GP","authors":"Oliver Williams, A. Blake, R. Cipolla","doi":"10.1109/CVPR.2006.285","DOIUrl":"https://doi.org/10.1109/CVPR.2006.285","url":null,"abstract":"This paper is about mapping images to continuous output spaces using powerful Bayesian learning techniques. A sparse, semi-supervised Gaussian process regression model (S3GP) is introduced which learns a mapping using only partially labelled training data. We show that sparsity bestows efficiency on the S3GP which requires minimal CPU utilization for real-time operation; the predictions of uncertainty made by the S3GP are more accurate than those of other models leading to considerable performance improvements when combined with a probabilistic filter; and the ability to learn from semi-supervised data simplifies the process of collecting training data. The S3GP uses a mixture of different image features: this is also shown to improve the accuracy and consistency of the mapping. A major application of this work is its use as a gaze tracking system in which images of a human eye are mapped to screen coordinates: in this capacity our approach is efficient, accurate and versatile.","PeriodicalId":421737,"journal":{"name":"2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114806731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we propose a sequential approach to hallucinate/synthesize high-resolution images of multiple facial expressions. We propose a multi-resolution tensor for super-resolution and decompose facial expression images into small local patches. We build a multi-resolution patch tensor across different facial expressions. By unifying the identity parameters and learning the subspace mappings across different resolutions and expressions, we reduce facial expression hallucination to a problem of parameter recovery in a patch tensor space. We further add a high-frequency component residue using nonparametric patch learning from high-resolution training data. We integrate the sequential statistical modelling into a Bayesian framework, so that given any low-resolution facial image of a single expression, we are able to synthesize multiple facial expression images in high resolution. We show promising experimental results on both a facial expression database and live video sequences.
{"title":"Multi-Resolution Patch Tensor for Facial Expression Hallucination","authors":"K. Jia, S. Gong","doi":"10.1109/CVPR.2006.196","DOIUrl":"https://doi.org/10.1109/CVPR.2006.196","url":null,"abstract":"In this paper, we propose a sequential approach to hallucinate/ synthesize high-resolution images of multiple facial expressions. We propose an idea of multi-resolution tensor for super-resolution, and decompose facial expression images into small local patches. We build a multi-resolution patch tensor across different facial expressions. By unifying the identity parameters and learning the subspace mappings across different resolutions and expressions, we simplify the facial expression hallucination as a problem of parameter recovery in a patch tensor space. We further add a high-frequency component residue using nonparametric patch learning from high-resolution training data. We integrate the sequential statistical modelling into a Bayesian framework, so that given any low-resolution facial image of a single expression, we are able to synthesize multiple facial expression images in high-resolution. We show promising experimental results from both facial expression database and live video sequences.","PeriodicalId":421737,"journal":{"name":"2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127752067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lambert's model for diffuse reflection is a central assumption in most of the shape-from-shading (SFS) literature. Even with this simplified model, SFS remains a difficult problem. Moreover, Lambert's model has been shown to be an inaccurate approximation of the diffuse component of surface reflectance. In this paper, we propose a new solution to the SFS problem based on a more comprehensive diffuse reflectance model: the Oren and Nayar model. In this work, we slightly modify this more realistic model in order to take into account the attenuation of the illumination due to distance. Using the modified non-Lambertian reflectance, we design a new explicit partial differential equation (PDE) and then solve it using the Lax-Friedrichs sweeping method. Our experiments on synthetic data show that the proposed modeling gives a unique solution without any information about the height at the singular points of the surface. Additional results on real data are presented to show the efficiency of the proposed method. To the best of our knowledge, this is the first non-Lambertian SFS formulation that eliminates the concave/convex ambiguity, which is a well-known problem in SFS.
{"title":"A New Formulation for Shape from Shading for Non-Lambertian Surfaces","authors":"Abdelrehim H. Ahmed, A. Farag","doi":"10.1109/CVPR.2006.35","DOIUrl":"https://doi.org/10.1109/CVPR.2006.35","url":null,"abstract":"Lambert’s model for diffuse reflection is a main assumption in most of shape from shading (SFS) literature. Even with this simplified model, the SFS is still a difficult problem. Nevertheless, Lambert’s model has been proven to be an inaccurate approximation of the diffuse component of the surface reflectance. In this paper, we propose a new solution of the SFS problem based on a more comprehensive diffuse reflectance model: the Oren and Nayar model. In this work, we slightly modify this more realistic model in order to take into account the attenuation of the illumination due to distance. Using the modified non-Lambertian reflectance, we design a new explicit Partial Differential Equation (PDE) and then solve it using Lax-Friedrichs Sweeping method. Our experiments on synthetic data show that the proposed modeling gives a unique solution without any information about the height at the singular points of the surface. Additional results for real data are presented to show the efficiency of the proposed method . To the best of our knowledge, this is the first non-Lambertian SFS formulation that eliminates the concave/convex ambiguity which is a well known problem in SFS.","PeriodicalId":421737,"journal":{"name":"2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128146936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identifying or matching the surface color of a moving object in surveillance video is critical for achieving reliable object tracking and searching. Traditional color models provide little help, since the surface of an object is usually not flat, the object's motion can alter the surface's orientation, and the lighting conditions can vary when the object moves. To tackle this research problem, we conduct extensive data mining on video clips collected under various lighting conditions and at various distances from several video cameras. We observe how each of the eleven culture colors can drift in the color space when an object's surface is in motion. In the color space, we then learn the drift pattern of each culture color for classifying unseen surface colors. Finally, we devise a distance function taking color drift into consideration to perform color identification and matching. Empirical studies show our approach to be very promising, achieving over 95% color-prediction accuracy.
{"title":"Identifying Color in Motion in Video Sensors","authors":"Gang Wu, Amir M. Rahimi, E. Chang, Kingshy Goh, Tomy Tsai, Ankur Jain, Yuan-fang Wang","doi":"10.1109/CVPR.2006.139","DOIUrl":"https://doi.org/10.1109/CVPR.2006.139","url":null,"abstract":"Identifying or matching the surface color of a moving object in surveillance video is critical for achieving reliable object-tracking and searching. Traditional color models provide little help, since the surface of an object is usually not flat, the object’s motion can alter the surface’s orientation, and the lighting conditions can vary when the object moves. To tackle this research problem, we conduct extensive data mining on video clips collected under various lighting conditions and distances from several video-cameras. We observe how each of the eleven culture colors can drift in the color space when an object’s surface is in motion. In the color space, we then learn the drift pattern of each culture color for classifying unseen surface colors. Finally, we devise a distance function taking color drift into consideration to perform color identification and matching. Empirical studies show our approach to be very promising: achieving over 95% color-prediction accuracy.","PeriodicalId":421737,"journal":{"name":"2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125631551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we address the problem of 3D volume reconstruction from depth-adjacent sub-volumes (i.e., sets of image frames) acquired using a confocal laser scanning microscope (CLSM). Our goal is to align sub-volumes by estimating an optimal global image transformation which preserves the morphological smoothness of medical structures (called features, e.g., blood vessels) inside a reconstructed 3D volume. We approach the problem by learning morphological characteristics of the structures inside each sub-volume, i.e., the centroid trajectories of features. Next, adjacent sub-volumes are aligned by fusing the morphological characteristics of the structures using extrapolation or model fitting. Finally, a global sub-volume-to-sub-volume transformation is computed based on the entire set of fused structures. The trajectory-based 3D volume reconstruction method described here is evaluated on a pair of consecutive physical sections using two evaluation metrics for morphological continuity.
{"title":"Three-Dimensional Volume Reconstruction Based on Trajectory Fusion from Confocal Laser Scanning Microscope Images","authors":"Sang-chul Lee, P. Bajcsy","doi":"10.1109/CVPR.2006.308","DOIUrl":"https://doi.org/10.1109/CVPR.2006.308","url":null,"abstract":"In this paper, we address the problem of 3D volume reconstruction from depth adjacent subvolumes (i.e., sets of image frames) acquired using a confocal laser scanning microscope (CLSM). Our goal is to align sub-volumes by estimating an optimal global image transformation which preserves morphological smoothness of medical structures (called features, e.g., blood vessels) inside of a reconstructed 3D volume. We approached the problem by learning morphological characteristics of structures inside of each sub-volume, i.e. centroid trajectories of features. Next, adjacent sub-volumes are aligned by fusing the morphological characteristics of structures using extrapolation or model fitting. Finally, a global sub-volume to subvolume transformation is computed based on the entire set of fused structures. The trajectory-based 3D volume reconstruction method described here is evaluated with a pair of consecutive physical sections using two evaluation metrics for morphological continu","PeriodicalId":421737,"journal":{"name":"2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122251067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}