Robust detection of semantically equivalent visually dissimilar objects
Pub Date: 2008-06-23 · DOI: 10.1109/CVPRW.2008.4563038
T. Goh, Ryan West, K. Okada
We propose a novel and robust method for detecting semantically equivalent but visually dissimilar object parts in the presence of geometric domain variations. The presented algorithms follow the part-based object learning and recognition framework proposed by Epshtein and Ullman, which characterizes the location of a visually dissimilar object part (i.e., the root fragment) as a function of its geometrical configuration relative to a set of local context patches (i.e., context fragments). This work extends the original detection algorithm to handle more realistic geometric domain variation by using robust candidate generation that exploits the geometric invariances of pairs of similar polygons, as well as SIFT-based context descriptors. Entropic feature selection is also integrated to improve performance. Furthermore, robust voting in a maximum-density framework is realized by variable-bandwidth mean shift, allowing better root detection in the presence of significant errors in detecting corresponding context fragments. We evaluate the proposed solution on the task of detecting various facial parts using the FERET database. Our experimental results demonstrate the advantage of our solution, indicating a significant improvement in detection performance and robustness over the original system.
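As a rough illustration of the voting stage, the sketch below runs a variable-bandwidth mean shift over 2D candidate root locations and returns the density mode. The Gaussian kernel, the per-vote bandwidths, and all names are assumptions made for the sketch, not details taken from the paper.

```python
import numpy as np

def variable_bandwidth_mean_shift(votes, bandwidths, init, n_iter=50, tol=1e-4):
    """Seek the density mode of 2D votes, each with its own bandwidth.

    votes      : (N, 2) candidate root locations proposed by context fragments
    bandwidths : (N,)   per-vote bandwidths (larger = less reliable vote)
    init       : (2,)   starting point, e.g. the median of the votes
    """
    x = np.asarray(init, dtype=float)
    for _ in range(n_iter):
        d2 = np.sum((votes - x) ** 2, axis=1)                       # squared distances to votes
        w = np.exp(-0.5 * d2 / bandwidths ** 2) / bandwidths ** 2   # Gaussian vote weights
        x_new = (w[:, None] * votes).sum(axis=0) / w.sum()          # weighted mean = shift step
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Toy usage: 40 consistent votes around (100, 60) plus 10 gross outliers.
rng = np.random.default_rng(0)
good = rng.normal([100, 60], 2.0, size=(40, 2))
bad = rng.uniform(0, 200, size=(10, 2))
votes = np.vstack([good, bad])
bw = np.r_[np.full(40, 3.0), np.full(10, 20.0)]  # outliers get wide bandwidths
print(variable_bandwidth_mean_shift(votes, bw, init=np.median(votes, axis=0)))
```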
{"title":"Robust detection of semantically equivalent visually dissimilar objects","authors":"T. Goh, Ryan West, K. Okada","doi":"10.1109/CVPRW.2008.4563038","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563038","url":null,"abstract":"We propose a novel and robust detection of semantically equivalent but visually dissimilar object parts with the presence of geometric domain variations. The presented algorithms follow a part-based object learning and recognition framework proposed by Epshtein and Ullman. This approach characterizes the location of a visually dissimilar object (i.e., root fragment) as a function of its relative geometrical configuration to a set of local context patches (i.e., context fragments). This work extends the original detection algorithm for handling more realistic geometric domain variation by using robust candidate generation, exploiting geometric invariances of a pair of similar polygons, as well as SIFT-based context descriptors. An entropic feature selection is also integrated in order to improve its performance. Furthermore, robust voting in a maximum density framework is realized by variable bandwidth mean shift, allowing better root detection performance with the presence of significant errors in detecting corresponding context fragments. We evaluate the proposed solution for the task of detecting various facial parts using FERET database. Our experimental results demonstrate the advantage of our solution by indicating significant improvement of detection performance and robustness over the original system.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124949490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual detection of lintel-occluded doors from a single image
Pub Date: 2008-06-23 · DOI: 10.1109/CVPRW.2008.4563142
Zhichao Chen, Stan Birchfield
Doors are important landmarks for indoor mobile robot navigation. Most existing algorithms for door detection use range sensors or work in limited environments because of restricted assumptions about color, pose, or lighting. We present a vision-based door detection algorithm that achieves robustness by utilizing a variety of features, including color, texture, and intensity edges. We introduce two novel geometric features that increase performance significantly: concavity and the bottom-edge intensity profile. The features are combined using AdaBoost to ensure optimal linear weighting. On a large database of images collected in a wide variety of conditions, the algorithm achieves more than 90% detection with a low false positive rate. Additional experiments demonstrate the suitability of the algorithm for real-time applications using a mobile robot equipped with an off-the-shelf camera and laptop.
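For readers unfamiliar with the fusion step, the minimal sketch below shows boosting over decision stumps (scikit-learn's default weak learner for AdaBoostClassifier) producing a weighted vote over feature-wise tests. The synthetic feature matrix and labels stand in for door/non-door data and are not from the paper.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Synthetic stand-in: each row is a candidate door region described by a few
# weak feature scores (e.g. color match, texture, edge strength, concavity,
# bottom-edge profile); labels mark door (1) vs. non-door (0).
rng = np.random.default_rng(1)
X_door = rng.normal(loc=0.7, scale=0.2, size=(200, 5))
X_bg = rng.normal(loc=0.3, scale=0.2, size=(200, 5))
X = np.vstack([X_door, X_bg])
y = np.r_[np.ones(200), np.zeros(200)]

# AdaBoost's default weak learner is a depth-1 decision stump (a single
# feature threshold), so the final classifier is a weighted vote over
# per-feature tests, analogous to the linear weighting described above.
clf = AdaBoostClassifier(n_estimators=25)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```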
{"title":"Visual detection of lintel-occluded doors from a single image","authors":"Zhichao Chen, Stan Birchfield","doi":"10.1109/CVPRW.2008.4563142","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563142","url":null,"abstract":"Doors are important landmarks for indoor mobile robot navigation. Most existing algorithms for door detection use range sensors or work in limited environments because of restricted assumptions about color, pose, or lighting. We present a vision-based door detection algorithm that achieves robustness by utilizing a variety of features, including color, texture, and intensity edges. We introduce two novel geometric features that increase performance significantly: concavity and bottom-edge intensity profile. The features are combined using Adaboost to ensure optimal linear weighting. On a large database of images collected in a wide variety of conditions, the algorithm achieves more than 90% detection with a low false positive rate. Additional experiments demonstrate the suitability of the algorithm for real-time applications using a mobile robot equipped with an off-the-shelf camera and laptop.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125099960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Riemannian manifold optimisation for non-rigid structure from motion
Pub Date: 2008-06-23 · DOI: 10.1109/CVPRW.2008.4563071
Appu Shaji, S. Chandran
This paper addresses the problem of automatically extracting the 3D configurations of deformable objects from 2D features. Our focus in this work is to build on the observation that the subspace spanned by the motion parameters is a subset of a smooth manifold; we therefore search for the solution in this space rather than relying on heuristics, as attempted previously. We achieve this by attaching a canonical Riemannian metric and using a variant of the non-rigid factorisation algorithm for structure from motion. We show qualitatively and quantitatively that our algorithm produces better results than the state of the art.
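For background, the non-rigid factorisation step that such methods build on can be pictured as a rank-constrained SVD factorisation of the tracked-point measurement matrix (with K basis shapes, under an orthographic-camera assumption). The sketch below shows only that classical step, not the authors' Riemannian optimisation; all names and the toy data are illustrative.

```python
import numpy as np

def rank_constrained_factorization(W, K):
    """Low-rank factorisation step underlying non-rigid structure from motion.

    W : (2F, P) measurement matrix of P image points tracked over F frames
        (x-rows and y-rows stacked per frame).
    K : number of deformation basis shapes; the centred W has rank at most 3K.

    Returns motion and shape-basis factors M (2F, 3K) and B (3K, P), defined
    only up to an invertible 3K x 3K corrective transform.
    """
    W0 = W - W.mean(axis=1, keepdims=True)         # remove per-frame translation
    U, s, Vt = np.linalg.svd(W0, full_matrices=False)
    r = 3 * K
    M = U[:, :r] * np.sqrt(s[:r])                  # motion factor
    B = np.sqrt(s[:r])[:, None] * Vt[:r, :]        # shape-basis factor
    return M, B

# Toy usage: random low-rank "tracks" with a little noise.
F, P, K = 30, 50, 2
rng = np.random.default_rng(2)
W = rng.normal(size=(2 * F, 3 * K)) @ rng.normal(size=(3 * K, P))
W += 0.01 * rng.normal(size=W.shape)
M, B = rank_constrained_factorization(W, K)
print("reconstruction error:",
      np.linalg.norm(W - W.mean(axis=1, keepdims=True) - M @ B))
```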
{"title":"Riemannian manifold optimisation for non-rigid structure from motion","authors":"Appu Shaji, S. Chandran","doi":"10.1109/CVPRW.2008.4563071","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563071","url":null,"abstract":"This paper address the problem of automatically extracting the 3D configurations of deformable objects from 2D features. Our focus in this work is to build on the observation that the subspace spanned by the motion parameters is a subset of a smooth manifold, and therefore we hunt for the solution in this space, rather than use heuristics (as previously attempted earlier). We succeed in this by attaching a canonical Riemannian metric, and using a variant of the non-rigid factorisation algorithm for structure from motion. We qualitatively and quantitatively show that our algorithm produces better results when compared to the state of art.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"44 12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122430170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Remote and head-motion-free gaze tracking for real environments with automated head-eye model calibrations
Pub Date: 2008-06-23 · DOI: 10.1109/CVPRW.2008.4563184
H. Yamazoe, A. Utsumi, Tomoko Yonezawa, Shinji Abe
We propose a gaze estimation method that substantially relaxes the practical constraints imposed by most conventional methods. Gaze estimation research has a long history, and many systems, including some commercial schemes, have been proposed. However, the application domain of gaze estimation is still limited (e.g., measurement devices for HCI studies, input devices for VDT work) due to the limitations of such systems. First, users must be close to the system (or must wear it), since most systems employ IR illumination and/or stereo cameras. Second, users are required to perform manual calibration to obtain geometrically meaningful data. These limitations prevent applications that capture and utilize useful human gaze information in daily situations. In our method, inspired by a bundle adjustment framework, the parameters of a 3D head-eye model are robustly estimated by minimizing pixel-wise re-projection errors between single-camera input images and eye-model projections over multiple frames with adjacently estimated head poses. Since this process runs automatically, users do not need to be aware of it. Using the estimated parameters, 3D head poses and gaze directions for newly observed images can be determined directly in the same error-minimization manner. This mechanism enables robust gaze estimation from low-resolution single-camera images without user-aware preparation tasks (i.e., calibration). Experimental results show that the proposed method achieves 6° accuracy with QVGA (320 × 240) images. The proposed algorithm is independent of observation distance; we confirmed that our system works with long-distance observations (10 meters).
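The automatic calibration amounts to nonlinear least squares over re-projection errors. The toy sketch below fits a deliberately simplified pinhole model with scipy.optimize.least_squares purely to show that formulation; the parameterization and data are placeholders, not the paper's head-eye model.

```python
import numpy as np
from scipy.optimize import least_squares

def project(params, pts3d):
    """Toy pinhole projection with params = (f, tx, ty, tz).

    Stand-in for a head-eye model projection; only meant to illustrate the
    re-projection-error formulation, not the authors' model.
    """
    f, tx, ty, tz = params
    X = pts3d + np.array([tx, ty, tz])
    return f * X[:, :2] / X[:, 2:3]

def residuals(params, pts3d, observed2d):
    # Flattened pixel-wise re-projection errors over all observed points.
    return (project(params, pts3d) - observed2d).ravel()

# Synthetic "frames": known 3D model points observed at noisy 2D pixels.
rng = np.random.default_rng(3)
pts3d = rng.uniform([-1, -1, 4], [1, 1, 6], size=(100, 3))
true_params = np.array([800.0, 0.1, -0.05, 0.3])
observed = project(true_params, pts3d) + rng.normal(0, 0.5, size=(100, 2))

fit = least_squares(residuals, x0=[700.0, 0.0, 0.0, 0.0], args=(pts3d, observed))
print("estimated parameters:", fit.x)
```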
{"title":"Remote and head-motion-free gaze tracking for real environments with automated head-eye model calibrations","authors":"H. Yamazoe, A. Utsumi, Tomoko Yonezawa, Shinji Abe","doi":"10.1109/CVPRW.2008.4563184","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563184","url":null,"abstract":"We propose a gaze estimation method that substantially relaxes the practical constraints possessed by most conventional methods. Gaze estimation research has a long history, and many systems including some commercial schemes have been proposed. However, the application domain of gaze estimation is still limited (e.g, measurement devices for HCI issues, input devices for VDT works) due to the limitations of such systems. First, users must be close to the system (or must wear it) since most systems employ IR illumination and/or stereo cameras. Second, users are required to perform manual calibrations to get geometrically meaningful data. These limitations prevent applications of the system that capture and utilize useful human gaze information in daily situations. In our method, inspired by a bundled adjustment framework, the parameters of the 3D head-eye model are robustly estimated by minimizing pixel-wise re-projection errors between single-camera input images and eye model projections for multiple frames with adjacently estimated head poses. Since this process runs automatically, users does not need to be aware of it. Using the estimated parameters, 3D head poses and gaze directions for newly observed images can be directly determined with the same error minimization manner. This mechanism enables robust gaze estimation with single-camera-based low resolution images without user-aware preparation tasks (i.e., calibration). Experimental results show the proposed method achieves 6deg accuracy with QVGA (320 times 240) images. The proposed algorithm is free from observation distances. We confirmed that our system works with long-distance observations (10 meters).","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"189 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122475170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of combination methods utilizing T-normalization and second best score model
Pub Date: 2008-06-23 · DOI: 10.1109/CVPRW.2008.4563105
S. Tulyakov, Zhi Zhang, V. Govindaraju
The combination of biometric matching scores can be enhanced by taking into account the matching scores related to all enrolled persons, in addition to traditional combinations that utilize only the matching scores related to a single person. Identification models account for the dependence between matching scores assigned to different persons and can be used for such enhancement. In this paper we compare the use of two such models: T-normalization and the second best score model. The comparison is performed using two combination algorithms, likelihood ratio and multilayer perceptron. The results show that, while the second best score model delivers a greater performance improvement than T-normalization, the two models are complementary and can be used together for further improvement.
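For concreteness, here is a minimal sketch of the two identification models being compared, assuming the probe's scores against the other enrolled identities serve as the cohort (the exact cohort construction and score conventions in the paper may differ).

```python
import numpy as np

def t_normalize(scores):
    """T-normalize a probe's matching scores against all enrolled identities.

    scores : (N,) raw matching scores of one probe against N enrolled persons.
    For each identity, the remaining N-1 scores act as the impostor cohort.
    """
    scores = np.asarray(scores, dtype=float)
    out = np.empty_like(scores)
    for i in range(len(scores)):
        cohort = np.delete(scores, i)
        out[i] = (scores[i] - cohort.mean()) / (cohort.std() + 1e-12)
    return out

def second_best_feature(scores):
    """Second-best-score model: compare each score against the best of the rest."""
    scores = np.asarray(scores, dtype=float)
    return np.array([s - np.delete(scores, i).max() for i, s in enumerate(scores)])

raw = np.array([0.91, 0.42, 0.40, 0.38, 0.35])   # genuine match first, by construction
print("t-norm:      ", t_normalize(raw))
print("second-best: ", second_best_feature(raw))
```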
{"title":"Comparison of combination methods utilizing T-normalization and second best score model","authors":"S. Tulyakov, Zhi Zhang, V. Govindaraju","doi":"10.1109/CVPRW.2008.4563105","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563105","url":null,"abstract":"The combination of biometric matching scores can be enhanced by taking into account the matching scores related to all enrolled persons in addition to traditional combinations utilizing only matching scores related to a single person. Identification models take into account the dependence between matching scores assigned to different persons and can be used for such enhancement. In this paper we compare the use of two such models - T-normalization and second best score model. The comparison is performed using two combination algorithms - likelihood ratio and multilayer perceptron. The results show, that while second best score model delivers better performance improvement than T-normalization, two models are complementary to each other and can be used together for further improvements.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122654300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D tracking in unknown environments using on-line keypoint learning for mobile augmented reality
Pub Date: 2008-06-23 · DOI: 10.1109/CVPRW.2008.4563134
Gerhard Schall, H. Grabner, Michael Grabner, Paul Wohlhart, D. Schmalstieg, H. Bischof
In this paper we present a natural feature tracking algorithm based on on-line boosting, used for localizing a mobile computer. Mobile augmented reality requires highly accurate and fast six-degrees-of-freedom tracking in order to provide registered graphical overlays to a mobile user. With advances in mobile computer hardware, vision-based tracking approaches have the potential to provide efficient solutions that are non-invasive, in contrast to the currently dominant marker-based approaches. We propose a tracking approach that can be used in an unknown environment, i.e., the target need not be known beforehand. The core of the tracker is an on-line learning algorithm that updates the tracker as new data becomes available, which is suitable for many mobile augmented reality applications. We demonstrate the applicability of our approach on tasks where the target objects are not known beforehand, such as interactive planning.
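A schematic sketch of the on-line idea: a pool of weak threshold classifiers whose running error estimates, and hence voting weights, are updated sample by sample as new patches arrive. This is a generic illustration of on-line boosting, not the authors' selector-based tracker, and all names and numbers are assumptions.

```python
import numpy as np

class OnlineBoostedClassifier:
    """Toy on-line boosting over a fixed pool of threshold weak learners."""

    def __init__(self, thresholds):
        self.thresholds = np.asarray(thresholds, dtype=float)   # one weak learner per threshold
        self.correct = np.ones_like(self.thresholds)            # running counts (Laplace-smoothed)
        self.wrong = np.ones_like(self.thresholds)

    def _weak_predict(self, x):
        # Each weak learner votes +1 if its feature exceeds its threshold, else -1.
        return np.where(x >= self.thresholds, 1.0, -1.0)

    def update(self, x, label):
        """Incorporate one new training sample (label in {+1, -1})."""
        votes = self._weak_predict(x)
        self.correct += votes == label
        self.wrong += votes != label

    def predict(self, x):
        err = self.wrong / (self.correct + self.wrong)
        alpha = 0.5 * np.log((1 - err) / err)                   # AdaBoost-style weights
        return np.sign(np.dot(alpha, self._weak_predict(x)))

# Usage: features x could be responses of simple filters inside the tracked patch.
clf = OnlineBoostedClassifier(thresholds=[0.2, 0.5, 0.8])
for x, y in [([0.9, 0.7, 0.9], +1), ([0.1, 0.2, 0.1], -1), ([0.8, 0.6, 0.95], +1)]:
    clf.update(np.array(x), y)
print(clf.predict(np.array([0.85, 0.65, 0.9])))
```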
{"title":"3D tracking in unknown environments using on-line keypoint learning for mobile augmented reality","authors":"Gerhard Schall, H. Grabner, Michael Grabner, Paul Wohlhart, D. Schmalstieg, H. Bischof","doi":"10.1109/CVPRW.2008.4563134","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563134","url":null,"abstract":"In this paper we present a natural feature tracking algorithm based on on-line boosting used for localizing a mobile computer. Mobile augmented reality requires highly accurate and fast six degrees of freedom tracking in order to provide registered graphical overlays to a mobile user. With advances in mobile computer hardware, vision-based tracking approaches have the potential to provide efficient solutions that are non-invasive in contrast to the currently dominating marker-based approaches. We propose to use a tracking approach which can use in an unknown environment, i.e. the target has not be known beforehand. The core of the tracker is an on-line learning algorithm, which updates the tracker as new data becomes available. This is suitable in many mobile augmented reality applications. We demonstrate the applicability of our approach on tasks where the target objects are not known beforehand, i.e. interactive planing.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133180596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stereo depth with a Unified Architecture GPU
Pub Date: 2008-06-23 · DOI: 10.1109/CVPRW.2008.4563092
Joel Gibson, Oge Marques
This paper describes how the computation of depth from stereo images was accelerated using a GPU. The Compute Unified Device Architecture (CUDA) from NVIDIA was employed in novel ways to compute depth using Birchfield-Tomasi (BT) cost matching and the semi-global matching algorithm. The challenges of mapping a sequential algorithm to a massively parallel thread environment and the performance optimization techniques used are discussed.
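The naturally parallel part of such a pipeline is the per-pixel matching-cost volume, where each (row, column, disparity) cell is independent and maps onto one GPU thread. The CPU-side sketch below uses a plain absolute-difference cost as a stand-in for the Birchfield-Tomasi measure and omits the semi-global aggregation; it is illustrative only.

```python
import numpy as np

def cost_volume(left, right, max_disp):
    """Per-pixel matching-cost volume for a rectified stereo pair.

    Uses absolute intensity difference as the unary cost; the BT measure is a
    sampling-insensitive refinement of this, and semi-global matching then
    aggregates these costs along scan paths.
    """
    h, w = left.shape
    costs = np.full((h, w, max_disp), np.inf, dtype=np.float32)
    for d in range(max_disp):
        costs[:, d:, d] = np.abs(left[:, d:] - right[:, : w - d])
    return costs

def winner_takes_all(costs):
    """Pick the lowest-cost disparity per pixel (no aggregation here)."""
    return np.argmin(costs, axis=2)

rng = np.random.default_rng(4)
right = rng.random((48, 64)).astype(np.float32)
left = np.roll(right, 5, axis=1)          # synthetic pair with constant disparity 5
disp = winner_takes_all(cost_volume(left, right, max_disp=16))
print("median disparity:", np.median(disp[:, 16:]))
```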
{"title":"Stereo depth with a Unified Architecture GPU","authors":"Joel Gibson, Oge Marques","doi":"10.1109/CVPRW.2008.4563092","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563092","url":null,"abstract":"This paper describes how the calculation of depth from stereo images was accelerated using a GPU. The Compute Unified Device Architecture (CUDA) from NVIDIA was employed in novel ways to compute depth using BT cost matching and the semi-global matching algorithm. The challenges of mapping a sequential algorithm to a massively parallel thread environment and performance optimization techniques are considered.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"129 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133212470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integration of multiple contextual information for image segmentation using a Bayesian Network
Pub Date: 2008-06-23 · DOI: 10.1109/CVPRW.2008.4563043
Lei Zhang, Q. Ji
We propose a Bayesian network (BN) model to integrate multiple sources of contextual information with image measurements for image segmentation. The BN model systematically encodes the contextual relationships between regions, edges and vertices, as well as their image measurements, with uncertainties. It allows principled probabilistic inference to be performed, so that image segmentation can be achieved through most probable explanation (MPE) inference in the BN model. We have achieved encouraging results on the horse images from the Weizmann dataset. We have also demonstrated possible ways to extend the BN model to incorporate other contextual information, such as the global object shape and human intervention, for improving image segmentation. Human intervention is encoded as new evidence in the BN model; its impact is propagated through belief propagation to update the states of the whole model, and from the updated BN model a new image segmentation is produced.
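MPE inference selects the single most probable joint assignment of the hidden nodes given the evidence. The toy sketch below enumerates a two-node network with made-up conditional probability tables to show what that means; the structure and numbers are purely illustrative, not the paper's model.

```python
import itertools

# Toy Bayesian network with two hidden binary nodes, "region" (R) and "edge" (E),
# and one observed edge measurement M.  CPT entries are illustrative only.
P_R = {0: 0.4, 1: 0.6}                                  # P(R)
P_E_given_R = {(0, 0): 0.8, (0, 1): 0.2,                # P(E | R): keys are (r, e)
               (1, 0): 0.3, (1, 1): 0.7}
P_M_given_E = {(0, 0): 0.9, (0, 1): 0.1,                # P(M | E): keys are (e, m)
               (1, 0): 0.2, (1, 1): 0.8}

def mpe(observed_m):
    """Most probable explanation: argmax over hidden (R, E) with M fixed as evidence."""
    best, best_p = None, -1.0
    for r, e in itertools.product([0, 1], repeat=2):
        p = P_R[r] * P_E_given_R[(r, e)] * P_M_given_E[(e, observed_m)]
        if p > best_p:
            best, best_p = (r, e), p
    return best, best_p

print(mpe(observed_m=1))   # new evidence M=1 updates the most likely hidden states
```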
{"title":"Integration of multiple contextual information for image segmentation using a Bayesian Network","authors":"Lei Zhang, Q. Ji","doi":"10.1109/CVPRW.2008.4563043","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563043","url":null,"abstract":"We propose a Bayesian network (BN) model to integrate multiple contextual information and the image measurements for image segmentation. The BN model systematically encodes the contextual relationships between regions, edges and vertices, as well as their image measurements with uncertainties. It allows a principled probabilistic inference to be performed so that image segmentation can be achieved through a most probable explanation (MPE) inference in the BN model. We have achieved encouraging results on the horse images from the Weizmann dataset. We have also demonstrated the possible ways to extend the BN model so as to incorporate other contextual information such as the global object shape and human intervention for improving image segmentation. Human intervention is encoded as new evidence in the BN model. Its impact is propagated through belief propagation to update the states of the whole model. From the updated BN model, new image segmentation is produced.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133951475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Not only size matters: Regularized partial matching of nonrigid shapes
Pub Date: 2008-06-23 · DOI: 10.1109/CVPRW.2008.4563077
A. Bronstein, M. Bronstein
Partial matching is probably one of the most challenging problems in nonrigid shape analysis. The problem consists of matching similar parts of shapes that are dissimilar as a whole and can assume different forms by undergoing nonrigid deformations. Conceptually, two shapes can be considered partially matching if they have significant similar parts, with the simplest definition of significance being the size of the parts. Thus, partial matching can be defined as a multicriterion optimization problem that simultaneously maximizes the similarity and the size of these parts. In this paper, we propose a different definition of significance that takes into account the regularity of parts in addition to their size. The regularity term proposed here is similar in spirit to the Mumford-Shah functional. Numerical experiments show that regularized partial matching produces semantically better results than the non-regularized formulation.
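In a scalarized form, the trade-off can be pictured as a single objective that rewards part size, penalizes including dissimilar points, and penalizes irregular part boundaries. The 1D sketch below uses a transition count as a crude stand-in for a Mumford-Shah-style boundary term; the weights and representation are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def part_objective(mask, dissimilarity, lam=1.0, mu=0.5):
    """Scalarized partial-matching criterion for a candidate part.

    mask          : (N,) binary indicator of which sampled points belong to the part
    dissimilarity : (N,) per-point dissimilarity between the two shapes
    Rewards large parts, penalizes dissimilar points inside the part, and
    penalizes irregular parts via the number of boundary transitions.
    """
    mask = np.asarray(mask, dtype=float)
    size_term = mask.sum()
    dissim_term = float(np.dot(mask, dissimilarity))
    regularity_term = float(np.abs(np.diff(mask)).sum())   # boundary "length" analogue
    return size_term - lam * dissim_term - mu * regularity_term

dissim = np.r_[np.full(10, 0.1), np.full(5, 2.0), np.full(10, 0.1)]
compact = np.r_[np.ones(10), np.zeros(15)]                  # one contiguous part
fragmented = (np.arange(25) % 2 == 0).astype(float)         # many tiny fragments
print(part_objective(compact, dissim), part_objective(fragmented, dissim))
```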
{"title":"Not only size matters: Regularized partial matching of nonrigid shapes","authors":"A. Bronstein, M. Bronstein","doi":"10.1109/CVPRW.2008.4563077","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563077","url":null,"abstract":"Partial matching is probably one of the most challenging problems in nonrigid shape analysis. The problem consists of matching similar parts of shapes that are dissimilar on the whole and can assume different forms by undergoing nonrigid deformations. Conceptually, two shapes can be considered partially matching if they have significant similar parts, with the simplest definition of significance being the size of the parts. Thus, partial matching can be defined as a multicriterion optimization problem trying to simultaneously maximize the similarity and the size of these parts. In this paper, we propose a different definition of significance, taking into account the regularity of parts besides their size. The regularity term proposed here is similar to the spirit of the Mumford-Shah functional. Numerical experiments show that the regularized partial matching produces semantically better results compared to the non-regularized one.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133379410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning the abstract motion semantics of verbs from captioned videos
Pub Date: 2008-06-23 · DOI: 10.1109/CVPRW.2008.4563042
Stefan Mathe, A. Fazly, Sven J. Dickinson, S. Stevenson
We propose an algorithm for learning the semantics of a (motion) verb from videos depicting the action expressed by the verb, paired with sentences describing the action participants and their roles. Acknowledging that commonalities among example videos may not exist at the level of the input features, our approximation algorithm efficiently searches the space of more abstract features for a common solution. We test our algorithm by using it to learn the semantics of a sample set of verbs; results demonstrate the usefulness of the proposed framework, while identifying directions for further improvement.
{"title":"Learning the abstract motion semantics of verbs from captioned videos","authors":"Stefan Mathe, A. Fazly, Sven J. Dickinson, S. Stevenson","doi":"10.1109/CVPRW.2008.4563042","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563042","url":null,"abstract":"We propose an algorithm for learning the semantics of a (motion) verb from videos depicting the action expressed by the verb, paired with sentences describing the action participants and their roles. Acknowledging that commonalities among example videos may not exist at the level of the input features, our approximation algorithm efficiently searches the space of more abstract features for a common solution. We test our algorithm by using it to learn the semantics of a sample set of verbs; results demonstrate the usefulness of the proposed framework, while identifying directions for further improvement.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130307625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}