Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4563116
A. Rattani, G. Marcialis, F. Roli
The representativeness of a biometric template gallery with respect to novel data has recently been addressed by "template update" algorithms, which update the enrolled templates in order to better capture and represent the subject's intra-class variations. The majority of proposed approaches adopt a "self-update" technique, in which the system updates itself using its own knowledge. Recently, an approach named template co-update, which uses two complementary biometrics to "co-update" each other, has been introduced. In this paper, we investigate whether template co-update captures intra-class variations better than state-of-the-art self-update algorithms. Accordingly, experiments are conducted under two conditions: a controlled and an uncontrolled environment. Reported results show that co-update can outperform the self-update technique when the initial enrolled templates are poorly representative of the novel data (uncontrolled environment), whilst almost identical performance is obtained when the initial enrolled templates represent the input data well (controlled environment).
Title: Capturing large intra-class variations of biometric data by template co-updating
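The co-update loop the abstract describes can be sketched in a few lines: whenever one modality's matcher accepts an unlabeled paired sample with high confidence, the companion sample of the other modality is added to the other gallery. This is an illustrative sketch under assumed names (`co_update`, `match_a`, `match_b`, a single confidence threshold `thr`), not the authors' algorithm:

```python
def co_update(gallery_a, gallery_b, batch, match_a, match_b, thr):
    """One co-update pass over a batch of paired biometric samples.

    If modality A confidently matches its sample against gallery A,
    the paired sample of modality B is added to gallery B (and vice
    versa), so each biometric helps the other adapt to novel data.
    """
    for sample_a, sample_b in batch:
        if match_a(sample_a, gallery_a) >= thr:
            gallery_b.append(sample_b)
        if match_b(sample_b, gallery_b) >= thr:
            gallery_a.append(sample_a)
    return gallery_a, gallery_b
```

With a conservative threshold, only confidently genuine samples propagate, which is what lets co-update outperform self-update when the initial templates represent the input data poorly.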
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4563109
Jinyu Zuo, N. Ratha, J. Connell
Iris segmentation is an important first step for high-accuracy iris recognition. A robust iris segmentation procedure should be able to handle noise, occlusion and non-uniform lighting. Segmentation also impacts system accuracy: high FAR or FRR values may come directly from bad or wrong segmentations. In this paper, a simple new approach to iris segmentation is proposed that integrates quality-evaluation ideas directly into the segmentation algorithm. By cutting out all the bad areas, the fraction of the iris that remains can be used as a comprehensive quality measure. This eliminates images with high occlusion (e.g. by the eyelids) as well as images with other quality problems (e.g. low contrast), all using the same mechanism. The proposed method has been tested on a medium-sized (450-image) public database (MMU1) and the score distribution investigated. We also show that, as expected, overall matching accuracy can be improved by rejecting images with a low quality assessment, thus validating the utility of this measure.
Title: A new approach for iris segmentation
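The quality measure the abstract sketches — the fraction of the iris left after all bad areas are cut out — reduces to a ratio of mask areas. A minimal sketch with assumed names (`iris_quality`, binary masks for the segmented iris and for occluded/low-quality pixels), not the paper's segmentation algorithm:

```python
import numpy as np

def iris_quality(iris_mask, noise_mask):
    """Fraction of the segmented iris that remains after cutting out
    occluded or low-quality areas; usable as a comprehensive quality
    score in [0, 1]."""
    iris = np.asarray(iris_mask).astype(bool)
    bad = np.asarray(noise_mask).astype(bool)
    usable = iris & ~bad
    total = iris.sum()
    return usable.sum() / total if total else 0.0
```

Images whose score falls below a chosen threshold would then be rejected before matching.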
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4562989
N. Cahill, J. Schnabel, J. Noble, D. Hawkes
Studholme et al. introduced normalized mutual information (NMI) as an overlap-invariant generalization of mutual information (MI). Even though Studholme showed that NMI could be used effectively in multimodal medical image alignment, the overlap invariance was only established empirically on a few simple examples. In this paper, we present a simple example in which NMI fails to be invariant to changes in overlap size, as do other standard similarity measures including MI, cross correlation (CCorr), correlation coefficient (CCoeff), correlation ratio (CR), and entropy correlation coefficient (ECC). We then derive modified forms of all of these similarity measures that are proven to be invariant to changes in overlap size, by making certain assumptions about background statistics. Experiments on multimodal rigid registration of brain images show that 1) most of the modified similarity measures outperform their standard forms, and 2) the modified version of MI exhibits superior performance over the other similarity measures for both CT/MR and PET/MR registration.
Title: Revisiting overlap invariance in medical image alignment
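For reference, the standard (non-overlap-invariant) forms of the two measures the paper revisits are MI = H(X) + H(Y) − H(X,Y) and Studholme's NMI = (H(X) + H(Y)) / H(X,Y). A minimal histogram-based sketch of these baselines — the modified, overlap-invariant measures derived in the paper are not reproduced here:

```python
import numpy as np

def entropies(x, y, bins=32):
    """Marginal and joint entropies (in nats) from a joint histogram
    of two intensity samples x and y."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def H(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return H(px), H(py), H(pxy.ravel())

def mutual_information(x, y, bins=32):
    hx, hy, hxy = entropies(x, y, bins)
    return hx + hy - hxy           # MI = H(X) + H(Y) - H(X,Y)

def normalized_mutual_information(x, y, bins=32):
    hx, hy, hxy = entropies(x, y, bins)
    return (hx + hy) / hxy         # NMI (Studholme et al.)
```

For identical images H(X,Y) = H(X), so NMI attains its maximum of 2; for independent images it is close to 1.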
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4563027
N. Batmanghelich, R. Verma
Tissue deterioration induced by disease can be viewed as a continuous change of tissue from healthy to diseased, and hence can be modeled as a non-linear manifold with completely healthy tissue at one end of the spectrum and fully abnormal tissue, such as lesions, at the other. The ability to quantify this deterioration as a continuous score of tissue abnormality helps determine the degree of disease progression and treatment effects. We propose a semi-supervised method for determining such an abnormality manifold, using multi-parametric magnetic resonance features incorporated into a support vector machine framework in combination with manifold regularization. The position of a tissue voxel on this spatially and temporally smooth manifold determines its degree of abnormality. We apply the framework to the characterization of tissue abnormality in the brains of multiple sclerosis patients followed longitudinally, obtaining a voxel-wise abnormality score called the tissue abnormality map and thereby a voxel-wise measure of disease progression.
Title: On non-linear characterization of tissue abnormality by constructing disease manifolds
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4563046
Nikhil Rasiwasia, N. Vasconcelos
In recent years, query-by-semantic-example (QBSE) has become a popular approach to content-based image retrieval. QBSE extends the well-established query-by-example retrieval paradigm to the semantic domain. While various authors have pointed out the benefits of QBSE, various open questions remain about this paradigm, including a lack of precise understanding of how overall performance depends on the different parameters of the system. In this work, we present a systematic experimental study of the QBSE framework, broadly divided into three parts. First, we examine the space of low-level visual features and its effect on retrieval performance. Second, we study the space of learned semantic concepts, herein denoted the "semantic space", and show that not all semantic concepts are equally informative for retrieval. Finally, we study the intrinsic structure of the semantic space by analyzing the contextual relationships between semantic concepts, and show that this intrinsic structure is crucial for the performance improvements.
Title: A study of query by semantic example
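In QBSE, an image is represented by a vector of posterior probabilities over the learned semantic concepts, and retrieval ranks database images by the similarity of these vectors. A toy sketch of that ranking step; cosine similarity is used here purely as an illustrative stand-in for the retrieval metric, and the function name is assumed:

```python
import numpy as np

def semantic_similarity(query_pi, database_pis):
    """Rank database images by similarity of their semantic multinomials
    (vectors of concept posteriors) to the query's.

    query_pi: (C,) concept-posterior vector of the query image.
    database_pis: (N, C) matrix of concept posteriors, one row per image.
    Returns (ranking, scores): indices sorted best-first, and raw scores.
    """
    q = query_pi / np.linalg.norm(query_pi)
    D = database_pis / np.linalg.norm(database_pis, axis=1, keepdims=True)
    scores = D @ q                      # cosine similarity per database image
    return np.argsort(-scores), scores
```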
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4562986
Scott McCloskey
The use of photo-response non-uniformity (PRNU) has been proposed as the basis of a sensor fingerprint for common source camera identification. We test the PRNU-based fingerprint on a set of videos chosen to represent a wide range of potential inputs. Based on the results of these tests, we propose a confidence weighting scheme to address the problem of extracting a viable fingerprint from videos where high-frequency content (e.g. edges) persists at a given image location. We further show that the extended PRNU estimation algorithm with confidence weighting improves performance on such problematic videos.
Title: Confidence weighting for sensor fingerprinting
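The PRNU fingerprint itself (before the paper's confidence weighting) is conventionally estimated from denoising residuals: K ≈ Σ Wᵢ Iᵢ / Σ Iᵢ² with Wᵢ = Iᵢ − denoise(Iᵢ), and a test residual is matched to a fingerprint by normalized correlation. A sketch under those standard definitions; the box-filter denoiser and function names are illustrative simplifications, not the paper's weighted estimator:

```python
import numpy as np

def box_denoise(img, k=3):
    """Crude k-by-k box-filter denoiser (stand-in for a wavelet denoiser)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def estimate_prnu(frames):
    """Standard PRNU estimate: K ~ sum(W_i * I_i) / sum(I_i^2),
    with W_i the denoising residual of frame I_i."""
    num = np.zeros(frames[0].shape, dtype=float)
    den = np.zeros(frames[0].shape, dtype=float)
    for I in frames:
        I = I.astype(float)
        W = I - box_denoise(I)
        num += W * I
        den += I * I
    return num / np.maximum(den, 1e-12)

def correlate(a, b):
    """Normalized cross-correlation between two patterns."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

The confidence weighting proposed in the paper would then down-weight image locations where persistent high-frequency scene content contaminates the residual.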
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4562997
L. Astola, L. Florack
This paper concerns geometric measures in diffusion tensor imaging (DTI) analysis, and is a continuation of our previous work (L. Astola et al., 2007), where we discussed two measures for diffusion tensor (DT) image (fiber tractography) analysis. Its contribution is threefold. First, we show how the so-called connectivity measure performs on a real DTI image with three different interpolation methods. Second, we introduce a new vector field on DTI images that points out the locally most coherent direction for fiber tracking, and illustrate it on bundles of tracked fibers. Third, we introduce an inhomogeneity (edge, crossing) detector for symmetric positive matrix-valued images, including DTI images. One possible application is segmentation of diffusion tensor fields.
Title: Sticky vector fields, and other geometric measures on diffusion tensor images
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4563174
Guoying Zhao, M. Pietikäinen
Feature definition and selection are two important aspects of the visual analysis of motion. In this paper, spatiotemporal local binary patterns computed at multiple resolutions are proposed for describing dynamic events, combining static and dynamic information from different spatiotemporal resolutions. Appearance and motion are the key components of visual analysis related to movement. The AdaBoost algorithm is used to learn the principal appearance and motion from spatiotemporal descriptors derived from three orthogonal planes, providing important information about the locations and types of features for further analysis. In addition, learners are designed to select the most important features for each specific pair of classes. Experiments on two diverse visual analysis tasks, facial expression recognition and visual speech recognition, show the effectiveness of the approach.
Title: Principal appearance and motion from boosted spatiotemporal descriptors
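The spatiotemporal descriptor builds on the basic local binary pattern: each pixel is coded by thresholding its 8 neighbours against it, and a histogram of the codes describes the region; the three-orthogonal-planes variant computes such histograms on the XY, XT and YT slices of the video volume and concatenates them. A sketch of the single-plane building block (function names assumed):

```python
import numpy as np

def lbp8(img):
    """Basic 8-neighbour local binary pattern codes for interior pixels:
    bit b is set when the b-th neighbour is >= the centre pixel."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c) << bit).astype(np.uint8)
    return code

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes, the per-plane descriptor that
    the three-orthogonal-planes variant concatenates across XY/XT/YT."""
    h, _ = np.histogram(lbp8(img), bins=bins, range=(0, bins))
    return h / h.sum()
```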
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4563082
P. Taddei, A. Bartoli
We deal with the 3D reconstruction of deformed paper-like surfaces given a template and a single perspective image for which the internal camera parameters are known. The general problem is ill-posed. We show that when the surface rulings are parallel, the problem is well-posed. Given a procedure to recover the ruling direction, this particular problem is equivalent to the reconstruction of a 2D curve seen from a set of 1D camera pairs given a 1D template. Paper can be physically modeled by exploiting local properties, which allows us to formulate the reconstruction problem as a nonlinear variational optimization. We provide experimental results that validate our approach on simulated and real data.
Title: Template-based paper reconstruction from a single image is well posed when the rulings are parallel
Pub Date: 2008-06-23 | DOI: 10.1109/cvprw.2008.4563106
Q. Tao, R. Veldhuis
A general framework for fusion at the decision level, which works on ROCs instead of matching scores, is investigated. Under this framework, we further propose a hybrid fusion method that combines score-level and decision-level fusion, taking advantage of both fusion modes. The hybrid fusion adaptively tunes itself between the two levels of fusion and improves the final performance over both original levels. The proposed hybrid fusion is simple and effective for combining different biometrics.
Title: Hybrid fusion for biometrics: Combining score-level and decision-level fusion
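The two fusion modes being combined are straightforward to state: score-level fusion combines matcher outputs before thresholding (e.g. a weighted sum), while decision-level fusion thresholds each matcher first and combines the binary decisions (e.g. AND/OR rules). A minimal sketch of both modes with assumed names; the adaptive tuning between them is the paper's contribution and is not reproduced here:

```python
import numpy as np

def score_fusion(scores_a, scores_b, w=0.5):
    """Score-level fusion: weighted sum of (already normalized)
    matcher scores; a single threshold is applied afterwards."""
    return w * scores_a + (1 - w) * scores_b

def decision_fusion(scores_a, scores_b, thr_a, thr_b, rule="AND"):
    """Decision-level fusion: threshold each matcher separately,
    then combine the binary accept/reject decisions."""
    da = scores_a >= thr_a
    db = scores_b >= thr_b
    return (da & db) if rule == "AND" else (da | db)
```

The AND rule lowers the false accept rate at the cost of false rejects; the OR rule does the opposite, which is why a hybrid scheme can adaptively pick the better operating point.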