Detection of pedestrians under occlusion has previously been addressed with body-part-based approaches, in particular using the generalised Hough transform. Tracking is usually addressed by first detecting pedestrians in each frame independently and then tracking the detections over time. This paper presents a novel variation on the generalised Hough approach: tracking is performed first, and detection second. Robust features on a pedestrian are tracked over short time-frames to form tracklets. Not only do tracklets reduce false alarms due to unstable features, but they also provide temporal correspondence information in Hough space. Consequently, tracking can be posed as optimal path finding in Hough space and solved efficiently using the Viterbi algorithm. The paper also presents an improvement to the random Hough forest training method by using multi-objective optimisation.
{"title":"Occluded Pedestrian Tracking Using Body-Part Tracklets","authors":"J. Sherrah","doi":"10.1109/DICTA.2010.61","DOIUrl":"https://doi.org/10.1109/DICTA.2010.61","url":null,"abstract":"Detection of pedestrians under occlusion has been addressed previously with body-part-based approaches, in particular using the generalised Hough transform. Tracking is usually addressed by first detecting pedestrians in each frame independently and then tracking the detections over time. This paper presents a novel variation on the generalised Hough approach: tracking is performed first, and detection second. Robust features on a pedestrian are tracked over short time-frames to form tracklets. Not only do tracklets reduce false alarms due to unstable features, but they provide temporal correspondence information in Hough space. Consequently tracking can be posed as optimal path finding in Hough space and efficiently solved using the Viterbi algorithm. The paper also presents an improvement to the random Hough forest training method by using multi-objective optimisation.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"05 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130521543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proffers two non-linear empirical parametric models, the linear slope and Ricker models, for use in characterising contrast enhancement in dynamic contrast-enhanced (DCE) MRI. The advantage of these models over existing empirical parametric and pharmacokinetic models is that they can be fitted using linear least squares (LS). This means that fitting is quick, there is no need to specify initial parameter estimates, and there are no convergence issues. Furthermore, the LS fit can itself be used to provide initial parameter estimates for a subsequent non-linear least squares (NLS) fit (self-starting models). The results of an empirical evaluation of the goodness of fit (GoF) of these two models, measured in terms of both MSE and R^2, relative to a two-compartment pharmacokinetic model and the Hayton model are also presented. The GoF was evaluated using both routine clinical breast MRI data and a single high temporal resolution breast MRI data set. The results demonstrate that the linear slope model fits the routine clinical data better than any of the other models and that the two-parameter self-starting Ricker model fits the data nearly as well as the three-parameter Hayton model. This is also demonstrated by the results for the high temporal resolution data and for several temporally sub-sampled versions of these data.
{"title":"Two Non-linear Parametric Models of Contrast Enhancement for DCE-MRI of the Breast Amenable to Fitting Using Linear Least Squares","authors":"A. Mehnert, M. Wildermoth, S. Crozier, E. Bengtsson, D. Kennedy","doi":"10.1109/DICTA.2010.108","DOIUrl":"https://doi.org/10.1109/DICTA.2010.108","url":null,"abstract":"This paper proffers two non-linear empirical parametric models—linear slope and Ricker—for use in characterising contrast enhancement in dynamic contrast enhanced (DCE) MRI. The advantage of these models over existing empirical parametric and pharmacokinetic models is that they can be fitted using linear least squares (LS). This means that fitting is quick, there is no need to specify initial parameter estimates, and there are no convergence issues. Furthermore the LS fit can itself be used to provide initial parameter estimates for a subsequent NLS fit (self-starting models). The results of an empirical evaluation of the goodness of fit (GoF) of these two models, measured in terms of both MSE and R^2, relative to a two-compartment pharmacokinetic model and the Hayton model are also presented. The GoF was evaluated using both routine clinical breast MRI data and a single high temporal resolution breast MRI data set. The results demonstrate that the linear slope model fits the routine clinical data better than any of the other models and that the two parameter self-starting Ricker model fits the data nearly as well as the three parameter Hayton model. This is also demonstrated by the results for the high temporal data and for several temporally sub-sampled versions of this data.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115584172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dual-modality PET-CT imaging is widely used as an essential diagnostic tool for monitoring treatment response in patients with malignant disease. However, evaluating treatment outcomes in serial scans by visually inspecting multiple PET-CT volumes is time consuming and laborious. In this paper, we propose an automated algorithm to detect the occurrence and changes of hot-spots in intra-subject FDG-PET images from combined PET-CT scanners. In this algorithm, multiple CT images of the same subject are aligned using an affine transformation, and the estimated transformation is then used to align the corresponding PET images into the same coordinate system. Hot-spots are identified using thresholding and region growing, with parameters determined specifically for different body parts. The changes of the detected hot-spots over time are analysed and presented. Our results on 19 clinical PET-CT studies demonstrate that the proposed algorithm performs well.
{"title":"Automated Detection of the Occurrence and Changes of Hot-Spots in Intro-subject FDG-PET Images from Combined PET-CT Scanners","authors":"Jiyong Wang, D. Feng, Yong Xia","doi":"10.1109/DICTA.2010.20","DOIUrl":"https://doi.org/10.1109/DICTA.2010.20","url":null,"abstract":"Dual-modality PET-CT imaging has been prevalently used as an essential diagnostic tool for monitoring treatment response in malignant disease patients. However, evaluation of treatment outcomes in serial scans by visual inspecting multiple PET-CT volumes is time consuming and laborious. In this paper, we propose an automated algorithm to detect the occurrence and changes of hot-spots in intro-subject FDG-PET images from combined PET-CT scanners. In this algorithm, multiple CT images of the same subject are aligned by using an affine transformation, and the estimated transformation is then used to align the corresponding PET images into the same coordinate system. Hot-spots are identified using thresholding and region growing with parameters determined specifically for different body parts. The changes of the detected hot-spots over time are analysed and presented. Our results in 19 clinical PET-CT studies demonstrate that the proposed algorithm has a good performance.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"770 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120865219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A dense point-based registration is an ideal starting point for detailed comparison between two neuroanatomical objects. This paper presents a new algorithm for global dense point-based registration between anatomical objects without assumptions about their shape. We represent mesh models of the surfaces of two similar 3D anatomical objects using a Markov Random Field and seek correspondence pairs between points in each shape. However, for densely sampled objects the set of possible point-to-point correspondences is very large. We solve the global non-rigid matching problem between the two objects efficiently by applying loopy belief propagation. Typically, loopy belief propagation is of order m^3 per iteration, where m is the number of nodes in a mesh. By avoiding computation of probabilities of configurations that cannot occur in practice, we reduce this to order m^2. We demonstrate the method and its performance by registering hippocampi from a population of individuals aged 60-69. We also find a corresponding rigid registration and compare the results to a state-of-the-art technique, showing comparable accuracy. Our method provides a global registration without prior information about alignment and handles arbitrary shapes of spherical topology.
{"title":"Sparse Update for Loopy Belief Propagation: Fast Dense Registration for Large State Spaces","authors":"Pengdong Xiao, N. Barnes, P. Lieby, T. Caetano","doi":"10.1109/DICTA.2010.97","DOIUrl":"https://doi.org/10.1109/DICTA.2010.97","url":null,"abstract":"A dense point-based registration is an ideal starting point for detailed comparison between two neuroanatomical objects. This paper presents a new algorithm for global dense point-based registration between anatomical objects without assumptions about their shape. We represent mesh models of the surfaces of two similar 3D anatomical objects using a Markov Random Field and seek correspondence pairs between points in each shape. However, for densely sampled objects the set of possible point by point correspondences is very large. We solve the global non-rigid matching problem between the two objects in an efficient manner by applying loopy belief propagation. Typically loopy belief propagation is of order m^3 for each iteration, where m is the number of nodes in a mesh. By avoiding computation of probabilities of configurations that cannot occur in practice, we reduce this to order m^2. We demonstrate the method and its performance by registering hippocampi from a population of individuals aged 60-69. We find a corresponding rigid registration, and compare the results to a state-of-the-art technique and show comparable accuracy. Our method provides a global registration without prior information about alignment, and handles arbitrary shapes of spherical topology.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121184367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Support Vector Machine (SVM) is an effective classification tool. Though extremely effective, SVMs are not a panacea: SVM training and testing are computationally expensive, and tuning the kernel parameters is a complicated procedure. On the other hand, the k-Nearest Neighbour (KNN) classifier is computationally efficient. In order to achieve the classification performance of an SVM together with the computational efficiency of a KNN classifier, it has been shown previously that, rather than training a single global SVM, a separate SVM can be trained for the neighbourhood of each query point. In this work, we extend this Local SVM (LSVM) formulation. Our Local Adaptive SVM (LASVM) formulation trains a local SVM in a modified neighbourhood space of a query point. The main contributions of the paper are twofold: first, we present a novel LASVM algorithm to train a local SVM; second, we discuss in detail the motivations behind the LSVM and LASVM formulations and their possible impact on tuning the kernel parameters of an SVM. We found that training an SVM in a local adaptive neighbourhood can result in a significant classification performance gain. Experiments were conducted on a selection of the UCI ML, face, object, and digit databases.
{"title":"Local Adaptive SVM for Object Recognition","authors":"Nayyar Zaidi, D. Squire","doi":"10.1109/DICTA.2010.44","DOIUrl":"https://doi.org/10.1109/DICTA.2010.44","url":null,"abstract":"The Support Vector Machine (SVM) is an effective classification tool. Though extremely effective, SVMs are not a panacea. SVM training and testing is computationally expensive. Also, tuning the kernel parameters is a complicated procedure. On the other hand, the Nearest Neighbor (KNN) classifier is computationally efficient. In order to achieve the classification efficiency of an SVM and the computational efficiency of a KNN classifier, it has been shown previously that, rather than training a single global SVM, a separate SVM can be trained for the neighbourhood of each query point. In this work, we have extended this Local SVM (LSVM) formulation. Our Local Adaptive SVM (LASVM) formulation trains a local SVM in a modified neighborhood space of a query point. The main contributions of the paper are twofold: First, we present a novel LASVM algorithm to train a local SVM. Second, we discuss in detail the motivations behind the LSVM and LASVM formulations and its possible impacts on tuning the kernel parameters of an SVM. We found that training an SVM in a local adaptive neighborhood can result in significant classification performance gain. Experiments have been conducted on a selection of the UCIML, face, object, and digit databases.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123306731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Current state-of-the-art image retrieval methods represent images as an unordered collection of local patches, each of which is classified as a "visual word" from a fixed vocabulary. This paper presents a simple but innovative way to uncover the spatial relationships between visual words so that we can discover words that represent the same latent topic and thereby improve the retrieval results. The method is borrowed from text retrieval and is analogous to a text thesaurus in that it describes a broad set of equivalence relationships between words. We evaluate our method on the popular Oxford Buildings dataset. This makes it possible to compare our method with previous work on image retrieval, and the results show that it is comparable to more complex state-of-the-art methods.
{"title":"Image Retrieval with a Visual Thesaurus","authors":"Yanzhi Chen, A. Dick, A. Hengel","doi":"10.1109/DICTA.2010.11","DOIUrl":"https://doi.org/10.1109/DICTA.2010.11","url":null,"abstract":"Current state-of-art of image retrieval methods represent images as an unordered collection of local patches, each of which is classified as a \"visual word\" from a fixed vocabulary. This paper presents a simple but innovative way to uncover the spatial relationship between visual words so that we can discover words that represent the same latent topic and thereby improve the retrieval results. The method in this paper is borrowed from text retrieval, and is analogous to a text thesaurus in that it describes a broad set of equivalence relationship between words. We evaluate our method on the popular Oxford Building dataset. This makes it possible to compare our method with previous work on image retrieval, and the results show that our method is comparable to more complex state of the art methods.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114824853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper the image restoration problem is solved using a compressive sensing approach and the translation-invariant, undecimated à trous wavelet transform. The problem is cast as an unconstrained optimization problem which is solved using the Fletcher-Reeves nonlinear conjugate gradient method. A comparison based on experimental results shows that the proposed method achieves comparable, if not better, performance than other state-of-the-art techniques.
{"title":"A Compressive Sensing Approach to Image Restoration","authors":"Matthew Andrew Kitchener, A. Bouzerdoum, S. L. Phung","doi":"10.1109/DICTA.2010.28","DOIUrl":"https://doi.org/10.1109/DICTA.2010.28","url":null,"abstract":"In this paper the image restoration problem is solved using a Compressive Sensing approach, and the translation invariant, a Trous, undecimated wavelet transform. The problem is cast as an unconstrained optimization problem which is solved using the Fletcher-Reeves nonlinear conjugate gradient method. A comparison based on experimental results shows that the proposed method achieves comparable if not better performance as other state-of-the-art techniques.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"197 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126747818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Symmetric-SIFT is a recently proposed local technique for registering multimodal images. It is based on a well-known general image registration technique named the Scale Invariant Feature Transform (SIFT). Symmetric-SIFT makes use of the gradient magnitude information at the image's key regions to build the descriptors. In this paper, we highlight an issue with how the magnitude information is used in this process. This issue may result in similar descriptors being built to represent regions in images that are visually different. To address this issue, we propose two new strategies for weighting the descriptors. Our experimental results show that Symmetric-SIFT descriptors built using our proposed strategies can lead to better registration accuracy than descriptors built using the original Symmetric-SIFT technique. The issue highlighted and the two strategies proposed are also applicable to the general SIFT technique.
{"title":"An Enhancement to SIFT-Based Techniques for Image Registration","authors":"Md. Tanvir Hossain, S. Teng, Guojun Lu, M. Lackmann","doi":"10.1109/DICTA.2010.39","DOIUrl":"https://doi.org/10.1109/DICTA.2010.39","url":null,"abstract":"Symmetric-SIFT is a recently proposed local technique used for registering multimodal images. It is based on a well-known general image registration technique named Scale Invariant Feature Transform (SIFT). Symmetric SIFT makes use of the gradient magnitude information at the image’s key regions to build the descriptors. In this paper, we highlight an issue with how the magnitude information is used in this process. This issue may result in similar descriptors being built to represent regions in images that are visually different. To address this issue, we have proposed two new strategies for weighting the descriptors. Our experimental results show that Symmetric-SIFT descriptors built using our proposed strategies can lead to better registration accuracy than descriptors built using the original Symmetric-SIFT technique. The issue highlighted and the two strategies proposed are also applicable to the general SIFT technique.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124520977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we present a novel re-texturing approach using intrinsic video. Our approach begins by indicating the regions of interest using contour-aware layer segmentation. Then, the intrinsic video (comprising reflectance and illumination components) within the segmented region is recovered by our weighted energy optimization. After that, we compute the normals for the re-textured region and the texture coordinates in key frames through our newly developed optimization approach, while the texture coordinates in non-key frames are optimized by our proposed energy function. Finally, once the target sample texture is specified, the re-textured video is created by multiplying the re-textured reflectance component by the original illumination component within the replaced region. As demonstrated in our experimental results, our method produces high-quality video re-texturing results while preserving the lighting and shading effects of the original video.
{"title":"Re-texturing by Intrinsic Video","authors":"Jianbing Shen, Xing Yan, Lin Chen, Hanqiu Sun, Xuelong Li","doi":"10.1109/DICTA.2010.88","DOIUrl":"https://doi.org/10.1109/DICTA.2010.88","url":null,"abstract":"In this paper, we present a novel re-texturing approach using intrinsic video. Our approach begins with indicating the regions of interests by contour-aware layer segmentation. Then, the intrinsic video (including reflectance and illumination components) within the segmented region is recovered by our weighted energy optimization. After that, we compute the normals for the re-textured region, and the texture coordinates in key frames through our newly developed optimization approach. At the same time, the texture coordinates in non-key frames are optimized by our proposed energy function. Finally, when the target sample texture is specified, the re-textured video is created by multiplying the re-textured reflectance component by the original illumination component within the replaced region. As demonstrated in our experimental results, our method can produce high quality video re-texturing results with preserving the lighting and shading effect of the original video.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130343344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stenosis of the internal carotid artery (ICA) is implicated in approximately one quarter of stroke cases. The degree of stenosis is currently used to decide whether to undertake a surgical procedure to reduce the risk of further stroke. However, it is known that the degree of stenosis is not a good predictor of stroke risk. It is hoped that prediction might be improved by incorporating other geometric factors. This paper describes a data-driven approach using classical methods from the field of mathematical morphology to automatically segment the carotid artery tree in computed tomography angiography (CTA) images following user initialization. The resulting segmentation may be used to characterize the arterial geometry in a variety of more complex ways than is possible using manual approaches.
{"title":"Segmentation of Carotid Arteries in CTA Images","authors":"R. Beare, W. Chong, M. Ren, G. Das, V. Srikanth, T. Phan","doi":"10.1109/DICTA.2010.21","DOIUrl":"https://doi.org/10.1109/DICTA.2010.21","url":null,"abstract":"Stenos is of the internal carotid artery (ICA) is implicated in approximately one quarter of stroke cases. The degree of stenos is is currently used to decide whether to undertake a surgical procedure to reduce the risk of further stroke. However it is known that the degree of stenos is is not a good predictor of stroke risk. It is hoped that prediction might be improved by incorporation of other geometric factors. This paper describes a data driven approach using classical methods from the field of mathematical morphology to automatically segment the carotid artery tree in computed tomography angiography (CTA) images following user initialization. The resulting segmentation may be used to characterize the the arterial geometery in a variety of more complex ways than is possible using manual approaches.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"22 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133617802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}