Towards Automated Breast Mass Classification using Deep Learning Framework
Pinaki Ranjan Sarkar, Priya Prabhakar, Deepak Mishra, Gorthi R. K. S. S. Manyam
DOI: 10.1109/DSAA.2019.00060
Due to high variability in shape, structure, and occurrence, non-palpable breast masses are often missed even by experienced radiologists. Computer-aided detection (CAD) systems are widely used to aid radiologists with more accurate identification. Most existing CAD systems rely on complex handcrafted features, which makes further performance improvement difficult. High-level features extracted by deep learning models have already proven superior to low- and mid-level handcrafted features. In this paper, we propose an automated deep CAD system that performs both mass detection and classification. Our framework is composed of three cascaded stages: suspicious region identification, mass/no-mass detection, and mass classification. To detect suspicious regions in a breast mammogram, we use a deep hierarchical mass prediction network. We then decide whether the predicted lesions contain abnormal masses using high-level CNN features extracted from augmented intensity and wavelet features. Afterwards, mass classification is carried out only for abnormal cases, using the same CNN structure. The whole breast mass classification process, including the extraction of wavelet features, is automated in this work. We tested the proposed model on the widely used DDSM and INbreast databases, where the mass prediction network achieved sensitivities of 0.94 and 0.96, respectively, followed by mass/no-mass detection with areas under the receiver operating characteristic (ROC) curve (AUC) of 0.9976 and 0.9922. Finally, the classification network obtained an accuracy of 98.05% on DDSM and 98.14% on INbreast, which we believe is the best reported so far.
Deep Crowd Counting In Congested Scenes Through Refine Modules
Tong Li, Chuan Wang, Xiaochun Cao
DOI: 10.1109/DSAA.2019.00033
Crowd counting, which aims to predict the number of persons in a highly congested scene, has been widely explored and has many applications, such as video surveillance and pedestrian flow analysis. Severe mutual occlusion among persons, large perspective distortion, and scale variations all hinder accurate estimation. Although existing approaches have made much progress, there is still room for improvement. The drawbacks of existing methods are twofold: (1) scale information, an important factor for crowd counting, is often insufficiently exploited and thus cannot yield well-estimated results; (2) using a unified framework for the whole image may produce rough estimates in subregions, leading to inaccurate counts. Motivated by this, we propose a new method to address these problems. We first construct a crowd-specific, scale-aware convolutional neural network, which accounts for crowd scale variations and integrates multi-scale feature representations in the Cross Scale Module (CSM), to produce the initial predicted density map. The proposed Local Refine Modules (LRMs) then gradually re-estimate the predictions of subregions. We conduct experiments on three crowd counting datasets (ShanghaiTech, UCF_CC_50, and UCSD). Experiments show that our method achieves superior performance compared with the state of the art. In addition, we obtain better results on counting vehicles in the TRANCOS dataset, which demonstrates the generalization ability of the proposed method.
Joint Selection of Central and Extremal Prototypes Based on Kernel Minimum Enclosing Balls
C. Bauckhage, R. Sifa
DOI: 10.1109/DSAA.2019.00040
We present a simple, two-step procedure that selects central and extremal prototypes from a given set of data. The key idea is to identify minima of the function that characterizes the interior of a kernel minimum enclosing ball of the data. We discuss how to efficiently compute kernel minimum enclosing balls using the Frank-Wolfe algorithm and show that, for Gaussian kernels, the sought-after prototypes can be found naturally via a variant of the mean shift procedure. Practical results demonstrate that prototypes found this way are descriptive, meaningful, and interpretable.
Augmenting U.S. Census data on industry and occupation of respondents
P. Meyer, Kendra Asher
DOI: 10.1109/dsaa.2019.00076
The U.S. Census Bureau classifies survey respondents into hundreds of detailed industry and occupation categories. The classification systems change periodically, creating breaks in time series. Standard crosswalks and unified category systems bridge the periods, but these often leave sparse or empty cells or induce sharp changes in time series. We propose a methodology to predict standardized industry, occupation, and related variables for each employed respondent in the public use samples from recent Censuses of Population and CPS data. Unlike earlier approaches, the predictions draw on micro data for each individual and large training data sets. Tests of the resulting “augmented” data sets can evaluate their consistency with known trends, smoothness criteria, and benchmarks.
Cross-Media Image-Text Retrieval Combined with Global Similarity and Local Similarity
Zhixin Li, Feng Ling, Canlong Zhang
DOI: 10.1109/DSAA.2019.00029
In this paper, we study the problem of image-text matching, with the goal of better semantic matching between images and text. Previous work simply used pre-trained networks to extract image and text features and project them directly into a common subspace, varied the loss function on this basis, or used attention mechanisms to directly match image region proposals with text phrases. None of these matches the semantics of images and text well. In this study, we propose a cross-media retrieval method based on both global and local representations. We construct a two-level cross-media network, containing subnets that handle global and local features, to explore better semantic matching between images and text. Specifically, we use a self-attention network to obtain a macro representation of the whole image, and also exploit fine-grained local patches with an attention mechanism. A two-level alignment framework then lets the two representations reinforce each other in learning cross-media representations. The innovation of this study lies in designing two kinds of similarity from these more comprehensive image and text features and summing them. Experimental results show that this method is effective for image-text retrieval: on the Flickr30K and MS-COCO datasets, the model achieves better recall than many current advanced cross-media retrieval models.
{"title":"DSAA Keynotes","authors":"","doi":"10.1109/dsaa.2019.00011","DOIUrl":"https://doi.org/10.1109/dsaa.2019.00011","url":null,"abstract":"","PeriodicalId":416037,"journal":{"name":"2019 IEEE International Conference on Data Science and Advanced Analytics (DSAA)","volume":"15 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120905338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Don't Cry Wolf
Philip E. Brown, T. Dasu, Y. Kanza, E. Koutsofios, R. Malik, D. Srivastava
DOI: 10.1109/DSAA46601.2019.9062728
Real-world anomaly management systems oversee thousands of dynamic data streams and generate an overwhelming number of alerts. As a consequence, important alerts often go unnoticed until there is a crisis. The absence of ground truth, and the fact that the streams are constantly changing (new content, new applications, software and hardware changes), makes assessing the value of alerts difficult. To identify groups of important and actionable alerts, we propose: (1) superalerts that reflect characteristics of persistence, pervasiveness, and priority; (2) three types of superalerting based on three types of aggregations; and (3) corresponding metrics for evaluating them. We demonstrate our approach using real-world entertainment data streams.
Range Analysis and Applications to Root Causing
Z. Khasidashvili, A. Norman
DOI: 10.1109/DSAA.2019.00045
We propose a supervised learning algorithm that derives features explaining the response variable better than the original features. Moreover, when positive and negative samples carry meaning, our aim is to derive features that explain the positive samples, or subsets of positive samples that share the same root cause. Each derived feature represents a single- or multi-dimensional subspace of the feature space, where each dimension is specified as a feature-range pair for numeric features and as a feature-level pair for categorical features. Unlike most rule learning and subgroup discovery algorithms, the response variable can be numeric, and our algorithm does not require discretization of the response. The algorithm has been applied successfully to numerous real-life root-causing tasks in chip design, manufacturing, and validation at Intel.
Sensor-Based Human Activity Mining Using Dirichlet Process Mixtures of Directional Statistical Models
L. Fang, Juan Ye, S. Dobson
DOI: 10.1109/DSAA.2019.00030
We have witnessed an increasing number of activity-aware applications deployed in real-world environments, including smart homes and mobile healthcare. The key enabler of these applications is sensor-based human activity recognition: recognising and analysing human daily activities from wearable and ambient sensors. With the power of machine learning we can recognise complex correlations between various types of sensor data and the activities being observed. However, challenges remain: (1) such methods often rely on a large amount of labelled training data to build the model, and (2) they cannot dynamically adapt the model to activity patterns that emerge or change over time. To address these challenges directly, we propose a Bayesian nonparametric model, a Dirichlet process mixture of conditionally independent von Mises-Fisher models, to enable both unsupervised and semi-supervised dynamic learning of human activities. The model adapts itself to evolving activity patterns without human intervention, and the learning results can be used to reduce the annotation effort. We evaluate our approach on real-world, third-party smart home datasets and demonstrate significant improvements over state-of-the-art techniques in both unsupervised and supervised settings.
Colorwall: An Embedded Temporal Display of Bibliographic Data
Jing Ming, Li Zhang
DOI: 10.1109/DSAA.2019.00063
A bibliographic data set is often visualized as a network to depict relationships among authors. However, static networks display only minimal information when a dataset contains temporal features. This paper proposes an embedded network visualization that presents concealed temporal patterns in a data set and leverages multiple intelligent filters to reduce occlusion. We compare different graphing styles, such as feature representation and time direction, and then determine the best approach for displaying temporal features. We demonstrate the usability of our approach with case studies and an evaluation on the IEEE InfoVis and VAST conference dataset.