M. Shrestha, Howard J. Hamilton, Yiyu Yao, K. Konkel, Liqiang Geng
Peculiar data are objects that are relatively few in number and significantly different from the other objects in a data set. In this paper, we propose the PDD framework for detecting multiple categories of peculiar data. This framework provides an extensible set of perspectives for viewing data, currently including viewing data as a set of records, attributes, frequencies, intervals, sequences, or sequences of changes. By using these six views of the data, multiple categories of peculiar data can be detected to reveal different aspects of the data. For each view, the framework provides an extensible set of peculiarity measures to detect outliers and other kinds of peculiar data. The PDD framework has been implemented for Oracle and Access. Experiments are reported for data sets concerning Regina weather and NHL hockey.
{"title":"The PDD Framework for Detecting Categories of Peculiar Data","authors":"M. Shrestha, Howard J. Hamilton, Yiyu Yao, K. Konkel, Liqiang Geng","doi":"10.1109/ICDM.2006.159","DOIUrl":"https://doi.org/10.1109/ICDM.2006.159","url":null,"abstract":"Peculiar data are objects that are relatively few in number and significantly different from the other objects in a data set. In this paper, we propose the PDD framework for detecting multiple categories of peculiar data. This framework provides an extensible set of perspectives for viewing data, currently including viewing data as a set of records, attributes, frequencies, intervals, sequences, or sequences of changes. By using these six views of the data, multiple categories of peculiar data can be detected to reveal different aspects of the data. For each view, the framework provides an extensible set of peculiarity measures to detect outliers and other kinds of peculiar data. The PDD framework has been implemented for Oracle and Access. Experiments are reported for data sets concerning Regina weather and NHL hockey.","PeriodicalId":356443,"journal":{"name":"Sixth International Conference on Data Mining (ICDM'06)","volume":"05 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127325740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Given a collection of trajectories of moving objects of different types (e.g., pumas, deer, vultures), we introduce the problem of discovering collocation episodes in them (e.g., if a puma is moving near a deer, then a vulture is also going to move close to the same deer with high probability within the next 3 minutes). Collocation episodes capture the inter-movement regularities among different types of objects. We formally define the problem of mining collocation episodes and propose two scalable algorithms for its efficient solution. We empirically evaluate the performance of the proposed methods using synthetically generated data that emulate real-world object movements.
{"title":"Discovery of Collocation Episodes in Spatiotemporal Data","authors":"H. Cao, N. Mamoulis, D. Cheung","doi":"10.1109/ICDM.2006.59","DOIUrl":"https://doi.org/10.1109/ICDM.2006.59","url":null,"abstract":"Given a collection of trajectories of moving objects of different types (e.g., pumas, deer, vultures), we introduce the problem of discovering collocation episodes in them (e.g., if a puma is moving near a deer, then a vulture is also going to move close to the same deer with high probability within the next 3 minutes). Collocation episodes capture the inter-movement regularities among different types of objects. We formally define the problem of mining collocation episodes and propose two scalable algorithms for its efficient solution. We empirically evaluate the performance of the proposed methods using synthetically generated data that emulate real-world object movements.","PeriodicalId":356443,"journal":{"name":"Sixth International Conference on Data Mining (ICDM'06)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129036244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Association rule-based classifiers have recently emerged as competitive classification systems. However, there are still deficiencies that hinder their performance. One deficiency is the use of rules in the classification stage. Current systems assign classes to new objects based on the best rule applied or on some predefined scoring of multiple rules. In this paper we propose a new technique where the system automatically learns how to use the rules. We achieve this by developing a two-stage classification model. First, we use association rule mining to discover classification rules. Second, we employ another learning algorithm to learn how to use these rules in the prediction process. Our two-stage approach outperforms C4.5 and RIPPER on the UCI datasets in our study, and outperforms other rule-learning methods on more than half the datasets. The versatility of our method is also demonstrated by applying it to text classification, where it equals the performance of the best-known systems for this task, SVMs.
{"title":"Learning to Use a Learned Model: A Two-Stage Approach to Classification","authors":"M. Antonie, Osmar R Zaiane, R. Holte","doi":"10.1109/ICDM.2006.97","DOIUrl":"https://doi.org/10.1109/ICDM.2006.97","url":null,"abstract":"Association rule-based classifiers have recently emerged as competitive classification systems. However, there are still deficiencies that hinder their performance. One deficiency is the use of rules in the classification stage. Current systems assign classes to new objects based on the best rule applied or on some predefined scoring of multiple rules. In this paper we propose a new technique where the system automatically learns how to use the rules. We achieve this by developing a two-stage classification model. First, we use association rule mining to discover classification rules. Second, we employ another learning algorithm to learn how to use these rules in the prediction process. Our two-stage approach outperforms C4.5 and RIPPER on the UCI datasets in our study, and outperforms other rule-learning methods on more than half the datasets. The versatility of our method is also demonstrated by applying it to text classification, where it equals the performance of the best-known systems for this task, SVMs.","PeriodicalId":356443,"journal":{"name":"Sixth International Conference on Data Mining (ICDM'06)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129192499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this article, we present trajectory representation algorithms for tangible features found in temporally varying scientific datasets. Rather than modeling the features as points, we take attributes such as the shape and extent of the feature into account. Our contention is that these attributes play an important role in understanding the temporal evolution and interactions among features. The proposed representation scheme is based on motion and shape parameters, including linear velocity, angular velocity, etc. We use these parameters to segment the trajectory instead of relying on the geometry of the trajectory. We evaluate our algorithms on real datasets originating from different domains. We show that the motion and shape parameters are estimated accurately by using them to reconstruct the trajectories with high fidelity. Finally, we present performance and scalability results.
{"title":"On Trajectory Representation for Scientific Features","authors":"S. Mehta, S. Parthasarathy, R. Machiraju","doi":"10.1109/ICDM.2006.120","DOIUrl":"https://doi.org/10.1109/ICDM.2006.120","url":null,"abstract":"In this article, we present trajectory representation algorithms for tangible features found in temporally varying scientific datasets. Rather than modeling the features as points, we take attributes like shape and extent of the feature into account. Our contention is that these attributes play an important role in understanding the temporal evolution and interactions among features. The proposed representation scheme is based on motion and shape parameters including linear velocity, angular velocity, etc. We use these parameters to segment the trajectory instead of relying on the geometry of the trajectory. We evaluate our algorithms on real datasets originating from different domains. We show the accuracy of the motion and shape parameter estimation by reconstructing the trajectories with high accuracy. Finally, we present performance and scalability results.","PeriodicalId":356443,"journal":{"name":"Sixth International Conference on Data Mining (ICDM'06)","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117234303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feature extraction techniques have been used to handle high-dimensional data, and experimental studies often show improved classification accuracies. Unfortunately, very few studies provide concrete evidence of the effectiveness of these feature extraction techniques, which largely remain black boxes. In this study, we design and implement a visualization prototype system that allows users to look into the classification processes, explore the links among the original and extracted features in different classifiers, and examine why and how an instance is correctly or incorrectly classified. We demonstrate the prototype's capabilities by combining a feature extraction method based on hierarchical feature space clustering with J48 decision tree classifiers and perform experiments on a real hyperspectral remote sensing image dataset.
{"title":"Opening the Black Box of Feature Extraction: Incorporating Visualization into High-Dimensional Data Mining Processes","authors":"Jianting Zhang, L. Gruenwald","doi":"10.1109/ICDM.2006.121","DOIUrl":"https://doi.org/10.1109/ICDM.2006.121","url":null,"abstract":"Feature extraction techniques have been used to handle high-dimensional data, and experimental studies often show improved classification accuracies. Unfortunately, very few studies provide concrete evidence of the effectiveness of these feature extraction techniques, which largely remain black boxes. In this study, we design and implement a visualization prototype system that allows users to look into the classification processes, explore the links among the original and extracted features in different classifiers, and examine why and how an instance is correctly or incorrectly classified. We demonstrate the prototype's capabilities by combining a feature extraction method based on hierarchical feature space clustering with J48 decision tree classifiers and perform experiments on a real hyperspectral remote sensing image dataset.","PeriodicalId":356443,"journal":{"name":"Sixth International Conference on Data Mining (ICDM'06)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122700583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
O. Verscheure, M. Vlachos, A. Anagnostopoulos, P. Frossard, E. Bouillet, Philip S. Yu
Technologies that use the Internet to deliver voice communications have the potential to reduce costs and improve access to communications services around the world. However, these new technologies pose several challenges in terms of confidentiality of the conversations and anonymity of the conversing parties. Call authentication and encryption techniques provide a way to protect confidentiality, while anonymity is typically preserved by an anonymizing service (anonymous call). This work studies the feasibility of revealing pairs of anonymous and encrypted conversing parties (caller/callee pairs of streams) by exploiting the vulnerabilities inherent to VoIP systems. In particular, by exploiting the aperiodic inter-departure times of VoIP packets, we can reduce each VoIP stream to a binary time series. We first define a simple yet intuitive metric to gauge the correlation between two VoIP binary streams. Then we propose an effective technique that progressively pairs conversing parties with high accuracy and in a limited amount of time. Our metric and method are justified analytically and validated by experiments on a very large standard corpus of conversational speech. We obtain high pairing accuracy, reaching 97% after 5 minutes of voice conversation.
{"title":"Finding \"Who Is Talking to Whom\" in VoIP Networks via Progressive Stream Clustering","authors":"O. Verscheure, M. Vlachos, A. Anagnostopoulos, P. Frossard, E. Bouillet, Philip S. Yu","doi":"10.1109/ICDM.2006.72","DOIUrl":"https://doi.org/10.1109/ICDM.2006.72","url":null,"abstract":"Technologies that use the Internet to deliver voice communications have the potential to reduce costs and improve access to communications services around the world. However, these new technologies pose several challenges in terms of confidentiality of the conversations and anonymity of the conversing parties. Call authentication and encryption techniques provide a way to protect confidentiality, while anonymity is typically preserved by an anonymizing service (anonymous call). This work studies the feasibility of revealing pairs of anonymous and encrypted conversing parties (caller/callee pairs of streams) by exploiting the vulnerabilities inherent to VoIP systems. In particular, by exploiting the aperiodic inter-departure times of VoIP packets, we can reduce each VoIP stream to a binary time series. We first define a simple yet intuitive metric to gauge the correlation between two VoIP binary streams. Then we propose an effective technique that progressively pairs conversing parties with high accuracy and in a limited amount of time. Our metric and method are justified analytically and validated by experiments on a very large standard corpus of conversational speech. We obtain high pairing accuracy, reaching 97% after 5 minutes of voice conversation.","PeriodicalId":356443,"journal":{"name":"Sixth International Conference on Data Mining (ICDM'06)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123011769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
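The core idea of pairing encrypted VoIP streams by their talk activity can be illustrated with a minimal sketch. The binarization window and the agreement metric below are illustrative assumptions, not the paper's exact measure: conversing streams share correlated talk-spurt/silence patterns, so their binarized activity vectors agree far more often than unrelated ones.

```python
import numpy as np

def binarize(packet_times, window=0.02, duration=2.0):
    """Turn packet departure timestamps (seconds) into a binary activity
    series: 1 if at least one packet left during the window, else 0."""
    n = int(duration / window)
    series = np.zeros(n, dtype=int)
    for t in packet_times:
        idx = int(t / window)
        if 0 <= idx < n:
            series[idx] = 1
    return series

def similarity(a, b):
    """Fraction of windows where two binary streams agree — a simplified
    stand-in for the paper's correlation metric. A caller/callee pair
    carrying the same conversation should score near 1."""
    return float(np.mean(a == b))
```

For example, two streams with near-identical activity score 1.0, while a stream active at a different time scores lower, so progressively discarding low-scoring candidates narrows down the matching pair.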
How closely related are two nodes in a graph? How can this score be computed quickly on huge, disk-resident, real graphs? Random walk with restart (RWR) provides a good relevance score between two nodes in a weighted graph, and it has been successfully used in numerous settings, such as automatic captioning of images, generalizations to "connection subgraphs", personalized PageRank, and many more. However, the straightforward implementations of RWR do not scale to large graphs, requiring either quadratic space and cubic pre-computation time, or slow response time on queries. We propose fast solutions to this problem. The heart of our approach is to exploit two important properties shared by many real graphs: (a) linear correlations and (b) block-wise, community-like structure. We exploit the linearity by using low-rank matrix approximation, and the community structure by graph partitioning, followed by the Sherman-Morrison lemma for matrix inversion. Experimental results on the Corel image and DBLP datasets demonstrate that our proposed methods achieve significant savings over the straightforward implementations: they save several orders of magnitude in pre-computation and storage cost, and achieve up to 150x speedup with 90%+ quality preservation.
{"title":"Fast Random Walk with Restart and Its Applications","authors":"Hanghang Tong, C. Faloutsos, Jia-Yu Pan","doi":"10.1109/ICDM.2006.70","DOIUrl":"https://doi.org/10.1109/ICDM.2006.70","url":null,"abstract":"How closely related are two nodes in a graph? How can this score be computed quickly on huge, disk-resident, real graphs? Random walk with restart (RWR) provides a good relevance score between two nodes in a weighted graph, and it has been successfully used in numerous settings, such as automatic captioning of images, generalizations to the \"connection subgraphs\", personalized PageRank, and many more. However, the straightforward implementations of RWR do not scale to large graphs, requiring either quadratic space and cubic pre-computation time, or slow response time on queries. We propose fast solutions to this problem. The heart of our approach is to exploit two important properties shared by many real graphs: (a) linear correlations and (b) block-wise, community-like structure. We exploit the linearity by using low-rank matrix approximation, and the community structure by graph partitioning, followed by the Sherman-Morrison lemma for matrix inversion. Experimental results on the Corel image and DBLP datasets demonstrate that our proposed methods achieve significant savings over the straightforward implementations: they can save several orders of magnitude in pre-computation and storage cost, and they achieve up to 150x speedup with 90%+ quality preservation.","PeriodicalId":356443,"journal":{"name":"Sixth International Conference on Data Mining (ICDM'06)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130806345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
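For reference, the relevance score that RWR computes can be sketched with plain power iteration. This naive version is the slow on-the-fly baseline that the paper's low-rank and partition-based methods accelerate, not the proposed fast algorithm itself; the restart probability 0.15 is a common illustrative choice:

```python
import numpy as np

def rwr_scores(A, start, c=0.15, iters=200):
    """Random walk with restart by power iteration.
    A: (n x n) adjacency matrix; start: query node index;
    c: restart probability. Returns steady-state visiting
    probabilities, used as relevance scores w.r.t. `start`."""
    n = A.shape[0]
    # Column-normalize A into a transition matrix.
    W = A / np.maximum(A.sum(axis=0, keepdims=True), 1e-12)
    e = np.zeros(n)
    e[start] = 1.0          # restart distribution
    r = e.copy()
    for _ in range(iters):
        # With prob. 1-c follow an edge, with prob. c jump back to start.
        r = (1 - c) * (W @ r) + c * e
    return r
```

On a small path graph 0-1-2-3, the scores sum to 1 and decay with distance from the start node, which is the "closeness" notion the paper speeds up.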
With advances in technology, a flood of data can be produced in many applications such as sensor networks and Web click streams. This calls for efficient techniques for extracting useful information from streams of data. In this paper, we propose a novel tree structure, called DSTree (Data Stream Tree), that captures important data from the streams. By exploiting its structural properties, the DSTree can be easily maintained and mined for frequent itemsets as well as other patterns such as constrained itemsets.
{"title":"DSTree: A Tree Structure for the Mining of Frequent Sets from Data Streams","authors":"C. Leung, Quamrul I. Khan","doi":"10.1109/ICDM.2006.62","DOIUrl":"https://doi.org/10.1109/ICDM.2006.62","url":null,"abstract":"With advances in technology, a flood of data can be produced in many applications such as sensor networks and Web click streams. This calls for efficient techniques for extracting useful information from streams of data. In this paper, we propose a novel tree structure, called DSTree (Data Stream Tree), that captures important data from the streams. By exploiting its nice properties, the DSTree can be easily maintained and mined for frequent itemsets as well as various other patterns like constrained itemsets.","PeriodicalId":356443,"journal":{"name":"Sixth International Conference on Data Mining (ICDM'06)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126660586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Heterogeneous object co-clustering has become an important research topic in data mining. Early work in this area mainly addressed two types of heterogeneous data (pair-wise co-clustering), while more recent work has turned to multiple types of heterogeneous data (high-order co-clustering). In this paper, we study the high-order co-clustering of objects with star-structured interrelationships, i.e., where a central type of objects connects the other types of objects. This setting models many real-world applications well, such as the co-clustering of Web images, their low-level visual features, and the surrounding text. We use a tripartite graph to represent the interrelationships among the different objects, and propose a consistent information theory that yields an effective algorithm for obtaining the co-clusters of the different types of objects. Experiments on a Web image dataset show that our proposed algorithm is a better choice than previous work on heterogeneous object co-clustering.
{"title":"Star-Structured High-Order Heterogeneous Data Co-clustering Based on Consistent Information Theory","authors":"Bin Gao, Tie-Yan Liu, Wei-Ying Ma","doi":"10.1109/ICDM.2006.154","DOIUrl":"https://doi.org/10.1109/ICDM.2006.154","url":null,"abstract":"Heterogeneous object co-clustering has become an important research topic in data mining. In early years of this research, people mainly worked on two types of heterogeneous data (denoted by pair-wise co-clustering); while recently more and more attention was paid to multiple types of heterogeneous data (denoted by high-order co-clustering). In this paper, we studied the high-order co-clustering of objects with star-structured interrelationship, i.e., there is a central type of objects that connects the other types of objects. Actually, this case could be a very good model for many real-world applications, such as the co-clustering of Web images, their low-level visual features, and the surrounding text. We used a tripartite graph to represent the interrelationships among different objects, and proposed a consistent information theory which generates an effective algorithm to obtain the co-clusters of different types of objects. Experiments on a Web image dataset show that our proposed algorithm is a better choice compared with previous work on heterogeneous object co-clustering.","PeriodicalId":356443,"journal":{"name":"Sixth International Conference on Data Mining (ICDM'06)","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128156216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kun Zhang, W. Fan, B. Buckles, Xiao-Xia Yuan, Zujia Xu
There has been increasing interest in designing better probability estimation trees, or PETs, for ranking and probability estimation. Capable of generating class membership probabilities, PETs have been shown to be highly accurate and flexible for many difficult problems, such as cost-sensitive learning and matching skewed distributions. There are a large number of PET algorithms available, and about ten of them are well-known. This large number provides an advantage, but it also creates confusion in practice. One would ask: "Given a new dataset, which algorithm should be chosen, and what performance should be expected? What explains good or bad performance under different situations?" In this paper, we systematically answer these important questions for the first time by conducting a large-scale empirical comparison of five popular PETs, examining their AUC, MSE, and error rate "learning curves" (instead of training-test split based cross-validation). Using the maximum AUC achieved by any of the evaluated probability estimation tree algorithms, we demonstrate that the preference of a probability estimation tree on different evaluation metrics can be accurately characterized by the "signal-noise separability" of the dataset, as well as some other observable statistics of the dataset explained further in the paper. Moreover, in order to understand their relative performance, many important and previously unrevealed properties of each PET's mechanism and heuristics are analyzed and evaluated. Importantly, a practical guide for choosing the most appropriate PET algorithm given a new data mining problem is provided.
{"title":"Discovering Unrevealed Properties of Probability Estimation Trees: On Algorithm Selection and Performance Explanation","authors":"Kun Zhang, W. Fan, B. Buckles, Xiao-Xia Yuan, Zujia Xu","doi":"10.1109/ICDM.2006.58","DOIUrl":"https://doi.org/10.1109/ICDM.2006.58","url":null,"abstract":"There has been increasing interest in designing better probability estimation trees, or PETs, for ranking and probability estimation. Capable of generating class membership probabilities, PETs have been shown to be highly accurate and flexible for many difficult problems, such as cost-sensitive learning and matching skewed distributions. There are a large number of PET algorithms available, and about ten of them are well-known. This large number provides an advantage, but it also creates confusion in practice. One would ask \"given a new dataset, which algorithm to choose and what performance to expect and not to expect? What are the reasons to explain either good or bad performance under different situations?\" In this paper, we systematically, for the first time, answer these important questions by conducting a large-scale empirical comparison of five popular PETs by examining their AUC, MSE and error rate \"learning curves\" (instead of training-test split based cross-validation). Using the maximum AUC achieved by any of the evaluated probability estimation tree algorithms, we demonstrate that the preference of a probability estimation tree on different evaluation metrics can be accurately characterized by the \"signal-noise separability\" of the dataset, as well as some other observable statistics of the dataset explained further in the paper. Moreover, in order to understand their relative performance, many important and previously unrevealed properties of each PET's mechanism and heuristics are analyzed and evaluated. Importantly, a practical guide for choosing the most appropriate PET algorithm given a new data mining problem is provided.","PeriodicalId":356443,"journal":{"name":"Sixth International Conference on Data Mining (ICDM'06)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121584678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}