Learning Document Labels from Enriched Click Graphs
Lan Nie, Zhigang Hua, Xiaofeng He, S. Gaffney. 2010 IEEE International Conference on Data Mining Workshops. doi:10.1109/ICDMW.2010.190

Document classification plays an increasingly important role in extracting and organizing knowledge; however, Web document classification is hindered by the huge number of Web documents and the limited human resources available for labeling training data. To obtain sufficient training data in a cost-efficient way, we propose a semi-supervised learning approach that predicts a document's class label by mining the click graph. To overcome the sparseness of the click graph, we enrich it with hyperlinks between Web documents. Content-based constraints are further added to regularize the graph. The resulting graph unifies three data sources: click-through data, hyperlinks, and content relevance. Starting from a very small seed set of manually labeled documents, we automatically discover a large number of relevant documents by applying a Markov random walk model to the enriched click graph. The pages with the highest confidence scores are added to the training data for classifier training. We investigate various combinations of the three sources and conduct extensive experiments on six typical Web classification tasks. The experimental results show that a click graph enriched with hyperlink and content information can significantly improve classification quality across multiple tasks with only minimal human labeling cost.
Learning Restricted Bayesian Network Classifiers with Mixed Non-i.i.d. Sampling
Zhongfeng Wang, Zhihai Wang, Bin Fu. 2010 IEEE International Conference on Data Mining Workshops. doi:10.1109/ICDMW.2010.199

Generally, more data increases statistical power. However, many algorithms in the data mining community focus only on small samples, because as the sample size grows the data are not necessarily identically distributed, even when generated by a common data-generating mechanism. In this paper, we show that restricted Bayesian network classifiers are robust even when the training data come from non-i.i.d. sampling. Empirical studies show that these algorithms perform as well as methods that combine independent experimental results through statistical techniques.
Towards a Reliable Framework of Uncertainty-Based Group Decision Support System
J. Chai, J. Liu. 2010 IEEE International Conference on Data Mining Workshops. doi:10.1109/ICDMW.2010.80

This study proposes a framework for an Uncertainty-based Group Decision Support System (UGDSS). It provides a platform for multiple-criteria decision analysis in six aspects: (1) decision environment, (2) decision problem, (3) decision group, (4) decision conflict, (5) decision schemes, and (6) group negotiation. Built on multiple artificial intelligence technologies, the framework provides reliable support for the comprehensive manipulation of applications and advanced decision approaches through the design of an integrated multi-agent architecture.
Clustering Performance on Evolving Data Streams: Assessing Algorithms and Evaluation Measures within MOA
P. Kranen, Hardy Kremer, Timm Jansen, T. Seidl, A. Bifet, G. Holmes, B. Pfahringer. 2010 IEEE International Conference on Data Mining Workshops. doi:10.1109/ICDMW.2010.17

In today's applications, evolving data streams are ubiquitous. Stream clustering algorithms were introduced to gain useful knowledge from these streams in real time. The quality of the resulting clusterings, i.e., how well they reflect the data, can be assessed by evaluation measures. A multitude of stream clustering algorithms and evaluation measures have been introduced in the literature; however, until now there has been no general tool for directly comparing the different algorithms or the evaluation measures. In our demo, we present a novel experimental framework for both tasks. It offers the means for extensive evaluation and visualization, and is an extension of the Massive Online Analysis (MOA) software environment released under the GNU GPL License.
A Convex Combination of Models for Predicting Road Traffic
Carlos J. Gil Bellosta. 2010 IEEE International Conference on Data Mining Workshops. doi:10.1109/ICDMW.2010.23

This paper describes an approach to the road traffic prediction problem in Warsaw, posed as a data mining competition at IEEE ICDM 2010. We describe a solution based on a convex combination of models, each mining a different source of information within the data. The convex combination allows the final model to compensate for the highly uncorrelated errors of the underlying models and achieve higher prediction accuracy.
Distributed Flow Algorithms for Scalable Similarity Visualization
Novi Quadrianto, Dale Schuurmans, Alex Smola. 2010 IEEE International Conference on Data Mining Workshops. doi:10.1109/ICDMW.2010.120

We describe simple yet scalable and distributed algorithms for solving the maximum flow problem and its minimum-cost flow variant, motivated by problems of interest in object similarity visualization. We formulate the fundamental problem as a convex-concave saddle point problem and show that it can be solved efficiently by a first-order method or by exploiting faster quasi-Newton steps. Our proposed approach costs at most O(|E|) per iteration for a graph with |E| edges. Further, the number of required iterations can be shown to be independent of the number of edges for the first-order approximation method. We present experimental results in two applications: mosaic generation and color-similarity-based image layout.
Less Effort, More Outcomes: Optimising Debt Recovery with Decision Trees
Yanchang Zhao, H. Bohlscheid, Shanshan Wu, Longbing Cao. 2010 IEEE International Conference on Data Mining Workshops. doi:10.1109/ICDMW.2010.114

This paper presents a real-world application of data mining techniques to optimise debt recovery in social security. The traditional method of contacting a customer to put a debt recovery schedule in place has been an outbound phone call, with customers by and large chosen at random. This inefficient method of selecting customers has existed for years; to improve the process, decision trees were built to model debt recovery and predict customers' responses if contacted by phone. Test results on historical data show that the model effectively ranks customers by their likelihood of entering a successful debt recovery repayment schedule. By contacting only the top 20 per cent of customers in debt, instead of all of them, approximately 50 per cent of repayments would still be received.
Challenges in Scheduling Aggregation in Cyberphysical Information Processing Systems
James L. Horey. 2010 IEEE International Conference on Data Mining Workshops. doi:10.1109/ICDMW.2010.96

Data aggregation is an important element of information processing systems, including MapReduce clusters and cyber-physical networks. Unlike in simple sensor networks, all the data in such systems must eventually be aggregated. Our goal is to lower overall latency in these systems by intelligently scheduling aggregation on intermediate routing nodes. To understand the challenges of constructing a distributed scheduler that minimizes latency, we developed a simple model of wireless information processing systems along with a simulation of the model. Unlike previous models, ours explicitly accounts for link latency and computation time, and it considers heterogeneous computing capabilities. We tested latency while randomly assigning aggregation computation to nodes in the network. Preliminary results indicate that when computation time exceeds transmission time, in-network aggregation can have a large effect, reducing latency by 50% or more. However, naive scheduling can be detrimental: when the root node (a.k.a. the base station) is faster than the other nodes, latency can increase with increased coverage, and these effects vary with the number of nodes present.
Geospatial Schema Matching with High-Quality Cluster Assurance and Location Mining from Social Network
L. Khan, J. Partyka, Satyen Abrol, B. Thuraisingham. 2010 IEEE International Conference on Data Mining Workshops. doi:10.1109/ICDMW.2010.204

In this talk, we will present how semantics can improve the quality of the data mining process. We focus first on geospatial schema matching with high-quality cluster assurance, and then on location mining from social networks. Regarding the first problem, resolving semantic heterogeneity across distinct data sources remains a highly relevant problem in the GIS domain, requiring innovative solutions. Our approach, called GSim, semantically aligns tables from the respective GIS databases by first choosing attributes for comparison. We then examine their instances and calculate a similarity value between them, called Entropy-Based Distribution (EBD), by combining two separate methods. Our primary method discerns the geographic types of the compared attributes' instances. If geographic type matching is not possible, we apply a generic schema matching method that employs normalized Google distance in conjunction with a clustering process. GSim proceeds by deriving clusters from attribute instances based on content and, where possible, on their geographic types gleaned from a gazetteer. However, clustering algorithms may produce inconsistent results owing to variable cluster quality, so we apply novel metrics measuring cluster distance and purity to guarantee high-quality homogeneous clusters. The end result is a wholly geospatial similarity value, expressed as an EBD. We show the effectiveness of our approach over the traditional n-gram approach across multi-jurisdictional datasets. Regarding the second problem, we predict a user's location from their social network (e.g., Twitter) using the strong theoretical framework of semi-supervised learning; in particular, we employ the label propagation algorithm. For privacy and security reasons, most people on social networking sites like Twitter are unwilling to specify their locations explicitly. On the city locations returned by the algorithm, the system performs agglomerative clustering based on geospatial proximity and the locations' individual scores, returning clusters of locations with higher confidence. We perform extensive experiments to show the validity of our system in terms of both accuracy and running time; experimental results show that our approach outperforms the content-based geo-tagging approach on both.
Insights from Applying Sequential Pattern Mining to E-commerce Click Stream Data
Arthur Pitman, M. Zanker. 2010 IEEE International Conference on Data Mining Workshops. doi:10.1109/ICDMW.2010.31

Previous sequential pattern mining algorithms have focused on improving runtime and memory consumption without considering the specifics of different data sources or application scenarios. In this paper, we mine closed sequential patterns from website click streams by extending the state-of-the-art BIDE (BI-Directional Extension) algorithm to identify domain-specific rule sets. In particular, we exploit sequential patterns for landing page personalization and product recommendation in the e-commerce domain; our contribution is therefore both algorithmic and empirical. Using a dataset derived from an online store for nutritional supplements, we evaluate the effectiveness of different sources of domain knowledge, such as product hierarchies and search-word categorizations, for enhancing predictions of users' conversion actions. Furthermore, we examine the recommender's performance for two important user subgroups: those who use search functionality and those who do not. Our findings indicate, for instance, that search terms alone are already quite effective for predicting users' add-to-basket actions, and that using additional domain knowledge to generate multi-dimensional rules does not always improve accuracy.