The increasing number of test tasks leads to rapid growth of test data, and using these data effectively and intuitively has become a key difficulty in test data processing. Data visualization displays data in the form of graphics, images, and other visual modes; it can effectively improve data processing and interpretation capabilities and has become an important means of test data processing. Firstly, the test process, the flow of test data processing, and the visualization requirements of test data are briefly introduced. Secondly, the related technologies of data visualization are analyzed in depth, including the basic flow of data visualization, its interaction methods, and its implementation tools. Thirdly, the types and characteristics of test data are analyzed, the basic flow of test data visualization is proposed, and the visualization of test data is presented. Lastly, for the dynamic geographic data of the test equipment, visual analysis and trajectory display are achieved. This work can provide a reference for further research on test data visualization.
{"title":"Research and Application of the Test Data Visualization","authors":"Hui Yan, Junfeng Wang, Chensen Xia","doi":"10.1109/DSC.2017.110","DOIUrl":"https://doi.org/10.1109/DSC.2017.110","url":null,"abstract":"The increasing number of test tasks lead to the rapid growth of test data. How to effectively and intuitively use the data has become the difficulty of the test data processing. Data visualization is the data is displayed by the modes of graphics and images, etc. it can effectively improve the data processing and interpretation capabilities, currently, and the data visualization has become an important means of the test data processing. Firstly, the test process, the flow of test data processing and the visualization requirements of the test data are briefly introduced. Secondly, the related technologies of the data visualization are deeply analyzed, the basic flow of data visualization, the interactive methods of data visualization and the realization tools of data visualization included. Thirdly, the types and characteristics of the test data are deeply analyzed, the basic flow the test data visualization is proposed, and the visualization of test data is presented. Lastly, for the dynamic geographical data of the test equipment, its visualization analysis and trajectory display are achieved. So, it can provide a reference for further research of the test data visualization.","PeriodicalId":427998,"journal":{"name":"2017 IEEE Second International Conference on Data Science in Cyberspace (DSC)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128315445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hao Ge, Jinchao Huang, C. Di, Jianhua Li, Shenghong Li
The influence maximization problem aims at targeting a subset of entities in a network such that the influence cascade is maximized. It has been proved to be NP-hard, and many approximate solutions have been proposed. The state-of-the-art approach is known as CELF, which evaluates the marginal influence spread of each entity by Monte-Carlo simulation and picks the most influential entity in each round. However, the cost of Monte-Carlo simulation is proportional to the scale of the network, which limits the application of CELF in real-world networks. Learning automata (LA) are a promising technique and a potential solution to many engineering problems. In this paper, we extend the confidence-interval-estimator-based learning automaton to the S-model environment and, based on this, propose an end-to-end approach for influence maximization. Simulations on three real-world networks demonstrate that the proposed approach attains influence spread as large as CELF, with higher computational efficiency.
{"title":"Learning Automata Based Approach for Influence Maximization Problem on Social Networks","authors":"Hao Ge, Jinchao Huang, C. Di, Jianhua Li, Shenghong Li","doi":"10.1109/DSC.2017.54","DOIUrl":"https://doi.org/10.1109/DSC.2017.54","url":null,"abstract":"Influence maximization problem aims at targeting a subset of entities in a network such that the influence cascade being maximized. It is proved to be a NP-hard problem, and many approximate solutions have been proposed. The state-ofart approach is known as CELF, who evaluates the marginal influence spread of each entity by Monte-Carlo simulation and picks the most influential entity in each round. However, as the cost of Monte-Carlo simulations is in proportion to the scale of network, which limits the application of CELF in real-world networks. Learning automata (LA) is a promising technique potential solution to many engineering problem. In this paper, we extend the confidence interval estimator based learning automata to S-model environment, based on this, an end-to-end approach for influence maximization is proposed, simulation on three real-world networks demonstrate that the proposed approach attains as large influence spread as CELF, and with a higher computational efficiency.","PeriodicalId":427998,"journal":{"name":"2017 IEEE Second International Conference on Data Science in Cyberspace (DSC)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122522042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To extract key topics from news articles, this paper investigates a new method for constructing text vectors and improving the efficiency and accuracy of document clustering based on the Word2Vec model. It proposes a novel algorithm that combines the Jaccard similarity coefficient and inverse dimension frequency to calculate the importance degree of each dimension of the text vector with respect to the corresponding document. Text vectors are constructed from these importance degrees, which improves the accuracy of text clustering and key topic extraction. The algorithm is also implemented on MapReduce, which improves its efficiency.
{"title":"Extracting Topics Based on Word2Vec and Improved Jaccard Similarity Coefficient","authors":"Chunzi Wu, Bai Wang","doi":"10.1109/DSC.2017.70","DOIUrl":"https://doi.org/10.1109/DSC.2017.70","url":null,"abstract":"To extract key topics from news articles, this paper researches into a new method to discover an efficient way to construct text vectors and improve the efficiency and accuracy of document clustering based on Word2Vec model. This paper proposes a novel algorithm, which combines Jaccard similarity coefficient and inverse dimension frequency to calculate the importance degree between each dimension in text vector and the corresponding document. Text vectors is constructed based on the importance degree and improve the accuracy of text cluster and key topics extraction. The algorithm is also implemented on MapReduce and the efficiency is improved.","PeriodicalId":427998,"journal":{"name":"2017 IEEE Second International Conference on Data Science in Cyberspace (DSC)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127629957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chengsen Ru, Shasha Li, Jintao Tang, Yi Gao, Ting Wang
Relation extraction is very useful for many applications and has attracted much attention. The dominant prior methods for relation extraction are supervised methods, which are relation-specific and limited by the availability of annotated training data. In this paper, we propose a method that uses hierarchical clustering to extract unbounded relations without relying on training data. The relation among entities in a sentence depends on the terms associated with the entities, and the terms on the expandPath capture the relations between the entities. Given a relation, although an expandPath may contain more than one dependency phrase, only the core dependency phrase describes the specific relation between the subject and the object. Our method uses heuristic rules to select the core dependency phrases and clusters entity pairs according to the similarity of these phrases, in order to avoid irrelevant information and capture the semantics of the relation between entities more precisely. Finally, our method automatically labels the relation clusters on the basis of the semantics of the core dependency phrases. The experimental results show that our method can cluster entity pairs that share the same relation more accurately and can generate appropriate labels for the relations.
{"title":"Open Relation Extraction Based on Core Dependency Phrase Clustering","authors":"Chengsen Ru, Shasha Li, Jintao Tang, Yi Gao, Ting Wang","doi":"10.1109/DSC.2017.91","DOIUrl":"https://doi.org/10.1109/DSC.2017.91","url":null,"abstract":"Relation extraction is very useful for many applications and has attracted much attention. The dominant prior methods for relation extraction were supervised methods which are relation-specific and limited by the availability of annotated training data. In this paper, we propose a method using hierarchical clustering to extract unbounded relations without relying on training data. The relation among entities in a sentence depends on the terms associated with the entities. Terms on the expandPath capture the relations between the entities. Given a relation, though an expandPath may have more than one dependency phrase, only the core dependency phrase describes the specific relation between the subject and the object. Our method uses heuristic rules to select the core dependency phrases and clusters entity pairs according to the similarity of the core dependency phrases in order to avoid irrelevant information and capture the semantics of the relation between entities more precisely. At last, our method automatically labels the relation clusters on basis of the semantics of core dependency phrases. The experimental results show that our method can cluster entity pairs which have the same relations more accurately and generate appropriate labels for the relations.","PeriodicalId":427998,"journal":{"name":"2017 IEEE Second International Conference on Data Science in Cyberspace (DSC)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127719590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the rapid development of the Internet, more and more people share and comment on the web to express their opinions. Predicting the popularity of recently emerging topics is a hot research problem, and many researchers are trying to uncover the laws of information diffusion hidden behind it. However, many models assume that information spreads in social networks with no external interference, and research on competitive diffusion is still at an early stage. The main contribution of this paper is to address the lack of work on popularity prediction based on multiple pieces of competing information, and to propose a prediction model based on a competitive matrix. The goal of this paper is to accurately estimate the final popularity of a given viral topic from observations of its historical popularity. The model is built mainly on the competitive matrix and the gradient descent method. An empirical study on Tencent News shows that the method provides better performance in popularity prediction.
{"title":"Predicting the Popularity of News Based on Competitive Matrix","authors":"Xiaomeng Wang, Binxing Fang, Hongli Zhang, XuanYu","doi":"10.1109/DSC.2017.88","DOIUrl":"https://doi.org/10.1109/DSC.2017.88","url":null,"abstract":"With the rapid development of network, more and more people share and comment on the web to express their mends. How to predict the popularity of topic happening recently is a hot topic and lots of people are trying to find out the law of information diffusion hidden in it. However, many models assume that information spreads with no external interference in social networks. The research on competitive diffusion is still at the primary stage. The main contribution is to solve the problem that there are few or no work for popularity prediction based on multi-information, and propose a predicting model based on competitive matrix. The goal of this paper is to accurately estimate the popularity for a given viral topic at final based on the observation of historical popularity of the topic. And this model is mainly based on the competitive matrix and gradient descent method. Also, the capability of this method provides a better performance in the popularity prediction according to an empirical study on Tencent News.","PeriodicalId":427998,"journal":{"name":"2017 IEEE Second International Conference on Data Science in Cyberspace (DSC)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114682348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jun Ren, Jinbo Xiong, Zhiqiang Yao, Rong Ma, Mingwei Lin
The K-means algorithm is an important type of clustering algorithm and the foundation of several data mining methods, but it carries the risk of privacy disclosure during clustering. In order to solve this problem, Blum et al. proposed a differentially private K-means algorithm, which can prevent privacy disclosure effectively. However, the availability of the clustering results is reduced due to the added noise. In this paper, we propose a novel DPLK-means algorithm based on differential privacy, which improves the selection of the initial center points by applying the differentially private K-means algorithm to each subset into which the original dataset is divided. Performance evaluation shows that, at the same privacy level, our algorithm improves the availability of clustering results compared to the existing differentially private K-means algorithm.
{"title":"DPLK-Means: A Novel Differential Privacy K-Means Mechanism","authors":"Jun Ren, Jinbo Xiong, Zhiqiang Yao, Rong Ma, Mingwei Lin","doi":"10.1109/DSC.2017.64","DOIUrl":"https://doi.org/10.1109/DSC.2017.64","url":null,"abstract":"K-means algorithm is an important type of clustering algorithm and the foundation of some data mining methods. But it has the risk of privacy disclosure in the process of clustering. In order to solve this problem, Blum et al. proposed a differential privacy K-means algorithm, which can prevent privacy disclosure effectively. However, the availability of clustering results is reduced due to the added noise. In this paper, we propose a novel DPLK-means algorithm based on differential privacy, which improves the selection of the initial center points through performing the differential privacy K-means algorithm to each subset divided by the original dataset. Performance evaluation shows that our algorithm improves the availability of clustering results compared to the existing differential privacy K-means algorithm at the same privacy level.","PeriodicalId":427998,"journal":{"name":"2017 IEEE Second International Conference on Data Science in Cyberspace (DSC)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117048075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cyber-Physical Systems (CPS) have attracted attention, research, and applications from governments, academia, and industry at home and abroad, and CPS have become an important element of the future deep integration of China's "two modernizations". This paper uses a Petri net model to describe the CPS information security risk evaluation process. By combining the analysis results of the Petri net model with the results of big data analysis related to CPS information security risk evaluation, the element index system and index weights for CPS information security risk evaluation are determined; an RBF neural network model is then used to construct an evaluation model that realizes quantitative evaluation of CPS information security risk. The Petri net model constructed in this paper can analyze the correlations among CPS information security risk evaluation elements, and its description of system risk is temporal, holistic, and diversified. The constructed index system and its weights adapt dynamically as the related big data change, which accords with the complexity and dynamic structure of CPS. This research provides guidance for CPS information security risk evaluation and has important practical significance and application value.
{"title":"CPS Information Security Risk Evaluation System Based on Petri Net","authors":"Yonggui Fu, Jian-ming Zhu, Sheng Gao","doi":"10.1109/DSC.2017.65","DOIUrl":"https://doi.org/10.1109/DSC.2017.65","url":null,"abstract":"Cyber Physical Systems(CPS) have achieved attention, research and applications from the governments, academic circles, industry circles of domestic and foreign, so, CPS have become an important content of China's two modernizations' deeply integration in future. Using Petri net model to describe CPS information security risk evaluation process. Colligating Petri net model analysis results and CPS information security risk evaluation related big data analysis results, to confirm CPS information security risk evaluation element index system and index weight value, and further by using RBF neural network model construct evaluation model to realize CPS information security risk's quantitative evaluation. The Petri net model constructed in the paper can realize the correlation relation analysis among CPS information security risk evaluation elements, and the description for system risk has the characteristics of temporality, integrity, diversification etc. The constructed index system and its weights have the characteristic of dynamic adaptability with the diversification of CPS information security risk evaluation related big data, that are according with the complexity and dynamic structure of CPS. The paper research has guide function to CPS information security risk evaluation, and has important practical significance and application value.","PeriodicalId":427998,"journal":{"name":"2017 IEEE Second International Conference on Data Science in Cyberspace (DSC)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126788513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mobile phones have developed from general communication equipment into smart phones, and people now use them not only to keep in touch but also to store communication details and private data. Nowadays, Android has become the most widely used operating system. Google introduced FDE (full-disk encryption) in Android 5.0 in 2014 and FBE (file-based encryption) in Android 7.0 in 2016 to protect user data from being stolen. This study introduces the Chinese domestic encryption algorithm SM4 and, through the AOSP (Android Open Source Project), builds it into the Android kernel to achieve fast and complete "optional" file encryption, offering a more user-friendly mode of operation.
{"title":"File-Based Encryption with SM4","authors":"Chan Gao, Chung-Huang Yang","doi":"10.1109/DSC.2017.92","DOIUrl":"https://doi.org/10.1109/DSC.2017.92","url":null,"abstract":"Mobile phones have been developed from the general communication equipment to smart phones. People also use the phone from simply keeping in touch to storing more communication details and privacy. Nowadays, Android has became the most widely used operating system. Google proposed FDE (full disk encryption) in Android 5.0, 2014, and FBE (file-based encryption) in Android 7.0, 2016, to protect the user data from being stolen. This study introduces the the domestic encryption algorithm SM4, and through the AOSP (Android Open Source Project) to make it in Android kernel to achieve a rapid and complete \"optional\" file encryption to reflect the more humane way of operation.","PeriodicalId":427998,"journal":{"name":"2017 IEEE Second International Conference on Data Science in Cyberspace (DSC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130713622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the popularity of the Internet, online news media pour numerous news reports onto the Internet every day, and people get lost in this information explosion. Although existing methods can extract news reports according to keywords and aggregate them into stories or events, they merely list the related reports or events in order. Moreover, they are unable to provide the evolution relationships between events within a topic, so people can hardly capture the development vein of events. In order to mine the underlying evolution relationships between events within a topic, we propose a novel event evolution model in this paper. The model uses TFxIEF and a Temporal Distance Cost factor (TDC) to model event evolution relationships, and we construct an event evolution relationship map to show the development vein of events. Experimental evaluation on a real dataset shows that our technique outperforms the baseline technique.
{"title":"EMMBTT: A Novel Event Evolution Model Based on TFxIEF and TDC in Tracking News Streams","authors":"Pengpeng Zhou, Bin Wu, Zhen Cao","doi":"10.1109/DSC.2017.53","DOIUrl":"https://doi.org/10.1109/DSC.2017.53","url":null,"abstract":"With the popularity of the Internet, online news media are pouring numerous of news reports into the Internet every day. People get lost in the information explosion. Although the existing methods are able to extract news reports according to key words, and aggregate news reports into stories or events, they just list the related reports or events in order. Moreover, they are unable to provide the evolution relationships between events within a topic, thus people hardly capture the events development vein. In order to mine the underlying evolution relationships between events within the topic, we propose a novel event evolution Model in this paper. This model utilizes TFIEF and Temporal Distance Cost factor (TDC) to model the event evolution relationships. we construct event evolution relationships map to show the events development vein. The experimental evaluation on real dataset show that our technique precedes the baseline technique.","PeriodicalId":427998,"journal":{"name":"2017 IEEE Second International Conference on Data Science in Cyberspace (DSC)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115861660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the rapid development of information technologies, our daily life has become deeply dependent on cyberspace. New technologies provide more facilities and enhancements to existing Internet services, as they allow users more flexibility in exploring webpages, sending messages, or publishing tweets via cell phones or laptops. However, current cyberspace faces many security issues, such as security policy definition and security policy enforcement. In this paper, we study information access problems in cyberspace, where users leverage devices via the Internet to access sensitive objects under temporal and spatial limitations. We propose a Cyberspace-oriented Access Control model (CoAC) to ensure the security of such accesses. The proposed model consists of seven atomic operations, namely Read, Write, Store, Execute, Publish, Forward, and Select, and any operation in cyberspace can be denoted by a combination of several atomic operations. For each atomic operation, we assemble a suite of security policies and demonstrate its flexibility. In this way, a series of security policies is defined for CoAC.
{"title":"Cyberspace-Oriented Access Control: Model and Policies","authors":"Fenghua Li, Zifu Li, Weili Han, Ting Wu, Lihua Chen, Yunchuan Guo","doi":"10.1109/DSC.2017.100","DOIUrl":"https://doi.org/10.1109/DSC.2017.100","url":null,"abstract":"With the rapid development of information technologies, our daily life has become deeply dependent on cyberspace. The new technologies provide more facilities and enhancements to the existing Internet services as it allows users more flexibility in terms of exploring webpages, sending messages or publishing tweets via cell phones or laptops. However, there are many security issues such as security policy definition and security policy enforcement of current cyberspace. In this paper, we study information access problems in cyberspace where users leverage devices via the Internet to access sensitive objects with temporal and spatial limitations. We propose a Cyberspace-oriented Access Control model (CoAC) to ensure the security of the mentioned accesses in cyberspace. The proposed model consists of seven atomic operations, such as Read, Write, Store, Execute, Publish, Forward and Select, which can denote all operations by the combination of several atomic operations in cyberspace. For each atomic operation, we assemble a suite of security policies and demonstrate its flexibility. By that, a series of security policies are denfined for CoAC.","PeriodicalId":427998,"journal":{"name":"2017 IEEE Second International Conference on Data Science in Cyberspace (DSC)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127471535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}