Yujun Zhou, Bo Xu, Jiaming Xu, Lei Yang, Changliang Li, Bo Xu
Word segmentation is the first step in Chinese natural language processing, and errors introduced at this stage propagate through the whole system. To reduce the impact of word-segmentation errors and improve the overall performance of Chinese short text classification, we propose a hybrid model of character-level and word-level features based on a recurrent neural network (RNN) with long short-term memory (LSTM). By integrating character-level features into word-level features, semantic information lost through segmentation errors can be reconstructed, while spurious semantic associations are reduced. The resulting feature representation suppresses word-segmentation errors while preserving most of the semantic features of the sentence. The whole model is trained end-to-end on a supervised Chinese short text classification task. Results demonstrate that the proposed model represents Chinese short text effectively, and its performance on 32-class and 5-class categorization outperforms several notable baseline methods.
{"title":"Compositional Recurrent Neural Networks for Chinese Short Text Classification","authors":"Yujun Zhou, Bo Xu, Jiaming Xu, Lei Yang, Changliang Li, Bo Xu","doi":"10.1109/WI.2016.0029","DOIUrl":"https://doi.org/10.1109/WI.2016.0029","url":null,"abstract":"Word segmentation is the first step in Chinese natural language processing, and the error caused by word segmentation can be transmitted to the whole system. In order to reduce the impact of word segmentation and improve the overall performance of Chinese short text classification system, we propose a hybrid model of character-level and word-level features based on recurrent neural network (RNN) with long short-term memory (LSTM). By integrating character-level feature into word-level feature, the missing semantic information by the error of word segmentation will be constructed, meanwhile the wrong semantic relevance will be reduced. The final feature representation is that it suppressed the error of word segmentation in the case of maintaining most of the semantic features of the sentence. The whole model is finally trained end-to-end with supervised Chinese short text classification task. 
Results demonstrate that the proposed model in this paper is able to represent Chinese short text effectively, and the performances of 32-class and 5-class categorization outperform some remarkable methods.","PeriodicalId":6513,"journal":{"name":"2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI)","volume":"7 1","pages":"137-144"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90336560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
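The character-word hybrid feature can be sketched as follows: each word's character vectors are averaged and concatenated with its word vector, and the resulting sequence would then feed the LSTM classifier. The embedding tables and dimensions below are toy assumptions, not the paper's trained values.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 4  # toy embedding size (hypothetical; real systems train larger tables)

# Toy lookup tables; a real system would learn these embeddings.
word_emb = {w: rng.normal(size=DIM) for w in ["中国", "人民"]}
char_emb = {c: rng.normal(size=DIM) for c in "中国人民"}

def hybrid_feature(word):
    """Concatenate the word vector with the mean of its character vectors."""
    w = word_emb[word]
    c = np.mean([char_emb[ch] for ch in word], axis=0)
    return np.concatenate([w, c])  # shape (2 * DIM,)

# One hybrid vector per (possibly mis-segmented) word in the sentence;
# the sequence would be fed to an LSTM classifier (not shown).
feats = [hybrid_feature(w) for w in ["中国", "人民"]]
print(feats[0].shape)  # (8,)
```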
Message-level and word-level polarity classification are two popular tasks in Twitter sentiment analysis. They have been commonly addressed by training supervised models from labelled data. The main limitation of these models is the high cost of data annotation. Transferring existing labels from a related problem domain is one possible solution for this problem. In this paper, we propose a simple model for transferring sentiment labels from words to tweets and vice versa by representing both tweets and words using feature vectors residing in the same feature space. Tweets are represented by standard NLP features such as unigrams and part-of-speech tags. Words are represented by averaging the vectors of the tweets in which they occur. We evaluate our approach in two transfer learning problems: 1) training a tweet-level polarity classifier from a polarity lexicon, and 2) inducing a polarity lexicon from a collection of polarity-annotated tweets. Our results show that the proposed approach can successfully classify words and tweets after transfer.
{"title":"From Opinion Lexicons to Sentiment Classification of Tweets and Vice Versa: A Transfer Learning Approach","authors":"Felipe Bravo-Marquez, E. Frank, B. Pfahringer","doi":"10.1109/WI.2016.29","DOIUrl":"https://doi.org/10.1109/WI.2016.29","url":null,"abstract":"Message-level and word-level polarity classification are two popular tasks in Twitter sentiment analysis. They have been commonly addressed by training supervised models from labelled data. The main limitation of these models is the high cost of data annotation. Transferring existing labels from a related problem domain is one possible solution for this problem. In this paper, we propose a simple model for transferring sentiment labels from words to tweets and vice versa by representing both tweets and words using feature vectors residing in the same feature space. Tweets are represented by standard NLP features such as unigrams and part-of-speech tags. Words are represented by averaging the vectors of the tweets in which they occur. We evaluate our approach in two transfer learning problems: 1) training a tweet-level polarity classifier from a polarity lexicon, and 2) inducing a polarity lexicon from a collection of polarity-annotated tweets. Our results show that the proposed approach can successfully classify words and tweets after transfer.","PeriodicalId":6513,"journal":{"name":"2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI)","volume":"52 1","pages":"145-152"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89923499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online communities promise a new era of flexible and dynamic collaboration. However, these features also raise new security challenges, especially regarding how trust is managed. In this paper, we focus on situations wherein community participants collaborate with each other via software agents that make trust decisions on their behalf based on policies. Due to the open and dynamic nature of online communities, participants can neither anticipate all possible interactions nor have foreknowledge of sensitive resources and potentially malicious partners. This makes the specification of trust policies complex and risky, especially for collective (i.e., community-level) policies, motivating the need for policy evolution. The aim of this paper is to introduce an approach to managing the evolution of trust policies within online communities. Our scenario allows any member of the community to trigger the evolution of the community-level policy and make the other members of the community converge towards it.
{"title":"Managing Evolving Trust Policies within Open and Decentralized Communities","authors":"Reda Yaich","doi":"10.1109/WI.2016.0119","DOIUrl":"https://doi.org/10.1109/WI.2016.0119","url":null,"abstract":"Online communities promise a new era of flexible and dynamic collaborations. However, these features also raise new security challenges, especially regarding how trust is managed. In this paper, we focus on situations wherein communities participants collaborate with each others via software agents that take trust decisions on their behalf based on policies. Due to the open and dynamic nature of Online Communities, participants can neither anticipate all possible interactions nor have foreknowledge of sensitive resources and potentially malicious partners. This makes the specification of trust policies complex and risky, especially for collective (i.e., community-level) policies, motivating the need for policies evolution. The aim of this paper is to introduce an approach in order to manage the evolution of trust policies within online communities. Our scenario allows any member of the community to trigger the evolution of the community-level policy and make the other members of the community converge towards it.","PeriodicalId":6513,"journal":{"name":"2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI)","volume":"10 1","pages":"668-673"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90154666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Time is ever-present in our lives, and temporal data can easily be collected in a variety of applications. For example, when you purchase an item online or click on an ad, the time at which you chose the item or clicked the ad is recorded. The analysis of time information can therefore be applied in various areas. It is important to note that user preferences change over time. For example, a person who watched animated TV shows in childhood will most likely switch to watching the news in adulthood. It is effective to incorporate such changes into recommender systems. In this paper, we propose an approach that predicts user preferences, taking preference changes into account, by learning the order of purchase history in a recommender system. Our approach is composed of three steps. First, we obtain user features based on matrix factorization and purchasing time. Next, we use a Kalman filter to predict user preference vectors from these user features. Finally, we generate a recommendation list, for which we propose two types of recommendation methods using the predicted vectors. We then show, through experiments on a real-world dataset, that our approach outperforms competitive methods such as a first-order Markov model.
{"title":"Recommendation System Based on Prediction of User Preference Changes","authors":"Kenta Inuzuka, Tomonori Hayashi, T. Takagi","doi":"10.1109/WI.2016.0036","DOIUrl":"https://doi.org/10.1109/WI.2016.0036","url":null,"abstract":"Time always exists in our lives and time data can easily be collected in a variety of applications. For example, when you purchase items online or click on an ad, the time at which you chose the item or clicked the ad is recorded. The analysis of time information can therefore be applied in various areas. It is important to note that user preferences change over time. For example, a person who watched animated TV shows in childhood will most likely switch to watching the news in adulthood. It is effective to incorporate such changes into recommender systems. In this paper, we propose an approach that predicts user preferences with consideration of preference changes by learning the order of purchase history in a recommender system. Our approach is composed of three steps. First, we obtain user features based on matrix factorization and purchasing time. Next, we use a Kalman filter to predict user preference vectors from user features. Finally, we generate a recommendation list, at which time we propose two types of recommendation methods using the predicted vectors. 
We then show through experiments using a real-world dataset that our approach outperforms competitive methods such as the first order Markov model.","PeriodicalId":6513,"journal":{"name":"2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI)","volume":"5 1","pages":"192-199"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74322285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
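The Kalman-filtering step can be sketched with a scalar-covariance, random-walk state model (preferences drift between observations). The paper does not specify its exact state and observation models here, so the noise parameters Q and R below are assumptions.

```python
import numpy as np

def kalman_step(x, P, z, Q=0.1, R=0.5):
    """One predict/update cycle on a preference vector x with scalar variance P.

    Assumes random-walk dynamics x_t = x_{t-1} + noise (an illustrative choice).
    """
    # Predict: preferences drift, so uncertainty grows by the process noise Q.
    x_pred, P_pred = x, P + Q
    # Update: blend the prediction with the newly observed feature vector z.
    K = P_pred / (P_pred + R)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x = np.array([0.2, 0.8])   # current preference estimate (toy values)
z = np.array([0.6, 0.4])   # features derived from the latest purchase
x, P = kalman_step(x, 1.0, z)
print(x)  # moved toward z, weighted by the Kalman gain
```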
Considering only a single feature of sentences when calculating sentence similarity makes it difficult for a system to reach a satisfactory level of performance. This paper presents a method that combines semantic and structural features to compute sentence similarity. It first discusses methods for calculating the semantic similarity of sentences using word embeddings and Tongyici Cilin. Next, it discusses methods for calculating the morphological similarity and word-order similarity of sentences, and then combines these features through a neural network to compute the overall similarity of the sentences. We include results from an evaluation of the system's performance and show that a combination of the features works better than any single approach.
{"title":"A Research on Sentence Similarity for Question Answering System Based on Multi-feature Fusion","authors":"Haipeng Ruan, Yuan Li, Qinling Wang, Yu Liu","doi":"10.1109/WI.2016.0085","DOIUrl":"https://doi.org/10.1109/WI.2016.0085","url":null,"abstract":"If just consider one feature of sentences to calculate sentences similarity, the performance of system is difficult to reach a satisfactory level. This paper presents a method of combining the features of semantic and structural to compute sentences similarity. It first discusses the methods of calculating the semantic similarity of sentences through word embedding and Tongyici Cilin. Next, it discusses the methods of calculating the morphological similarity and order similarity of sentences, and then combines the features through the neutral network to calculate the total similarity of the sentences. We include results from an evaluation of the system's performance and show that a combination of the features works better than any single approach.","PeriodicalId":6513,"journal":{"name":"2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI)","volume":"64 1","pages":"507-510"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76301879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
There exists a class of problems in e-commerce and retail businesses where the shopping behavior of customers is analyzed in order to predict their repeat behavior for products or retail stores. This analysis plays a crucial role in advertisement budgeting, product placement and relevant customer targeting. Researchers have addressed this problem by using standard predictive models with ad hoc features. We propose a metamodel that abstracts the different dimensions of data present in transactional datasets. These dimensions can be customer, product, offer, target, marketplace and transactions. Our framework also provides abstract functions for comprehensive feature-set generation, and includes different machine learning algorithms to learn the prediction model. Our framework works end-to-end, from feature engineering to reporting repeat probabilities of customers for products (or marketplace, brand, website or store chain). Moreover, the predicted repeat behavior of customers for different products, along with their transactional history, is used by our offer optimization model i-Prescribe to suggest products to be offered to customers with the goal of maximizing the return on investment of a given marketing budget. We demonstrate, by sharing experimental results, that our abstract features work on two different data-challenge datasets.
{"title":"Generic Framework to Predict Repeat Behavior of Customers Using Their Transaction History","authors":"Auon Haidar Kazmi, Gautam M. Shroff, P. Agarwal","doi":"10.1109/WI.2016.0072","DOIUrl":"https://doi.org/10.1109/WI.2016.0072","url":null,"abstract":"There exists a class of problems in e-commerce and retail businesses where the shopping behavior of customers is analyzed in order to predict their repeat behavior for products or retail stores. This analysis plays a crucial role in advertisement budgeting, product placement and relevant customer targeting. Researchers have addressed this problem by using standard predictive models, which use ad hoc features. We propose a metamodel that abstracts the different dimensions of data present in transactional datasets. These dimensions can be customer, product, offer, target, marketplace and transactions. Our framework also has abstract functions for comprehensive feature set generation, and includes different machine learning algorithms to learn prediction model. Our framework works end-to-end from feature engineering to reporting repeat probabilities of customers for products (or marketplace, brand, website or storechain). Moreover, the predicted repeat behavior of customers for different products along with their transactional history is used by our offer optimization model i-Prescribe to suggest products to be offered to customers with the goal of maximizing the return on investment of given marketing budget. 
We prove that our abstract features work on two different data-challenge datasets, by sharing experimental results.","PeriodicalId":6513,"journal":{"name":"2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI)","volume":"263 1","pages":"449-452"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76416714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
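A sketch of the kind of features such a framework would generate from transaction history, here classic recency/frequency signals. The field names, dates and function are hypothetical; the paper's metamodel derives a far richer feature set automatically.

```python
from datetime import date

# Hypothetical transaction log (customer, product, purchase date).
transactions = [
    {"customer": "c1", "product": "p1", "date": date(2016, 1, 5)},
    {"customer": "c1", "product": "p1", "date": date(2016, 3, 1)},
    {"customer": "c1", "product": "p2", "date": date(2016, 2, 1)},
]

def repeat_features(customer, product, today=date(2016, 4, 1)):
    """Frequency and recency of a customer's purchases of one product."""
    txns = [t for t in transactions
            if t["customer"] == customer and t["product"] == product]
    frequency = len(txns)
    recency = (today - max(t["date"] for t in txns)).days if txns else None
    return {"frequency": frequency, "recency_days": recency}

print(repeat_features("c1", "p1"))  # {'frequency': 2, 'recency_days': 31}
```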
Ontologies are increasingly used by modern knowledge systems for representing and sharing knowledge. By supporting semantic processing, ontology-driven knowledge systems allow for more precise information interpretation, thus providing greater usability and effectiveness than traditional information systems. Manual construction of ontologies by domain experts and knowledge engineers is a costly task; therefore, automatic and/or semi-automatic approaches to their development are needed, a field of research usually referred to as ontology learning and population. This is the main focus of this article, which discusses the main problems and corresponding solutions for the automated acquisition of each of the components of an ontology (classes, properties, taxonomic and non-taxonomic relationships, axioms and instances).
{"title":"An Analysis of Main Solutions for the Automatic Construction of Ontologies from Text","authors":"R. Girardi","doi":"10.1109/WI.2016.0074","DOIUrl":"https://doi.org/10.1109/WI.2016.0074","url":null,"abstract":"Ontologies are increasingly used by modern knowledge systems for representing and sharing knowledge. Supporting semantic processing, ontology-driven knowledge systems allow for more precise information interpretation, thus providing greater usability and effectiveness than traditional information systems. Manual construction of ontologies by domain experts and knowledge engineers is a costly task, therefore automatic and/or semi-automatic approaches to their development are needed, a field of research that is usually referred to as ontology learning and population. This is the main focus of this article which discusses main problems and corresponding solutions for the automated acquisition of each one of the components of an ontology (classes, properties, taxonomic and non-taxonomic relationships, axioms and instances).","PeriodicalId":6513,"journal":{"name":"2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI)","volume":"54 1","pages":"457-460"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78135871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, there has been increased interest in real-world event identification using data collected from social media, where the Web enables the general public to post real-time reactions to terrestrial events, thereby acting as social sensors of terrestrial activity. Automatically extracting and categorizing activity from streamed data is a non-trivial task. To address this task, we present a novel event detection framework which comprises five main components: data collection, pre-processing, classification, online clustering and summarization. The integration between classification and clustering allows events to be detected, including "disruptive" events: incidents that threaten social safety and security, or could disrupt the social order. We evaluate our framework on a large-scale, real-world dataset from Twitter. We also compare our results to other leading approaches using the Flickr MediaEval Event Detection Benchmark.
{"title":"Sensing Real-World Events Using Social Media Data and a Classification-Clustering Framework","authors":"Nasser Alsaedi, P. Burnap, O. Rana","doi":"10.1109/WI.2016.0039","DOIUrl":"https://doi.org/10.1109/WI.2016.0039","url":null,"abstract":"In recent years, there has been increased interest in real-world event identification using data collected from social media, where theWeb enables the general public to post real-time reactions to terrestrial events - thereby acting as social sensors of terrestrial activity. Automatically extracting and categorizing activity from streamed data is a non-trivial task. To address this task, we present a novel event detection framework which comprises five main components: data collection, pre-processing, classification, online clustering and summarization. The integration between classification and clustering allows events to be detected - including \"disruptive\" events - incidents that threaten social safety and security, or could disrupt the social order. We evaluate our framework on a large-scale, real-world dataset from Twitter. We also compare our results to other leading approaches using Flickr MediaEval Event Detection Benchmark.","PeriodicalId":6513,"journal":{"name":"2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI)","volume":"14 1","pages":"216-223"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79306731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Network functions virtualization (NFV) is a new technology for the future Internet that removes the dependency of network functions on dedicated hardware. NFV provides a successful approach to meeting the increasing demand for end-to-end (E2E) services at low operational and capital costs. Special-purpose network hardware (e.g., a firewall) is replaced with a software implementation of the network function, so that a chain of Virtualized Network Functions (VNFs) can logically connect the end points and provide the desired network services. However, this approach raises the challenge of dynamically mapping the required VNFs onto the existing substrate network in an optimal way. In this paper, we propose a simple and effective approach for mapping VNFs to physical resources in a dynamic service-request environment. The algorithm considers the priority dependency between the VNFs as a case study, with the objective of minimizing the mapping blocking rate.
{"title":"Dynamic Allocation of Service Function Chains under Priority Dependency Constraint","authors":"M. Masoud, Sanghoon Lee, S. Belkasim","doi":"10.1109/WI.2016.0122","DOIUrl":"https://doi.org/10.1109/WI.2016.0122","url":null,"abstract":"Network functions virtualization is a new technology for the future internet that eliminates the dependency of the network function and the hardware requirement. The network functions virtualization provides a successful approach for meeting the increase in demand of the end-to-end (E2E) services with low operational and capital costs. Replacing the network specific purpose hardware (e.g. firewall) with a software implementation of the network functions in which a chain of Virtualized Network Functions (VNFs) can logically connect the end points and provide the desired network services. However, this approach is associated with the challenge of dynamically mapping the predefined VNFs onto the existing substrate network in an optimal way. In this paper, we propose a simple and effective approach for mapping the VNFs with the physical resources in a dynamic service request environment. The algorithm considers the priority dependency between the VNFs as a case of study, with the objective of minimizing the mapping blocking rate.","PeriodicalId":6513,"journal":{"name":"2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI)","volume":"6 1","pages":"684-688"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87637616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Linked Open Data (LOD) consists of numerous data stores that are highly interconnected. LOD stores use the Resource Description Framework (RDF) as a data representation format. The graph-based nature of RDF brings an opportunity to develop new approaches for accumulating data from multiple sources characterized by different levels of confidence. Recently, a participatory learning mechanism has been extended to cope with RDF. It is an attractive way of integrating new pieces of information with already known ones. Further, it has been recognized that pieces of information describing entities can have a disjunctive or conjunctive form. This paper uses an RDF-based participatory learning process to aggregate information obtained from multiple data stores. The process provides mechanisms that determine the overall certainty of combined data based on the levels of confidence in already known pieces of information and in new ones. The behavior of such a process when integrating information with different levels of uncertainty is presented, and a simple case study is included.
{"title":"Learning Processes Based on Data Sources with Certainty Levels in Linked Open Data","authors":"Jesse Xi Chen, M. Reformat, R. Yager","doi":"10.1109/WI.2016.0068","DOIUrl":"https://doi.org/10.1109/WI.2016.0068","url":null,"abstract":"Linked Open Data (LOD) consists of numerous data stores that are highly interconnected. LOD stores use Resource Description Framework (RDF) as a data representation format. A graph-based nature of RDF brings an opportunity to develop new approaches for accumulating data from multiple sources characterized by different levels of confidence in them. Recently, a participatory learning mechanism has been extended to cope with RDF. It is an attractive way of integrating new pieces of information with already known ones. Further, it has been recognized that pieces of information describing entities can have a disjunctive or conjunctive form. This paper uses an RDF-based participatory learning process to aggregate information obtained from multiple data stores. This process provides mechanisms that determine overall certainty in combined data based on levels of confidence in already known pieces of information and new ones. The behavior of such a process used for integrating information equipped with different levels of uncertainty is presented, and a simple case study is included.","PeriodicalId":6513,"journal":{"name":"2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI)","volume":"116 1","pages":"429-434"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86796078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}