A key constraint of REST APIs is that all resources must be reachable via hyperlink paths from an entry point. However, applying this constraint without prudence can result in excessive hyperlinks that provide no new services but increase the dependence between resources. Excessive hyperlinks are difficult to identify because: 1) a REST API can have dynamic and unbounded paths, and 2) the hyperlinks used to navigate a path are not observable and can be ambiguous. To tackle the first challenge, we propose a REST API model and a random walk algorithm that reduce the paths of a REST API to a small set. To address the second challenge, we develop a client model and a connection minimization algorithm that identify excessive hyperlinks based on given paths. By combining the random walk and connection minimization algorithms, our method can minimize the connections of a REST API in polynomial time without involving actual clients. A prototype system has been implemented, and tests show that the method is correct and converges 90.6% to 99.9% faster than the baseline approach.
{"title":"Connection Minimization in REST API with Random Walks","authors":"Li Li, Min Luo","doi":"10.1109/WI.2016.0059","DOIUrl":"https://doi.org/10.1109/WI.2016.0059","url":null,"abstract":"A key constraint of REST API is that all the resources must be reachable by some hyperlink paths from an entry point. However, to apply this constraint without prudence can result in excessive hyperlinks that do not provide new services but increase the dependence between the resources. Excessive hyperlinks are difficult to identify because: 1) a REST API can have dynamic and unbounded paths, and 2) the hyperlinks used to navigate a path are not observable and can be ambiguous. To tackle the first challenge, we propose a REST API model and a random walk algorithm to reduce the paths of a REST API to a small set. To address the second challenge, we develop a client model and a connection minimization algorithm to identify excessive hyperlinks based on given paths. By combining the random walk and the connection minimization algorithms, our method can minimize the connections of a REST API in polynomial time without involving the actual clients. A prototype system has been implemented and the tests show that the method is correct and can converge 90.6% to 99.9% faster than the baseline approach.","PeriodicalId":6513,"journal":{"name":"2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI)","volume":"80 1","pages":"375-382"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83955885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
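The record above describes the two algorithms only at a high level; the paper's actual models are not reproduced here. As a rough illustrative sketch of the idea, the code below treats a REST API as a directed hyperlink graph, samples paths by random walks from the entry point, and greedily drops any hyperlink whose removal still leaves every resource reachable (i.e., an excessive link). All function names and the greedy removal strategy are assumptions for illustration, not the paper's algorithms.

```python
import random
from collections import deque

def reachable(graph, entry, skip_edge=None):
    """Resources reachable from the entry point via BFS, optionally ignoring one hyperlink."""
    seen, queue = {entry}, deque([entry])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if (node, nxt) == skip_edge or nxt in seen:
                continue
            seen.add(nxt)
            queue.append(nxt)
    return seen

def random_walk_paths(graph, entry, n_walks=100, max_len=10, seed=0):
    """Sample a small set of hyperlink paths by random walks from the entry point."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_walks):
        node, path = entry, [entry]
        for _ in range(max_len):
            links = graph.get(node, [])
            if not links:
                break
            node = rng.choice(links)
            path.append(node)
        paths.append(path)
    return paths

def minimize_connections(graph, entry):
    """Greedily drop hyperlinks whose removal leaves every resource reachable."""
    all_resources = reachable(graph, entry)
    kept = {src: list(dsts) for src, dsts in graph.items()}
    for src in list(kept):
        for dst in list(kept[src]):
            trial = {s: [d for d in ds if (s, d) != (src, dst)] for s, ds in kept.items()}
            if reachable(trial, entry) == all_resources:
                kept[src].remove(dst)  # hyperlink is excessive: reachability is preserved without it
    return kept
```

On a toy API where `/` links to `/a` and `/b` and `/a` also links to `/b`, one of the two routes to `/b` is excessive and gets pruned while every resource stays reachable.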
Typically, news articles mention not just one but multiple events. These events can be classified into subject or background events. The former are the events that the article is written about, while the latter are additional events referred to in order to explain the background of the subject events (e.g., causal relations, circumstances, or the consequences of the main event). Background events are considered to play an important role in helping readers understand articles. In this paper, we first propose to classify the content of news articles into subject or background event descriptions. In the second part of the paper, we demonstrate a novel solution for improving news article search. Based on the subject and background relationship structure between events and articles, our method outputs news articles that help with the understanding of a given target article.
{"title":"Supporting News Article Understanding by Detecting Subject-Background Event Relations","authors":"Shotaro Tanaka, A. Jatowt, Katsumi Tanaka","doi":"10.1109/WI.2016.0044","DOIUrl":"https://doi.org/10.1109/WI.2016.0044","url":null,"abstract":"Typically, news articles mention not just one but multiple events. These events can be classified into subject or background events. The former are events that the article is written about, while the latter are additional events referred to in order to explain the background of the subject events (e.g., causal relations, circumstances or the consequences of the main event). Background events are considered to play an important role in helping to understand articles. In this paper, we first propose to classify content of news articles into subject or background event descriptions. In the second part of the paper, we demonstrate a novel solution for improving the news article search. Based on the subject and background relationship structure between events and articles, our method outputs news articles that help with understanding of a given target article.","PeriodicalId":6513,"journal":{"name":"2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI)","volume":"39 1","pages":"256-263"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85733724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Coalition formation, a key factor in multi-agent cooperation, can be solved optimally for at most a few dozen agents. This paper proposes a general approach to find suboptimal solutions for a large-scale coalition formation problem containing thousands of agents using multi-agent simulation. We model coalition formation as an iterative process in which agents join and leave coalitions, and we propose several valuation functions that assign values to the coalitions. We propose several coalition selection strategies that agents may use to decide whether or not to leave their current coalition and which coalition to join. We also show how these valuation functions and coalition selection strategies represent specific coalition formation applications. Finally, we show almost-optimal performance of our algorithms in small-scale scenarios by comparing our solutions with an optimal solution, and we show stable performance in a large-scale setting in which searching for the optimal solution is not feasible.
{"title":"Multi-agent Simulation Framework for Large-Scale Coalition Formation","authors":"Pavel Janovsky, S. DeLoach","doi":"10.1109/WI.2016.0055","DOIUrl":"https://doi.org/10.1109/WI.2016.0055","url":null,"abstract":"Coalition formation, a key factor in multi-agent cooperation, can be solved optimally for at most a few dozen agents. This paper proposes a general approach to find suboptimal solutions for a large-scale coalition formation problem containing thousands of agents using multi-agent simulation. We model coalition formation as an iterative process in which agents join and leave coalitions, and we propose several valuation functions that assign values to the coalitions. We propose several coalition selection strategies that agents may use to decide whether or not to leave their current coalition and which coalition to join. We also show how these valuation functions and coalition selection strategies represent specific coalition formation applications. Finally, we show almost-optimal performance of our algorithms in small-scale scenarios by comparing our solutions with an optimal solution, and we show stable performance in a large-scale setting in which searching for the optimal solution is not feasible.","PeriodicalId":6513,"journal":{"name":"2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI)","volume":"70 1","pages":"343-350"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83909435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
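The iterative join/leave process described above can be sketched in a few lines. The valuation function and the greedy selection strategy below are hypothetical stand-ins (the paper proposes several of each); the sketch only illustrates the simulation loop in which agents move between coalitions when their share of the coalition value improves.

```python
import random

def coalition_value(coalition):
    # Hypothetical valuation: value grows superadditively up to size 3, then congestion sets in.
    n = len(coalition)
    return n * n if n <= 3 else 9 - (n - 3)

def agent_share(coalition):
    """Each member's share under equal division of the coalition value."""
    return coalition_value(coalition) / len(coalition)

def simulate(n_agents=12, n_iters=200, seed=1):
    rng = random.Random(seed)
    coalitions = [{a} for a in range(n_agents)]   # start from singleton coalitions
    membership = {a: a for a in range(n_agents)}  # agent -> index of its coalition
    for _ in range(n_iters):
        agent = rng.randrange(n_agents)
        cur = coalitions[membership[agent]]
        target_idx = rng.randrange(len(coalitions))
        target = coalitions[target_idx]
        if target is cur or not target:
            continue
        # Greedy selection strategy: move only if the agent's share strictly improves.
        if agent_share(target | {agent}) > agent_share(cur):
            cur.discard(agent)
            target.add(agent)
            membership[agent] = target_idx
    return [c for c in coalitions if c]
```

With the sample valuation, agents drift toward coalitions of size three, where the per-member share peaks; the loop never searches the exponential space of coalition structures, which is what makes the approach scale.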
Yulong Gu, Jiaxing Song, Weidong Liu, Y. Yao, Lixin Zou
The enormous efforts of human volunteers have made Wikipedia a treasure of textual knowledge. Relation extraction, which aims at extracting structured knowledge from the unstructured texts in Wikipedia, is an appealing but quite challenging problem because it is hard for machines to understand plain texts. Existing methods are not effective enough because they understand relation types at the textual level without exploiting the knowledge behind plain texts. In this paper, we propose a novel framework called Athena 2.0 that leverages Semantic Patterns, patterns that can understand relation types at the semantic level, to solve this problem. Extensive experiments show that Athena 2.0 significantly outperforms existing methods.
{"title":"Towards Accurate Relation Extraction from Wikipedia","authors":"Yulong Gu, Jiaxing Song, Weidong Liu, Y. Yao, Lixin Zou","doi":"10.1109/WI.2016.0023","DOIUrl":"https://doi.org/10.1109/WI.2016.0023","url":null,"abstract":"Enormous efforts of human volunteers have made Wikipedia become a treasure of textual knowledge. Relation extraction that aims at extracting structured knowledge in the unstructured texts in Wikipedia is an appealing but quite challenging problem because it's hard for machines to understand plain texts. Existing methods are not effective enough because they understand relation types in textual level without exploiting knowledge behind plain texts. In this paper, we propose a novel framework called Athena 2.0 leveraging Semantic Patterns which are patterns that can understand relation types in semantic level to solve this problem. Extensive experiments show that Athena 2.0 significantly outperforms existing methods.","PeriodicalId":6513,"journal":{"name":"2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI)","volume":"66 1","pages":"89-96"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79618997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Najjar, C. Gravier, Xavier Serpaggi, O. Boissier
As more personal and interactive applications move to the cloud, modeling end-user expectations and satisfaction is becoming necessary for any SaaS provider to survive and thrive in today's competitive market. However, most existing works addressing cloud elasticity management adopt a centralized approach in which user preferences are mostly overlooked. Based on evidence from the fields of customer expectation management and psychophysics, in this article we propose a personal user model to represent end-user satisfaction and expectations. To integrate the end-user into the decision loop, we develop a multi-agent negotiation architecture in which the end-user model is embodied by a personal agent who negotiates on the user's behalf. The results of the evaluation process show that automated negotiation provides a useful platform to empower user choices, fulfill user expectations, and maximize user satisfaction, thereby outperforming centralized approaches in which the provider acts unilaterally.
{"title":"Modeling User Expectations & Satisfaction for SaaS Applications Using Multi-agent Negotiation","authors":"A. Najjar, C. Gravier, Xavier Serpaggi, O. Boissier","doi":"10.1109/WI.2016.0062","DOIUrl":"https://doi.org/10.1109/WI.2016.0062","url":null,"abstract":"As more personal and interactive applications are moving to the cloud, modeling the end-user expectations and satisfaction is becoming necessary for any SaaS provider to survive and thrive in today's competitive market. However, most of existing works addressing cloud elasticity management adopt a centralized approach where user preferences are mostly overlooked. Based on evidence from the fields of customer expectation management and psychophysics, in this article we propose a personal user model to represent end-user satisfaction and her expectations. To integrate the end-user into the decision loop we develop multi-agent negotiation architecture in which the end-user model is embodied by a personal agent who negotiates on her behalf. The results of the evaluation process show that automated negotiation provides a useful platform to empower the user choices, fulfill her expectations, and maximize her satisfaction hereby outperforming centralized approaches where the provider acts in a unilateral manner.","PeriodicalId":6513,"journal":{"name":"2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI)","volume":"61 1","pages":"399-406"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73595369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In many applications, web surfers would like recommendations on which collections of web pages would interest them or which they should follow. To discover this information and make recommendations, data mining in general, or frequent pattern mining in particular, can be applied. Since its introduction, frequent pattern mining has drawn attention from many researchers. Consequently, many frequent pattern mining algorithms have been proposed, including levelwise Apriori-based algorithms, tree-based algorithms, hyperlinked-array-structure-based algorithms, and vertical mining algorithms. While these algorithms are popular, they also suffer from some drawbacks. To avoid these drawbacks, we propose an alternative frequent pattern mining algorithm called BW-mine in this paper. Evaluation results show that our proposed algorithm is both space- and time-efficient. Furthermore, to show the practicality of BW-mine in real-life applications, we apply it to discover popular pages on the web, which in turn gives web surfers recommendations of pages that might interest them.
{"title":"Web Page Recommendation Based on Bitwise Frequent Pattern Mining","authors":"Fan Jiang, C. Leung, Adam G. M. Pazdor","doi":"10.1109/WI.2016.0111","DOIUrl":"https://doi.org/10.1109/WI.2016.0111","url":null,"abstract":"In many applications, web surfers would like to get recommendation on which collections of web pages that would be interested to them or that they should follow. In order to discover this information and make recommendation, data mining in general—or frequent pattern mining in specific—can be applicable. Since its introduction, frequent pattern mining has drawn attention from many researchers. Consequently, many frequent pattern mining algorithms have been proposed, which include levelwise Apriori-based algorithms, tree-based algorithms, hyperlinked array structure based algorithms, as well as vertical mining algorithms. While these algorithms are popular, they also suffer from some drawbacks. To avoid these drawbacks, we propose an alternative frequent pattern mining algorithm called BW-mine in this paper. Evaluation results show that our proposed algorithm is both space-and time-efficient. Furthermore, to show the practicality of BW-mine in real-life applications, we apply BW-mine to discover popular pages on the web, which in turn gives the web surfers recommendation of web pages that might be interested to them.","PeriodicalId":6513,"journal":{"name":"2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI)","volume":"8 1","pages":"632-635"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76468248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
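BW-mine's internal design is not given in the record above. A common way to realize *bitwise* frequent pattern mining is the vertical bit-vector (Eclat-style) approach, where each item is represented as a bitmap over transactions and the support of an itemset is the popcount of the AND of its bitmaps; the sketch below illustrates that general technique, not BW-mine specifically.

```python
def build_bitmaps(transactions):
    """Vertical representation: one bit per transaction for each item."""
    bitmaps = {}
    for tid, items in enumerate(transactions):
        for item in items:
            bitmaps[item] = bitmaps.get(item, 0) | (1 << tid)
    return bitmaps

def frequent_patterns(transactions, minsup):
    """Depth-first Eclat-style mining with bitwise AND for support counting."""
    bitmaps = build_bitmaps(transactions)
    items = sorted(i for i, b in bitmaps.items() if bin(b).count("1") >= minsup)
    results = {}

    def dfs(prefix, prefix_bits, candidates):
        for idx, item in enumerate(candidates):
            bits = prefix_bits & bitmaps[item]   # transactions containing prefix + item
            sup = bin(bits).count("1")           # popcount = support
            if sup >= minsup:
                pattern = prefix + (item,)
                results[pattern] = sup
                dfs(pattern, bits, candidates[idx + 1:])

    dfs((), (1 << len(transactions)) - 1, items)
    return results
```

For page-visit transactions {a,b}, {a,c}, {a,b,c}, {b} with a minimum support of 2, this yields the frequent patterns a, b, c, ab, and ac, which could then back page recommendations.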
Real-time summarization in microblogs aims at providing new, relevant, and non-redundant information about an event as soon as it occurs. In this paper, we introduce a new tweet summarization approach in which the decision to select an incoming tweet is made immediately when the tweet becomes available. Unlike existing approaches, where thresholds are predefined, the proposed method estimates thresholds for decision making in real time as soon as a new tweet arrives. Tweet selection is based upon three criteria, namely informativeness, novelty, and relevance with regard to the user's interest, which are combined as a conjunctive condition. Only tweets whose informativeness and novelty scores are above a parameter-free threshold are added to the summary. Our approach was evaluated on the TREC MB RTF 2015 data set and compared with well-known baselines. The results reveal that our approach produces the most precise summaries in comparison to all baselines and official runs of the TREC MB RTF 2015 task.
{"title":"Multi-criterion Real Time Tweet Summarization Based upon Adaptive Threshold","authors":"Abdelhamid Chellal, M. Boughanem, B. Dousset","doi":"10.1109/WI.2016.0045","DOIUrl":"https://doi.org/10.1109/WI.2016.0045","url":null,"abstract":"Real time summarization in microblog aims at providing new relevant and non redundant information about an event as soon as it occurs. In this paper, we introduce a new tweet summarization approach where the decision of selecting an incoming tweet is made immediately when a tweet is vailable. Unlike existing approaches where thresholds are redefined, the proposed method estimates thresholds for decision taking in real time as soon as the new tweet arrives. Tweet selection is based upon three criterion namely informativeness, novelty and relevance with regards of the user's interest which are combined as conjunctive condition. Only tweets having an informativeness and novelty scores above a parametric-free threshold are added to the summary. The evaluation of our approach was carried out on the TREC MB RTF 2015 data set and it was compared with well known baselines. The results have revealed that our approach produces the most precise summaries in comparison to all baselines and official runs of the TREC MB RTF 2015 task.","PeriodicalId":6513,"journal":{"name":"2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI)","volume":"23 1","pages":"264-271"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80119860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
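The paper's exact threshold estimator is not given in the record above. One plausible parameter-free realization, sketched below under that assumption, keeps running statistics of the scores seen so far (via Welford's online algorithm) and selects an incoming tweet only when informativeness *and* novelty both exceed the current mean plus one standard deviation — the conjunctive, real-time decision the abstract describes.

```python
import math

class AdaptiveSelector:
    """Select an incoming item when informativeness AND novelty both exceed
    a threshold estimated online from previously seen scores (mean + std)."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # Welford's running sum of squared deviations

    def _update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def threshold(self):
        if self.n < 2:
            return 0.0  # accept early items until the statistics stabilize
        return self.mean + math.sqrt(self.m2 / (self.n - 1))

    def offer(self, informativeness, novelty):
        t = self.threshold()
        selected = informativeness > t and novelty > t  # conjunctive condition
        self._update(informativeness)                   # decision is made first,
        self._update(novelty)                           # then the stream statistics adapt
        return selected
```

The decision is made the instant a tweet arrives, using only past scores, so the selector never needs a second pass over the stream.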
H. Drias, Samir Kechid, Sofia Adamou, Farouk Benyoucef
In the field of data science, data are usually considered independently of the problem to be solved. The originality of this paper consists in handling huge instances of combinatorial problems with data mining technologies in order to reduce the complexity of their treatment. Such a task can be performed for Web combinatorial optimization problems such as internet data packet routing and web clustering. We focus in particular on the satisfiability of Boolean formulae, but the proposed idea could be adopted for any other complex problem. The aim is to explore the satisfiability instance using data mining techniques in order to reduce its size prior to solving it. An estimated solution for the reduced instance is then computed using a hybrid algorithm based on the DPLL technique and a genetic algorithm. It is then compared to the solution of the initial instance in order to validate the method's effectiveness. We performed experiments on the well-known BMC datasets and show the benefits of using data mining techniques as a pretreatment prior to solving the problem.
{"title":"Data Preprocessing for Web Combinatorial Problems","authors":"H. Drias, Samir Kechid, Sofia Adamou, Farouk Benyoucef","doi":"10.1109/WI.2016.0067","DOIUrl":"https://doi.org/10.1109/WI.2016.0067","url":null,"abstract":"In the field of data science, we consider usually data independently from a problem to be solved. The originality of this paper consists in handling huge instances of combinatorial problems with datamining technologies in order to reduce the complexity of their treatment. Such task can be performed on Web combinatorial optimization such as internet data packet routing and web clustering. We focus in particular on the satisfiability of Boolean formulae but the proposed idea could be adopted for any other complex problem. The aim is to explore the satisfiability instance using datamining techniques in order to reduce its size, prior to solve it. An estimated solution for the obtained instance is then computed using a hybrid algorithm based on DPLL technique and a genetic algorithm. It is then compared to the solution of the initial instance in order to validate the method effectiveness. We performed experiments on the wellknown BMC datasets and show the benefits of using datamining techniques as a pretreatment, prior to solving the problem.","PeriodicalId":6513,"journal":{"name":"2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI)","volume":"17 1","pages":"425-428"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83833311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
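The paper's data-mining reduction is not specified in the record above. To make the "shrink the instance before solving" idea concrete, the sketch below uses clause subsumption, a standard (and different) CNF preprocessing step: a clause that is a superset of another clause is logically implied by it and can be dropped without changing satisfiability.

```python
def reduce_instance(clauses):
    """Shrink a CNF instance before solving: drop duplicate and subsumed clauses.
    Clauses are lists of signed integers (DIMACS-style literals)."""
    unique = {frozenset(c) for c in clauses}
    kept = []
    for c in sorted(unique, key=len):        # check short clauses first
        if not any(k <= c for k in kept):    # c is subsumed by a kept clause
            kept.append(c)
    return [sorted(c, key=abs) for c in kept]
```

For example, (x1 ∨ x2 ∨ x3) is subsumed by (x1 ∨ x2) and disappears, so the downstream DPLL/genetic hybrid works on a strictly smaller, equisatisfiable instance.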
Qinpei Zhao, Zhenyu A. Liao, Jiangfeng Li, Yang Shi, Qirong Tang
The development of location-based applications raises a new challenge: managing and visualizing large amounts of geo-tags presented on a web map. The visualization of geo-tags often leads to a clutter problem, especially in web-mapping systems. We present a new clustering method to reduce the amount of visual clutter. The method employs a split smart swap strategy, which has the advantage that it can be applied to given data only once for all map scales. We compare the proposed method to several other methods. Because it runs offline only once, the proposed method is well suited to the clutter problem.
{"title":"A Split Smart Swap Clustering for Clutter Problem in Web Mapping System","authors":"Qinpei Zhao, Zhenyu A. Liao, Jiangfeng Li, Yang Shi, Qirong Tang","doi":"10.1109/WI.2016.0070","DOIUrl":"https://doi.org/10.1109/WI.2016.0070","url":null,"abstract":"The development of location-based applications raises a new challenge to manage and visualize large amounts of geo-tags presented on a web map. The visualization of the geo-tags often leads to a clutter problem, especially in web-mapping systems. We present a new clustering method to reduce the amount of visual clutter. A split smart swap strategy, which has the advantage that it can be applied to a certain data only once at all map scales, is employed in the method. We compare the proposed method to several other methods. Taking the advantage of the one-time running offline, the proposed method is more applicable for the clutter problem.","PeriodicalId":6513,"journal":{"name":"2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI)","volume":"14 1","pages":"439-443"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83063220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
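The split smart swap algorithm itself is not described in the record above. For context, the simplest baseline it competes with is grid-based aggregation, sketched below: geo-tags falling into the same screen-space cell are merged into one marker at their centroid. Unlike the paper's method, this must be rerun for every map scale, which is exactly the overhead a run-once-offline clustering avoids.

```python
def declutter(points, cell_size):
    """Grid-based aggregation baseline: merge geo-tags that fall into the same
    cell into a single marker placed at the members' centroid."""
    cells = {}
    for x, y in points:
        key = (int(x // cell_size), int(y // cell_size))
        cells.setdefault(key, []).append((x, y))
    return [(sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))
            for pts in cells.values()]
```

Two nearby tags collapse into one marker while a distant tag keeps its own, cutting the number of rendered markers at that zoom level.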
Probabilistic topic models are powerful techniques widely used for discovering topics or semantic content from large collections of documents. However, because topic models are entirely unsupervised, they may produce topics that are not understandable in applications. Recently, several knowledge-based topic models have been proposed that primarily use word-level domain knowledge to enhance topic coherence but ignore the rich information carried by entities (e.g., persons, locations, organizations) associated with the documents. Additionally, there exists a vast amount of prior (background) knowledge represented as ontologies and Linked Open Data (LOD), which can be incorporated into topic models to produce coherent topics. In this paper, we introduce a novel entity-based topic model, called EntLDA, to effectively integrate an ontology with an entity topic model and improve the topic modeling process. Furthermore, to increase the coherence of the identified topics, we introduce a novel ontology-based regularization framework, which is then integrated with the EntLDA model. Our experimental results demonstrate the effectiveness of the proposed model in improving the coherence of the topics.
{"title":"Discovering Coherent Topics with Entity Topic Models","authors":"M. Allahyari, K. Kochut","doi":"10.1109/WI.2016.0015","DOIUrl":"https://doi.org/10.1109/WI.2016.0015","url":null,"abstract":"Probabilistic topic models are powerful techniques which are widely used for discovering topics or semantic content from a large collection of documents. However, because topic models are entirely unsupervised, they may lead to topics that are not understandable in applications. Recently, several knowledge-based topic models have been proposed which primarily use word-level domain knowledge in the model to enhance the topic coherence and ignore the rich information carried by entities (e.g persons, location, organizations, etc.) associated with the documents. Additionally, there exists a vast amount of prior knowledge (background knowledge) represented as ontologies and Linked Open Data (LOD), which can be incorporated into the topic models to produce coherent topics. In this paper, we introduce a novel entity-based topic model, called EntLDA, to effectively integrate an ontology with an entity topic model to improve the topic modeling process. Furthermore, to increase the coherence of the identified topics, we introduce a novel ontology-based regularization framework, which is then integrated with the EntLDA model. Our experimental results demonstrate the effectiveness of the proposed model in improving the coherence of the topics.","PeriodicalId":6513,"journal":{"name":"2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI)","volume":"17 1","pages":"26-33"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88417419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
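The record above evaluates topic coherence without defining it; a widely used measure for this purpose is UMass coherence, which rewards topics whose top words frequently co-occur in corpus documents. The sketch below implements that standard metric (the paper may use a different measure; this is offered only as a concrete reference point).

```python
import math
from itertools import combinations

def umass_coherence(top_words, documents):
    """UMass topic coherence: sum over ordered top-word pairs of
    log((D(wi, wj) + 1) / D(wj)), where D counts documents containing the words.
    Higher (closer to 0) means the top words co-occur more, i.e., a more
    interpretable topic. Assumes every top word occurs in at least one document."""
    doc_sets = [set(doc) for doc in documents]
    def df(*words):
        return sum(all(w in d for w in words) for d in doc_sets)
    score = 0.0
    for wi, wj in combinations(top_words, 2):
        score += math.log((df(wi, wj) + 1) / df(wj))
    return score
```

On a tiny corpus where "a" and "b" co-occur in two of three documents, the pair contributes log(3/2); a topic whose top words never co-occur would score far more negative.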