Toward sophisticated detection with distributed triggers
Ling Huang, M. Garofalakis, J. Hellerstein, A. Joseph, N. Taft
Annual ACM Workshop on Mining Network Data, 2006. DOI: 10.1145/1162678.1162684

Recent research has proposed efficient protocols for distributed triggers, which can be used in monitoring infrastructures to maintain system-wide invariants and detect abnormal events with minimal communication overhead. To date, however, this work has been limited to simple thresholds on distributed aggregate functions like sums and counts. In this paper, we present initial results showing how these simple threshold triggers can enable sophisticated anomaly detection in near-real time, with modest communication overhead. We design a distributed protocol to detect "unusual traffic patterns" buried in an Origin-Destination network flow matrix that: (a) uses a Principal Components Analysis decomposition technique to detect anomalies via a threshold function on residual signals [10]; and (b) efficiently tracks this threshold function in near-real time using a simple distributed protocol. In addition, we speculate that such simple thresholding can be a powerful tool for a variety of monitoring tasks beyond the one presented here, and we propose an agenda for exploring additional sophisticated applications.
How to extract BGP peering information from the Internet Routing Registry
G. Battista, Tiziana Refice, M. Rimondini
Annual ACM Workshop on Mining Network Data, 2006. DOI: 10.1145/1162678.1162685

We describe an on-line service, and its underlying methodology, designed to extract BGP peerings from the Internet Routing Registry. Both the method and the service are based on: a consistency manager for integrating information across different registries, an RPSL analyzer that extracts peering specifications from RPSL objects, and a peering classifier that aims at understanding to what extent such peering specifications actually contribute to fully determining a peering. A peering graph is built with different levels of confidence. We compare the effectiveness of our method with the state of the art; the comparison demonstrates the quality of the proposed method.
Traffic classification using clustering algorithms
Jeffrey Erman, M. Arlitt, A. Mahanti
Annual ACM Workshop on Mining Network Data, 2006. DOI: 10.1145/1162678.1162679

Classification of network traffic using port-based or payload-based analysis is becoming increasingly difficult, with many peer-to-peer (P2P) applications using dynamic port numbers, masquerading techniques, and encryption to avoid detection. An alternative approach is to classify traffic by exploiting the distinctive characteristics of applications when they communicate on a network. We pursue this latter approach and demonstrate how cluster analysis can be used to effectively identify groups of similar traffic using only transport layer statistics. Our work considers two unsupervised clustering algorithms, namely K-Means and DBSCAN, that have not previously been used for network traffic classification. We evaluate these two algorithms and compare them to the previously used AutoClass algorithm, using empirical Internet traces. The experimental results show that both K-Means and DBSCAN work very well and much more quickly than AutoClass. Our results indicate that although DBSCAN has lower accuracy than K-Means and AutoClass, it produces better clusters.
Forensic analysis of autonomous system reachability
D. K. Lee, S. Moon, T. Choi, T. Jeong
Annual ACM Workshop on Mining Network Data, 2006. DOI: 10.1145/1162678.1162688

Security incidents have an adverse impact not only on end systems, but also on Internet routing, resulting in many out-of-reach prefixes. Previous work has looked at performance degradation in the data plane in terms of delay and loss. It has also been reported that the number of routing updates increased significantly, which could reflect increased routing instability in the control plane. In this paper, we perform a detailed forensic analysis of routing instability during known security incidents and present useful metrics for assessing damage to AS reachability. Any change in AS reachability is a direct indication of whether the AS fell victim to the security incident.

We choose the Slammer worm attack of January 2003 as a security incident for closer examination. For our forensic analysis, we use BGP routing data from RouteViews and RIPE. As a way to quantify AS reachability, we propose two metrics: the prefix count and the address count. The number of unique prefixes in routing tables fluctuates greatly during the attack, but it does not represent the real scope of the damage. We define the address count as the cardinality of the set of IP addresses an AS is responsible for, either as an origin or transit AS, and observe how address counts change over time. Together, these two metrics draw an accurate picture of how reachability to or through an AS has been affected. Though our analysis was done off-line, our methodology can be applied on-line for quick real-time assessment of AS reachability.
Diagnosis of TCP overlay connection failures using Bayesian networks
George J. Lee, L. Poole
Annual ACM Workshop on Mining Network Data, 2006. DOI: 10.1145/1162678.1162683

When failures occur in Internet overlay connections today, it is difficult for users to determine the root cause of failure. An overlay connection may require TCP connections between a series of overlay nodes to succeed, but accurately determining which of these connections has failed is difficult for users without access to the internal workings of the overlay. Diagnosis using active probing is costly and may be inaccurate if probe packets are filtered or blocked. To address this problem, we develop a passive diagnosis approach that infers the most likely cause of failure using a Bayesian network modeling the conditional probability of TCP failures given the IP addresses of the hosts along the overlay path. We collect failure data for 28.3 million TCP connections using the new PlanetSeer overlay monitoring system and train a Bayesian network for the diagnosis of overlay connection failures. We evaluate the accuracy of diagnosis using this Bayesian network on a set of overlay connections generated from observations of CoDeeN traffic patterns and find that our approach can accurately diagnose failures.
SC2D: an alternative to trace anonymization
J. Mogul, M. Arlitt
Annual ACM Workshop on Mining Network Data, 2006. DOI: 10.1145/1162678.1162686

Progress in networking research depends crucially on applying novel analysis tools to real-world traces of network activity. This often conflicts with privacy and security requirements; many raw network traces include information that should never be revealed to others.

The traditional resolution of this dilemma uses trace anonymization to remove secret information from traces, theoretically leaving enough information for research purposes while protecting privacy and security. However, trace anonymization can have both technical and non-technical drawbacks.

We propose an alternative to trace-to-trace transformation that operates at a different level of abstraction. Since the ultimate goal is to transform raw traces into research results, we say: cut out the middle step. We propose a model for shipping flexible analysis code to the data, rather than vice versa. Our model aims to support independent, expert, prior review of analysis code. We propose a system design using layered abstraction to provide both ease of use and ease of verification of privacy and security properties. The system would provide pre-approved modules for common analysis functions. We hope our approach could significantly increase the willingness of trace owners to share their data with researchers. We have loosely prototyped this approach in previously published research.
SVM learning of IP address structure for latency prediction
Robert Beverly, K. Sollins, A. Berger
Annual ACM Workshop on Mining Network Data, 2006. DOI: 10.1145/1162678.1162682

We examine the ability to exploit the hierarchical structure of Internet addresses in order to endow network agents with predictive capabilities. Specifically, we consider Support Vector Machines (SVMs) for predicting round-trip latency to random network destinations the agent has not previously interacted with. We use kernel functions to transform the structured, yet fragmented and discontinuous, IP address space into a feature space amenable to SVMs. Our SVM approach is accurate, fast, suited to on-line learning, and generalizes well. SVM regression on a large, randomly collected data set of 30,000 Internet latencies yields a mean prediction error of 25 ms using only 20% of the samples for training. Our results are promising for equipping end-nodes with intelligence for service selection, user-directed routing, resource scheduling, and network inference. Finally, feature selection analysis finds that the eight most significant IP address bits provide surprisingly strong discriminative power.
Mining web logs to debug distant connectivity problems
Emre Kıcıman, D. Maltz, M. Goldszmidt, John C. Platt
Annual ACM Workshop on Mining Network Data, 2006. DOI: 10.1145/1162678.1162680

Content providers base their business on their ability to receive and answer requests from clients distributed across the Internet. Since disruptions in the flow of these requests directly translate into lost revenue, there is tremendous incentive to diagnose why some requests fail and prod the responsible parties into corrective action. However, a content provider has only limited visibility into the state of the Internet outside its domain. Instead, it must mine failure diagnoses from available information sources to infer what is going wrong and who is responsible.

Our ultimate goal is to help Internet content providers resolve reliability problems in the wide-area network that affect end-user perceived reliability. We describe two algorithms that represent our first steps toward enabling content providers to extract actionable debugging information from their logs, and we present the results of applying the algorithms to a week's worth of logs from a large content provider, during which time it handled over 1 billion requests originating from more than 10,000 ASes.
Privacy-preserving performance measurements
M. Roughan, Yin Zhang
Annual ACM Workshop on Mining Network Data, 2006. DOI: 10.1145/1162678.1162687

Internet performance is an issue of great interest, but it is not trivial to measure. A number of commercial companies try to measure it, as do RIPE and many individual Internet Service Providers. However, all are hampered in their efforts by a fear of sharing such sensitive information. Customers make decisions about which provider to use based on such measurements, so service providers certainly do not want such data to be public (except in the case of the top provider); at the same time, it is in everyone's interest to have good metrics, in order to reduce the risk of large network problems and to test the effect of proposed network improvements.

This paper shows that it is possible to have your cake and eat it too. Providers (and other interested parties) can make such measurements and compute Internet-wide metrics securely, in the knowledge that their private data is never shared and so cannot be abused.
Bayesian detection of router configuration anomalies
Khalid El-Arini, Kevin S. Killourhy
Annual ACM Workshop on Mining Network Data, 2005. DOI: 10.1145/1080173.1080190

Problems arising from router misconfigurations cost time and money. The first step in fixing such misconfigurations is finding them. In this paper, we propose a method for detecting misconfigurations that does not depend on an a priori model of what constitutes a correct configuration. Our hypothesis is that uncommon or unexpected misconfigurations in router data can be identified as statistical anomalies within a Bayesian framework. We present a detection algorithm based on this framework, and show that it is able to detect errors in the router configuration files of a university network.