Policies are a familiar approach to preserving the security and privacy of Web content and services. This paper addresses policy-based constrained access to home content and services in the context of distributed but connected devices. A community concept equipped with a trust metric is introduced to facilitate such restricted access. The community structure is maintained in a knowledge base built with the Web Ontology Language (OWL), and the policies are formulated in the Semantic Web Rule Language (SWRL). A reasoner then executes the policies to derive access authorization decisions. We provide a prototype implementation of the complete scenario, in which a community member can download videos from the owner's home devices through a Web application. In addition, the paper critically examines several challenges the proposed approach raises with regard to implementation.
{"title":"Policy based access for home contents and services","authors":"M. Chowdhury, Sarfraz Alam, Josef Noll","doi":"10.1145/1456223.1456288","DOIUrl":"https://doi.org/10.1145/1456223.1456288","url":null,"abstract":"Policies are familiar approach to preserve security and privacy of the Web contents and services. This paper is going to address the policy based constrained access to the home contents and services but in the context of distributed but connected devices. A community concept equipped with trust metric is introduced to facilitate such restricted access. The community structure is maintained through a knowledge base exploiting Web Ontology Language and the policies are formulated using Semantic Web Rule Language. Reasoner then executes the policies to derive the access authorization results. In this paper, we provide a prototypical implementation of the whole scenario where a community member can download videos from the owner's home devices through a Web application. Besides, this paper critically investigates several challenges of the proposed approach with regard to various implementation issues.","PeriodicalId":309453,"journal":{"name":"International Conference on Soft Computing as Transdisciplinary Science and Technology","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134080246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
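The community/trust policy idea above can be illustrated with a minimal sketch. The classes, names, and thresholds below are hypothetical, not taken from the paper, which encodes such rules in SWRL and evaluates them with a reasoner; the plain-Python check only mirrors the shape of one such rule.

```python
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    community: str
    trust: float          # e.g. 0.0 (unknown) .. 1.0 (fully trusted)

@dataclass
class Resource:
    name: str
    owner_community: str
    required_trust: float

def may_access(member: Member, resource: Resource) -> bool:
    """Mirrors a hypothetical rule of the form:
    Member(?m) ^ inCommunity(?m, ?c) ^ ownedBy(?r, ?c) ^ trust(?m, ?t) ^
    swrlb:greaterThanOrEqual(?t, ?min) -> mayAccess(?m, ?r)
    """
    return (member.community == resource.owner_community
            and member.trust >= resource.required_trust)

alice = Member("alice", "family", 0.9)
video = Resource("holiday.mp4", "family", 0.5)
print(may_access(alice, video))   # True: same community, sufficient trust
```

In the paper's architecture the same decision would come from the reasoner applying the SWRL policy over the OWL knowledge base, rather than from hand-written code.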
The Linear Ordering Problem (LOP) is a well-known optimization problem, attractive for its complexity (it is NP-hard), its rich collection of test data, and its variety of real-world applications. In this paper, we investigate the usage and performance of two variants of Genetic Algorithms on the LOP: Mutation-Only Genetic Algorithms and Higher Level Chromosome Genetic Algorithms. Both methods are tested and evaluated on a collection of real-world and artificial LOP instances.
{"title":"Evolving feasible linear ordering problem solutions","authors":"P. Krömer, V. Snás̃el, J. Platoš","doi":"10.1145/1456223.1456293","DOIUrl":"https://doi.org/10.1145/1456223.1456293","url":null,"abstract":"Linear Ordering Problem (LOP) is a well know optimization problem attractive for its complexity (it is a NP-hard problem), rich collection of testing data and variety of real world applications. In this paper, we investigate the usage and performance of two variants of Genetic Algorithms - Mutation Only Genetic Algorithms and Higher Level Chromosome Genetic Algorithms - on the Linear Ordering Problem. Both methods are tested and evaluated on a collection of real world and artificial LOP instances.","PeriodicalId":309453,"journal":{"name":"International Conference on Soft Computing as Transdisciplinary Science and Technology","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133483503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
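The LOP asks for a permutation of the rows and columns of a matrix that maximizes the sum of entries above the diagonal. A mutation-only GA for it can be sketched as follows; this is an illustrative toy, not the authors' implementation, and the matrix and parameters are made up.

```python
import random

def lop_fitness(perm, C):
    """Sum of matrix entries above the diagonal after reordering by perm."""
    n = len(perm)
    return sum(C[perm[i]][perm[j]] for i in range(n) for j in range(i + 1, n))

def mutation_only_ga(C, pop_size=30, generations=200, seed=0):
    """Toy mutation-only GA for the LOP: keep the best half each
    generation, and breed children by a single swap mutation."""
    rng = random.Random(seed)
    n = len(C)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: lop_fitness(p, C), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for p in survivors:
            child = p[:]
            i, j = rng.sample(range(n), 2)        # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda p: lop_fitness(p, C))

C = [[0, 5, 3],
     [1, 0, 4],
     [2, 1, 0]]
best = mutation_only_ga(C)
print(best, lop_fitness(best, C))
```

For this small matrix the optimum is the identity ordering with an above-diagonal sum of 12 (5 + 3 + 4), which the GA finds quickly; real LOP benchmarks are far larger.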
S. Stojanov, Ivan Ganchev, M. O'Droma, D. Mitev, I. Minov
A software architecture for context-aware mLearning provision via InfoStations within a University Campus is presented. The multi-agent nature of the proposed architecture receives particular attention.
{"title":"Multi-agent architecture for context-aware mLearning provision via InfoStations","authors":"S. Stojanov, Ivan Ganchev, M. O'Droma, D. Mitev, I. Minov","doi":"10.1145/1456223.1456334","DOIUrl":"https://doi.org/10.1145/1456223.1456334","url":null,"abstract":"A software architecture for context-aware mLearning provision via InfoStations within a University Campus is presented. The multi-agent nature of the proposed architecture receives particular attention.","PeriodicalId":309453,"journal":{"name":"International Conference on Soft Computing as Transdisciplinary Science and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133417680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Databases have grown increasingly large, and the data they contain is increasingly bulky. Knowledge extraction has therefore become a significant problem, requiring multiple techniques for processing the available data in order to extract the information it contains. We are particularly concerned with data available on the Web. For data exchange on the Internet, XML plays an increasingly important role and has become the dominant standard for handling huge volumes of electronic documents. We focus on extracting knowledge from complex tree structures such as XML documents. Because such documents are heterogeneous and have complex structures, querying their resources is difficult, and automatic tools are badly needed. We consider in particular the problem of constructing a mediator schema, whose role is to provide the necessary information about the structure of the resources and through which the data can be queried. In this paper, we present a new approach to schema mining, called FFTM, in which we focus on soft embedding constraints in order to extract more relevant knowledge; crisp methods often discard interesting approximate patterns. To this end, we adopt fuzzy constraints for discovering and validating frequent substructures in a large collection of semi-structured data, where both the patterns and the data are modeled as labeled trees. The FFTM approach has been tested and validated on synthetic and XML document databases. The experimental results show that our approach is highly relevant and mitigates the shortcomings of the crisp approach.
{"title":"FFTM: optimized frequent tree mining with soft embedding constraints on siblings","authors":"M. Sghaier, S. Yahia, Anne Laurent, M. Teisseire","doi":"10.1145/1456223.1456309","DOIUrl":"https://doi.org/10.1145/1456223.1456309","url":null,"abstract":"Databases have become increasingly large and the data they contain is increasingly bulky. Thus the problem of knowledge extraction has become very significant and requires multiple techniques for processing the data available in order to extract the information contained from it. We particularly consider the data available on the web. Regarding the problem of the data exchange on the internet, XML is playing an increasing important role in this issue and has become a dominating standard proposed to deal with huge volumes of electronic documents. We are especially involved in extracting knowledge from complex tree structures such as XML documents. As they are heterogeneous and with complex structures, the resources available in such documents present the difficulty of querying these data. In order to deal with this problem, automatic tools are of compelling need. We especially consider the problem of constructing a mediator schema whose role is to give the necassary information about the resources structure and through which the data can be queried. In this paper, we present a new approach, called FFTM, dealing with the problem of schema mining through which we particularly focused on the use of soft embedding concept in order to extract more relevant knowledge. Indeed, crisp methods often discard interesting approximate patterns. For this purpose, we have adopted fuzzy constraints for discovering and validating frequent substructures in a large collection of semi-structured data, where both patterns and the data are modeled by labeled trees. The FFTM approach has been tested and validated on synthetic and XML document databases. The experimental results obtained show that our approach is very relevant and palliates the problem of the crisp approach.","PeriodicalId":309453,"journal":{"name":"International Conference on Soft Computing as Transdisciplinary Science and Technology","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122704955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
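A soft sibling-embedding constraint of the kind the FFTM abstract describes can be pictured as a fuzzy membership degree on the gaps between matched siblings: adjacent matches count fully, and the degree decays as more siblings are skipped. This is an illustrative sketch only; the membership functions and parameters below are assumptions, not FFTM's actual definitions.

```python
def sibling_gap_degree(positions, max_gap=3):
    """Fuzzy degree to which matched sibling nodes are 'close': 1.0 for
    adjacent matches, decreasing linearly to 0.0 once the gap between
    consecutive matched positions reaches max_gap (hypothetical shape)."""
    degree = 1.0
    for a, b in zip(positions, positions[1:]):
        gap = b - a - 1                   # number of siblings skipped
        degree = min(degree, max(0.0, 1.0 - gap / max_gap))
    return degree

print(sibling_gap_degree([0, 1, 2]))      # 1.0  (no siblings skipped)
print(sibling_gap_degree([0, 3]))         # ~0.33 (two siblings skipped)
```

A crisp constraint would simply reject the second occurrence; keeping a graded degree lets a fuzzy support count retain such approximate patterns, which is the point the abstract makes against crisp methods.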
Il-Gu Jung, Eunjin Ko, Hyun-Chul Kang, Gilhaeng Lee
Recently, many voice quality measurement devices for VoIP (Voice over Internet Protocol) services have been developed to support high-quality VoIP communication. However, existing devices each adopt a single measurement method, using either PESQ (Perceptual Evaluation of Speech Quality) or the R-factor alone. The two methods differ in their usage and assumed range, which leads to slightly different results in the same test area for voice services. Actual VoIP service providers and the related regulatory organizations request that both algorithms be used at the same time when measuring voice quality, but doing so has been very expensive to develop because of the additional resources and subsidiary devices required. Considering VoIP service over wireless mobile Internet such as WiBro (Wireless Broadband), they also request that the hardware specification of the measurement device be similar to that of a VoIP user terminal. In this paper, we present a method that runs both algorithms simultaneously on one physical measurement device, for more accurate and reliable VoIP voice quality measurement. In addition, we introduce a way to develop such a device at relatively low cost.
{"title":"Voice quality measurement system for telephone service","authors":"Il-Gu Jung, Eunjin Ko, Hyun-Chul Kang, Gilhaeng Lee","doi":"10.1145/1456223.1456233","DOIUrl":"https://doi.org/10.1145/1456223.1456233","url":null,"abstract":"Recently, plenty of voice quality measuring devices of VoIP(Voice over Internet Protocol) service is developed to support high quality VoIP service for communication. However, the existing voice quality measuring devices select the unique measurement method, which uses PESQ(Perceptual Evaluation of Speech Quality) or R-factor only. However, the each voice quality measurement method differs from its usage and assumed range, which leads to the slightly different results at the same testing area for voice services. At the actual VoIP service business and related regulatory organizations, they request to use the above two types of algorithm at the same time for measuring voice quality. However, the development expenses are very high to use the above two types of algorithm at the same time for measuring voice quality due to additional resources and subsidiary devices. VoIP business and regulatory agencies also request the hardware specification of the voice quality measurement device is similar to that of VoIP service user terminal, considering VoIP service supply at the wireless mobile internet such as WiBro(Wireless Broadband). In this paper, we present the voice quality measurement method by using of the above two algorithms at the one physical measurement device at the same time for more accurate and reliable VoIP voice quality measurement. In addition, we introduce the development method of the voice quality measurement device with relatively low cost.","PeriodicalId":309453,"journal":{"name":"International Conference on Soft Computing as Transdisciplinary Science and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125831454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
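One practical wrinkle when combining the two methods above is that PESQ reports a MOS-type score directly, while the E-model produces an R-factor; comparing them on one device requires mapping R to an estimated MOS. The standard conversion from ITU-T G.107 is shown below (how this particular paper's device performs the comparison is not stated, so this is the generic formula, not necessarily theirs):

```python
def r_to_mos(r):
    """Map an E-model R-factor to an estimated MOS (ITU-T G.107 mapping)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

print(round(r_to_mos(93.2), 2))   # ≈ 4.41 (default E-model R value)
```

With both scores on a MOS scale, discrepancies between the PESQ and R-factor paths for the same call can be observed directly.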
M. Uzunova, D. Jolly, E. Nikolov, Kamel Boumediene
The main aim of this paper is to obtain the transfer function of a macroscopic traffic flow model using an exact analytical method for solving the nonlinear partial differential equations, presenting the transportation model as a distributed parameter system. Different analytical solution methods are reviewed, some of which yield an exact solution. Using this exact method, we directly obtain the plant transfer function for the traffic flow model viewed as a distributed parameter system. The physical phenomenon is described by partial differential equations that capture the distribution of vehicles on highways. The work concentrates on the transfer function because of its role in control system modeling. We verify the analytical result by simulating the model and analyzing its time and frequency characteristics.
{"title":"The macroscopic LWR model of the transport equation viewed as a distributed parameter system","authors":"M. Uzunova, D. Jolly, E. Nikolov, Kamel Boumediene","doi":"10.1145/1456223.1456339","DOIUrl":"https://doi.org/10.1145/1456223.1456339","url":null,"abstract":"The main aim of this paper is the obtaining of transfer function of the macroscopic traffic flow model using an exact analytical method of solution of the non-linear partial differential equations, presenting the transportation problem model as distributed parameter system. The different analytical methods of solution are shown, some of them giving an exact solution. Using this exact method we obtain directly the plant transfer function for the traffic flow model viewed as a distributed parameter system. The entire physical phenomenon is presenting through the partial differential equations whose show the distribution in vehicles on the high ways. All the research is devoted on the transfer function result because of the control system modeling. In the presenting paper we shown the verification of the analytical result of the model simulation thought the time and frequency characteristics analysis.","PeriodicalId":309453,"journal":{"name":"International Conference on Soft Computing as Transdisciplinary Science and Technology","volume":"148 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127466405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
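To see how a transfer function can emerge from the LWR model, the simplest linearized case can be sketched as follows. This is a standard textbook simplification, not the paper's exact derivation: linearizing the conservation law around a nominal density $\rho_0$ and taking the Laplace transform in time reduces a road segment of length $L$ to a pure transport delay.

```latex
% LWR conservation law, with flow q depending on density:
\[
  \frac{\partial \rho}{\partial t} + \frac{\partial q(\rho)}{\partial x} = 0,
  \qquad q(\rho) = \rho\, v(\rho)
\]
% Linearize around rho_0 (perturbation delta-rho, kinematic wave speed c_0):
\[
  \frac{\partial\, \delta\rho}{\partial t}
  + c_0\, \frac{\partial\, \delta\rho}{\partial x} = 0,
  \qquad c_0 = q'(\rho_0)
\]
% Laplace transform in t gives a first-order ODE in x, hence a delay:
\[
  s\, \delta\hat\rho(x,s) + c_0\, \frac{d\, \delta\hat\rho(x,s)}{dx} = 0
  \;\Longrightarrow\;
  G(s) = \frac{\delta\hat\rho(L,s)}{\delta\hat\rho(0,s)} = e^{-sL/c_0}
\]
```

The delay form $e^{-sL/c_0}$ is what makes the frequency-domain verification mentioned in the abstract natural: magnitude 1 at all frequencies, with linearly growing phase lag.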
In this paper, we propose to solve the multi-criteria hybrid flow shop scheduling problem using a hybrid method. On the one hand, a meta-heuristic based on an ant colony optimization algorithm generates feasible solutions; on the other hand, an aggregation multi-criteria method based on fuzzy logic assists the decision-maker in expressing preferences over the considered objective functions. The aggregation method uses the Choquet integral, which allows us to take into account the interactions between the different criteria. Experiments on randomly generated instances were conducted to test the effectiveness of our approach.
{"title":"Hybrid approach using ant colony optimization and fuzzy logic to solve multi-criteria hybrid flow shop scheduling problem","authors":"Safa Khalouli, F. Ghedjati, A. Hamzaoui","doi":"10.1145/1456223.1456236","DOIUrl":"https://doi.org/10.1145/1456223.1456236","url":null,"abstract":"In this paper, we propose to solve the hybrid flow shop scheduling problem with multiple criteria, by using an hybrid method. This latter consists on the one hand, to use a meta-heuristic based on ant colony optimization algorithm to generate feasible solutions and, on the other hand, an aggregation multi-criteria method based on fuzzy logic is used to assist the decision-maker to express his preferences according to the considered objective functions. The aggregation method uses the Choquet integral which allows us to take into account the interactions between the different criteria. Experiments based on randomly generated instances were conducted to test the effectiveness of our approach.","PeriodicalId":309453,"journal":{"name":"International Conference on Soft Computing as Transdisciplinary Science and Technology","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121886048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
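The discrete Choquet integral mentioned above aggregates criterion scores with respect to a capacity (a set function), which is what lets it model interactions between criteria. A minimal sketch follows; the criteria names and capacity values are invented for illustration, not taken from the paper.

```python
def choquet(values, capacity):
    """Discrete Choquet integral of criterion scores w.r.t. a capacity.
    values:   {criterion: score in [0, 1]}
    capacity: maps frozensets of criteria to [0, 1]; must be monotone,
              with capacity(empty) = 0 and capacity(all criteria) = 1."""
    items = sorted(values.items(), key=lambda kv: kv[1])   # ascending scores
    total, prev = 0.0, 0.0
    remaining = set(values)
    for crit, score in items:
        # weight each marginal increase by the capacity of the criteria
        # whose scores are at least this high
        total += (score - prev) * capacity[frozenset(remaining)]
        prev = score
        remaining.discard(crit)
    return total

# Toy capacity over two scheduling criteria with positive interaction
# (their joint weight, 1.0, exceeds the sum of individual weights):
cap = {
    frozenset(): 0.0,
    frozenset({"makespan"}): 0.3,
    frozenset({"tardiness"}): 0.4,
    frozenset({"makespan", "tardiness"}): 1.0,
}
print(choquet({"makespan": 0.6, "tardiness": 0.8}, cap))   # 0.6*1.0 + 0.2*0.4
```

With an additive capacity the Choquet integral reduces to a weighted mean; the super-additive capacity here rewards solutions that score well on both criteria simultaneously.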
Over the past several years, Geographic Information Systems (GIS) have enabled the storage, editing, maintenance, dissemination, display, and access of geospatial data. Ontologies describe the knowledge of an application at a level independent of its internal structure. Existing techniques for the impedance model of each road segment, as used in route-finding algorithms, are inadequate: most impedance models are based on a single criterion such as distance or time, which does not give proper results. This research therefore investigates how road-segment impedance criteria can represent the real world in GIS using an ontology-driven architecture. To address this, several criteria are first taken into account, both quantitative and qualitative (such as traffic and climate). Second, the Analytic Network Process (ANP) method is proposed to weight these criteria. Finally, the impedance model is implemented and verified with real road network data.
{"title":"Ontology driven road network analysis based on analytical network process technique","authors":"A. Sadeghi-Niaraki, Kyehyun Kim, Cholyoung Lee","doi":"10.1145/1456223.1456349","DOIUrl":"https://doi.org/10.1145/1456223.1456349","url":null,"abstract":"During the past several years, Geographic information system (GIS) has allowed storage, editing, maintenance, dissemination, display and access of geospatial data. Ontologies are a level of description of the knowledge of an application that is independent of internal structure. Existing techniques related to the impedance model for each road segment which is utilized in a route finding algorithm is inadequate. The most impedance models are based on one-dimensional criterion such as distance or time which does not give proper results. Hence, this research investigates how impedance criteria of road segment can represent the real world in GIS using ontology driven architecture. To address this, first several criteria including quantitative as well as qualitative criteria such as traffic and climate are taken into account. Second, to weight these criteria, the Analytical Network Process (ANP) method is proposed. Finally, the impedance model is implemented and verified with real road network data.","PeriodicalId":309453,"journal":{"name":"International Conference on Soft Computing as Transdisciplinary Science and Technology","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123292806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
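ANP derives criterion weights from pairwise comparison matrices via their principal eigenvector (and in the full method assembles these blocks into a supermatrix). The core weighting step can be sketched with power iteration; the comparison values below are illustrative, not from the paper.

```python
def priority_weights(M, iters=100):
    """Principal-eigenvector priorities of a pairwise comparison matrix,
    computed by power iteration and normalized to sum to 1. This is the
    basic AHP/ANP weighting step; the full ANP feeds such vectors into
    a supermatrix to capture inter-criteria dependence."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Hypothetical comparisons among three impedance criteria:
# distance judged 3x as important as traffic and 5x as important as climate.
M = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]
print([round(x, 3) for x in priority_weights(M)])
```

The resulting weights would then scale each criterion's contribution to a road segment's overall impedance in the route-finding algorithm.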
Patient monitoring in medical applications such as the diagnosis of sleep disorders commonly relies on invasive monitoring equipment such as pulse oximetry and polysomnography (PSG), but attaching these devices to the patient's body disturbs sleep and therefore compromises the results. Furthermore, invasive approaches often fail to monitor continuously, because the subject can unconsciously pull the devices off during sleep. This paper presents an automated, noninvasive video monitoring approach for analyzing (covered) human activity under persistent heavy occlusion. The proposed method is model-based, employing both static shape features and dynamic motion features to suppress false positives, identify human activity, and iteratively improve the covered-human pose estimation.
{"title":"Covered body analysis in application to patient monitoring","authors":"Ching-Wei Wang","doi":"10.1145/1456223.1456251","DOIUrl":"https://doi.org/10.1145/1456223.1456251","url":null,"abstract":"Patient monitoring in medical applications such as diagnosis of sleep disorders commonly adopts invasive monitoring equipments such as pulse oximetry and polysomnogram (PSG), but their attachment to the patient's body disturb sleep and therefore compromise results. Furthermore, the invasive approaches often fail to monitor continuously because the devices can be pulled off by the subject during sleep unconsciously. This paper presents an automated noninvasive video monitoring approach to analyze (covered) human activity in conditions with persistent heavy occlusion. The proposed method is a model-based approach, employing both static shape features and dynamic motion features to suppress false positive detection, to identify human activity, and to self-improve the covered human pose estimation.","PeriodicalId":309453,"journal":{"name":"International Conference on Soft Computing as Transdisciplinary Science and Technology","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123403442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pooya Moradian Zadeh, Maryam Mohi, M. S. Moshkenani
The growth of Internet usage and services has given social networks a strong role. Social networks are environments that use the Internet as an interface to provide relations between people, in other words, to interchange data and information between persons; email and instant messengers are popular examples. Since these environments are continuously developed, revised, and viewed by humans, they are good targets for mining. In this paper, our target is the topic of the information exchanged between users in such networks. Our method uses a hierarchical dictionary of semantically related topics and words that is mapped onto a graph; keywords extracted from the social network's content are then compared to the graph nodes, and the dependency between them is computed. This model can be used in many applications such as marketing, advertising, and high-risk group detection.
{"title":"Mining social network for extracting topic of textual conversations","authors":"Pooya Moradian Zadeh, Maryam Mohi, M. S. Moshkenani","doi":"10.1145/1456223.1456273","DOIUrl":"https://doi.org/10.1145/1456223.1456273","url":null,"abstract":"Developing internet usage and services urged play strong role for social network. Social networks are environment which uses internet as interface to provide relations between people, in the other word to interchange data and information between persons. Email and Instant Messengers are popular examples of them. Whereas these environments are continuously and instantly developing, revising and viewing by humans, they are good places for mining. In this paper, the topic of exchanged information between users in this type of networks will be our target. Our method is to use a hierarchical dictionary of semantically related topics and words that is mapped to a graph. Then extracted keywords from context of social network area compared to graph nodes and the dependency between them will be computed. This model can be used in many applications such as marketing, advertising and high-risk group detection.","PeriodicalId":309453,"journal":{"name":"International Conference on Soft Computing as Transdisciplinary Science and Technology","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116619670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
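The matching step described above, comparing extracted keywords against nodes of a topic-word graph, might look like the following sketch. The graph, the inverse-distance scoring, and all vocabulary here are assumptions for illustration; the paper's actual dependency computation is not specified in the abstract.

```python
from collections import deque

def topic_scores(keywords, edges, topics):
    """Score each candidate topic by graph proximity to the extracted
    keywords: each keyword contributes 1/(1+d) for its shortest-path
    distance d to the topic node (unreachable keywords contribute 0)."""
    adj = {}
    for a, b in edges:                      # undirected dictionary graph
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)

    def dist(src, dst):
        seen, queue = {src}, deque([(src, 0)])
        while queue:
            node, d = queue.popleft()
            if node == dst:
                return d
            for nxt in adj.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, d + 1))
        return None                          # unreachable

    scores = {}
    for t in topics:
        ds = (dist(k, t) for k in keywords)
        scores[t] = sum(1.0 / (1 + d) for d in ds if d is not None)
    return scores

# Hypothetical fragment of a hierarchical topic dictionary:
edges = [("sports", "football"), ("sports", "score"),
         ("finance", "stock"), ("finance", "price")]
print(topic_scores(["football", "score"], edges, ["sports", "finance"]))
```

Keywords pulled from a conversation would thus vote for nearby topics in the dictionary graph, and the highest-scoring topic labels the conversation.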