2016 IEEE Ninth International Conference on Software Testing, Verification and Validation Workshops (ICSTW)

SIMULTATE: A Toolset for Fault Injection and Mutation Testing of Simulink Models
Ingo Pill, Ivan Rubil, F. Wotawa, M. Nica (DOI: 10.1109/ICSTW.2016.21)

The advantages of fault injection techniques and related methodologies such as mutation testing have been gaining attention in industry as well, as is evident from standards like ISO 26262 that recommend such approaches for verifying an automotive system's safety aspects. Besides a well-established theoretical background, the availability of tools is a key issue in leveraging fault injection for the development of industrial, possibly safety-critical applications, e.g., in an automotive context. We propose SIMULTATE, an open-source toolset for injecting faults into and performing mutation testing on Simulink models. To complement the provided mutation/fault-injection operators, it allows users to define their own operators in Matlab, and it further provides a Python interface for easily deriving mutants, where the scope can also be restricted to selected model parts only. By controlling the activation of individual faults in a derived model, a designer can conveniently conduct mutation tests via a corresponding Python application.

Property-Based Testing with FsCheck by Deriving Properties from Business Rule Models
B. Aichernig, Richard Schumi (DOI: 10.1109/ICSTW.2016.24)

Previous work has demonstrated that property-based testing can successfully be applied to web services. For example, it has been shown that JSON schemas can be used to automatically derive test-case generators for web forms. This paper presents a test-case generation approach for web services that takes business rule models as input for property-based testing. We parse these models to automatically derive generators for sequences of web service requests together with their required form data. Most of the work in this field applies property-based testing in the context of functional programming. Here, we define our properties in an object-oriented style in C#, using its property-based testing tool FsCheck. We apply our method to the business rule models of an industrial web service application in the automotive domain.

Test Oracles and Test Script Generation in Combinatorial Testing
Peter M. Kruse (DOI: 10.1109/ICSTW.2016.11)

Combinatorial test design is a well-established practice for obtaining tests. Most approaches focus on the actual test generation and on minimizing test-suite size. For executing test cases, expected results and executable test scripts are required. We investigate both how to determine expected results for test cases, ideally in an automated fashion, and how to generate generic test scripts that allow combinatorial test suites to be executed. This paper provides a survey of current approaches to test oracles and test script generation in combinatorial testing. The Classification Tree Method serves as illustration and running example.

Do Exploratory Testers Need Formal Training? An Investigation Using HCI Techniques
Mark Micallef, C. Porter, A. Borg (DOI: 10.1109/ICSTW.2016.31)

Exploratory software testing is an activity which can be carried out by both untrained and formally trained testers. We personify the former as Carmen and the latter as George. In this paper, we outline a joint research exercise between industry and academia that contributes to the body of knowledge by (1) proposing a data gathering and processing methodology which leverages HCI techniques to characterise the differences in strategies utilised by Carmen and George when approaching an exploratory testing task, and (2) presenting the findings of an initial study among twenty participants, ten formally trained testers and ten with no formal training. Our results shed light on the types of strategies used by each type of tester, how they are used, the effectiveness of each type of strategy in terms of finding bugs, and the types of bugs each tester/strategy combination uncovers. We also demonstrate how our methodology can be used to help assemble and manage exploratory testing teams in the real world.

Checking Experiments for Symbolic Input/Output Finite State Machines
A. Petrenko (DOI: 10.1109/ICSTW.2016.9)

After some sixty years of development, the theory of checking experiments for FSMs continues to attract considerable attention from the research community. One reason is that it offers test generation techniques which, under well-defined assumptions, guarantee complete fault coverage for a given fault model of a specification FSM. Checking experiments have already been extended to remove the assumptions that the specification Mealy machine needs to be reduced, deterministic, and completely specified, while keeping the input, output, and state sets finite. In our recent work, we investigated possibilities for removing the assumption about the finiteness of the input set, introducing the model of FSMs with symbolic inputs. In this paper, we report on our efforts to further lift the theory of checking experiments to Mealy machines with symbolic inputs and symbolic outputs. The former are predicates defined over input variables; the latter are output variable valuations computed by assignments on input variables. Both types of variables can have large or even infinite domains. The inclusion of assignments in the model complicates fault detection, as different assignments may produce the same output valuations for some input valuations. We address this issue by using a transition cover enhanced with assignment-discriminating predicates, which specify symbolic inputs on which the assignments produce different outputs. The enhanced transition cover is then used in checking experiments, which can detect assignment/output faults and, under certain assumptions, more general transition faults.

The Effect of Team Exploratory Testing -- Experience Report from F-Secure
Paula Raappana, Soili Saukkoriipi, I. Tervonen, M. Mäntylä (DOI: 10.1109/ICSTW.2016.13)

Practitioners have found exploratory testing (ET) to be cost effective in detecting defects. The team exploratory testing (TET) approach scales exploratory testing to the team level. This paper reports on the effectiveness of TET and on the experiences of TET session participants. The research was carried out at F-Secure Corporation, where two projects were investigated. The results show that the TET sessions are effective and, measured in the number of defects detected, more efficient than other testing methods in the company. Furthermore, the TET sessions found more usability defects than other methods. The session participants saw benefits especially in the joint discussion and in learning the target application. However, with respect to test effectiveness and efficiency we should be cautious, as further studies are needed to compensate for the limitations of this work.

Measuring Effectiveness of Mutant Sets
Rahul Gopinath, Mohammad Amin Alipour, Iftekhar Ahmed, Carlos Jensen, Alex Groce (DOI: 10.1109/ICSTW.2016.45)

Redundancy in mutants, where multiple mutants end up producing the same semantic variant of a program, is a major problem in mutation analysis. Hence, a measure of effectiveness that accounts for redundancy is an essential tool for evaluating mutation tools, new operators, and reduction techniques. Previous research suggests using the size of the disjoint mutant set as an effectiveness measure. We start from a simple premise: test suites need to be judged both on the number of unique variations in specifications they detect (as a variation measure) and on how good they are at detecting hard-to-find faults (as a measure of thoroughness). Hence, any set of mutants should be judged by how well it supports these measurements. We show that the disjoint mutant set has two major inadequacies when used as a measure of effectiveness in variation: the single variant assumption and the large test suite assumption. These stem from its reliance on minimal test suites. We show that when used to emulate hard-to-find bugs (as a measure of thoroughness), the disjoint mutant set discards useful mutants. We propose two alternatives: one measures variation and is not vulnerable to either the single variant assumption or the large test suite assumption; the other measures thoroughness. We provide a benchmark of these measures using diverse tools.

Academic and Industrial Software Testing Conferences: Survey and Synergies
Árpád Beszédes, László Vidács (DOI: 10.1109/ICSTW.2016.30)

As in any other profession, conferences, workshops, and similar events are an efficient way to exchange ideas and to network in software testing. This is true both for professional testers and for researchers working in the testing area. However, these two groups usually look for different kinds of events: a tester likes to attend 'industrial' (sometimes called practitioner's or user) conferences, whereas a researcher is more likely interested in 'academic' (in other words, scientific or research) conferences. Although there are notable exceptions, this separation is substantial, which hinders successful academia-industry collaboration and communication about the demand for and supply of research in software testing. This paper reviews 101 conferences: two thirds are academic, the rest industrial. Besides providing this reasonably comprehensive list, we analyze visible synergies, such as events that have a mixed Program Committee and offer a program with elements from both sides. We found only a handful of such events, but these can serve both as opportunities for attendees who wish to broaden their perspectives and as models for organizers of future conferences.

Systematic Analysis of Practical Issues in Test Automation for Communication Based Systems
Nikolay Tcholtchev, Martin A. Schneider, I. Schieferdecker (DOI: 10.1109/ICSTW.2016.32)

This paper is about issues experienced while testing large-scale industrial products of safety- and security-critical relevance. The challenges in testing (several thousand requirements across several product variants and various configurations) were addressed by test execution automation. However, since principal testing concepts as well as architectural concepts were ignored or poorly implemented, the test automation activities faced various difficulties within the considered projects. This paper presents these issues in an abstracted manner and discusses possible solutions.

Should We Care about "Don't Care" Testing Inputs?: Empirical Investigation of Pair-Wise Testing
S. Vilkomir, Galen Pennell (DOI: 10.1109/ICSTW.2016.8)

When test sets are generated according to a coverage criterion, it is often sufficient to fix values for only some of the inputs to achieve 100% coverage. The other inputs are immaterial, and coverage is achieved with any of their values ("don't care" values). The research question is: how do these "don't care" values (which can reach up to 20% of all input values) influence the effectiveness and other characteristics of test sets? This paper empirically investigates this question for pair-wise test sets applied to logical expressions of different sizes and complexities. We analyzed variations in the effectiveness and in the Modified Condition/Decision Coverage (MC/DC) levels of pair-wise test sets. Our results show that these variations are low, so pair-wise test sets with different "don't care" values are very stable. Any test set with randomly selected "don't care" values can equally be used for practical testing.