Sketch-based gradual model-driven development
Peiyuan Li, Naoyasu Ubayashi, Di Ai, Yu Ning Li, Shintaro Hosoai, Yasutaka Kamei
DOI: 10.1145/2666581.2666595 (published 2014-11-16)

This paper proposes an abstraction-aware reverse engineering method in which a developer simply marks important code regions, as if drawing a quick sketch on the program listing. A support tool called iArch slices the program from the marked program points and generates an abstract design model faithful to the developer's intention. The developer can modify the design model and regenerate the code while preserving the abstraction level and traceability. Archface, an interface mechanism between design and code, plays an important role in abstraction-aware traceability checking. If the developer wants a more concrete design model from the code, he or she only has to add marks to the program listing, enabling a gradual transition to a model-driven development style.

Program structure aware fault localization
Heng Li, Yuzhen Liu, Zhenyu Zhang, Jian Liu
DOI: 10.1145/2666581.2666593 (published 2014-11-16)

Software testing is an effective method for showing the presence of bugs in a program, while debugging, the task of removing a bug, is never easy. To facilitate debugging, statistical fault localization automatically estimates the location of faults by analyzing program executions to narrow down the suspicious region. We observe that program structure has a strong impact on the assessed suspiciousness of program elements, yet existing techniques pay inadequate attention to this problem. In this paper, we highlight the biases that program structure introduces into fault localization and propose a method to address them. Our method boosts a fault localization technique by adapting it to different program structures within a software development process: it collects the suspiciousness of program elements when locating historical faults, statistically captures the biases caused by program structure, and removes this impact factor from a fault localization result. An empirical study on the Siemens test suite shows that our method can greatly improve the effectiveness of Tarantula, the most representative fault localization technique.

A new business model of custom software development for agile software development
Yoshihito Kuranuki, Tsuyoshi Ushio, Tsutomu Yasui, Susumu Yamazaki
DOI: 10.1145/2666581.2666584 (published 2014-11-16)

In this paper, we propose a new business model for the custom software development industry that utilizes agile software development to embrace change. The traditional business model usually does not work well with agile software development because it mandates up-front man-month estimation based on a fixed scope. Our new model offers a solution with two notable features: a fixed monthly subscription and a weekly agreement on what to deliver. We have successfully applied the new business model to more than 20 cases over 3 years, and this paper describes one of them.

Model checking partial software product line designs
Yufeng Shi, Ou Wei, Yu Zhou
DOI: 10.1145/2666581.2666589 (published 2014-11-16)

A software product line (SPL) maximizes commonality between software products to reduce cost and improve productivity; each product is represented by a selection of features that corresponds to particular customer requirements. SPLs have been widely applied in critical domains such as communications, automotive, and aerospace, so ensuring the correctness of such systems is of great importance. In this paper, we consider model checking partial software product line designs, i.e., incomplete designs in the early stages of software development, where the design decisions for a feature may be unknown. This enables design errors to be detected earlier, reducing the cost of later development of the final products. To this end, we first propose bilattice-based feature transition systems (BFTSs) for modeling partial software product line designs, which support the description of uncertainty and preserve features as a first-class notion. We then express system behavioral properties as ACTL formulas and define their semantics over BFTSs. Finally, to leverage the power of an existing model checking engine for verification, we provide procedures that translate BFTSs and ACTL formulas into the inputs of the symbolic model checker χChek. We implement our approach and illustrate its effectiveness on a benchmark from the literature.

An empirical study of BM25 and BM25F based feature location techniques
Zhendong Shi, J. Keung, Qinbao Song
DOI: 10.1145/2666581.2666594 (published 2014-11-16)

Feature location is a software comprehension activity that aims to identify the source code entities implementing a given functionality. Manual feature location is a labor-intensive task: developers must find the target entities among thousands of software artifacts. Recent research has developed automatic and semi-automatic methods, mainly based on Information Retrieval (IR) techniques, to help developers locate entities that are textually similar to the feature. In this paper, we focus on individual IR-based methods and try to find an IR technique well suited to feature location, which could then be chosen as part of a hybrid method to achieve good performance. We present two feature location approaches based on the BM25 algorithm and its variant BM25F. We compared the two algorithms with the Vector Space Model (VSM), the Unigram Model (UM), and Latent Dirichlet Allocation (LDA) on four open source projects. The results show that BM25 and BM25F, in their respective best configurations, are consistently better than the other IR methods (VSM, UM, and LDA) on the four selected software systems.

Scaling up analogy-based software effort estimation: a comparison of multiple hadoop implementation schemes
Passakorn Phannachitta, J. Keung, Akito Monden, Ken-ichi Matsumoto
DOI: 10.1145/2666581.2666582 (published 2014-11-16)

Analogy-based estimation (ABE) is one of the most time-consuming and compute-intensive methods in software development effort estimation. Optimizing ABE has been a dilemma: simplifying the procedure can reduce estimation performance, while increasing its complexity with more sophisticated theory may sacrifice the advantage of unlimited scalability for large data inputs. Motivated by the emergence of cloud computing technology in software applications, in this study we present three different implementation schemes based on Hadoop MapReduce that distribute the ABE process across multiple computing instances in a cloud environment. We experimentally compared the three MapReduce implementation schemes against our previously proposed GPGPU approach (named ABE-CUDA) on eight high-performance Amazon EC2 instances. The results show that the Hadoop solution can provide more computational resources and thereby extend the scalability of the ABE process. We recommend two of the Hadoop implementations (Hadoop Streaming and RHadoop) for accelerating compute-intensive software engineering tasks.

Software engineering for multi-tenancy computing challenges and implications
J. Ru, J. Grundy, J. Keung
DOI: 10.1145/2666581.2666585 (published 2014-11-16)

Multi-tenancy is a cloud computing phenomenon in which multiple instances of an application occupy and share resources from a large pool, allowing different users to run their own version of the same application, coexisting on the same hardware but in isolated virtual spaces. In this position paper we survey the current landscape of multi-tenancy and lay out the challenges and complexity of software engineering where multi-tenancy is involved. Multi-tenancy allows cloud service providers to utilise computing resources better and to offer customers more flexible services based on economies of scale, reducing overheads and infrastructure costs. Nevertheless, migrating from single-tenant applications to multi-tenancy poses major challenges that have not yet been fully explored in research or practice. In particular, the reengineering effort for multi-tenancy in Software-as-a-Service cloud applications involves many complex and important aspects that must be taken into consideration, such as security, scalability, scheduling, and data isolation. Our study emphasizes scheduling policies and cloud provisioning and deployment with regard to multi-tenancy issues. We employ CloudSim and MapReduce in our experiments to simulate and analyse multi-tenancy models, scenarios, performance, scalability, scheduling, and reliability on cloud platforms.

Toward a methodology to expose partially fixed concurrency bugs in modified multithreaded programs
To Tsui, Shangru Wu, W. Chan
DOI: 10.1145/2666581.2666592 (published 2014-11-16)

Many multithreaded programs incur concurrency bugs. A modified version of such a program, in which the exposed concurrency bug is deemed fixed, should be subjected to further testing to validate whether the bug has only been partially fixed. In this paper, we present a similarity-based regression testing methodology to address this problem. It is based on the notions of similar execution contexts of events and of bug signatures. To the best of our knowledge, it is also the first regression testing technique that manipulates thread schedules using a similarity-based active testing strategy.

Supporting clone analysis with tag cloud visualization
Manamu Sano, Eunjong Choi, Norihiro Yoshida, Yuki Yamanaka, Katsuro Inoue
DOI: 10.1145/2666581.2666586 (published 2014-11-16)

Many techniques have been developed for detecting code clones (i.e., duplicated code) in large-scale source code. The code clone research community is now gradually shifting its focus from detection to management (e.g., clone refactoring). During clone management, developers need to understand how and why code clones are scattered through the source code before deciding how to handle them. In this paper, we present a clone analysis tool with tag cloud visualization. The tool helps developers understand why code clones are concentrated in a particular part of a software system by generating tag clouds from the identifier names in its source code.

Software reliability analysis considering the variation of testing-effort and change-point
Syuan-Zao Ke, Chin-Yu Huang, K. Peng
DOI: 10.1145/2666581.2666588 (published 2014-11-16)

It is commonly recognized that software development is highly unpredictable and that software quality cannot easily be improved once a product is finished. During the software development life cycle (SDLC), project managers have to solve many technical and management issues, such as high failure rates, cost overruns, low quality, and late delivery. Consequently, to produce robust and reliable software on time and within budget, project managers and developers have to allocate limited development and testing effort appropriately. In the past, the distribution of testing-effort or manpower has typically been described by the Weibull or Rayleigh model. In practice, however, development environments or methods can change for various reasons, and such changes or variations in the development process have to be taken into consideration in software reliability modeling and prediction. In this paper, we study how to use the Parr-curve model with multiple change-points to depict the consumption of testing-effort, and how to perform software reliability analysis on top of it. The applicability and performance of the proposed model are demonstrated and assessed using real software failure data. Experimental results are analyzed and compared with existing models, showing that the proposed model gives better predictions.
