"Applying a Pairwise Coverage Criterion to Scenario-Based Testing" by L. D. Bousquet, Mickaël Delahaye, Catherine Oriat. DOI: 10.1109/ICSTW.2016.23
Some scenario-based testing approaches express a test suite as a regular expression (called a scenario or pattern), which specifies a set of valid sequences of operations in an abstract way. When the regular expression is "unfolded", test sequences are obtained. The unfolding is usually exhaustive, which can result in a combinatorial explosion. In this article, we explore a pairwise coverage criterion to select a subset of the test sequences satisfying the pattern, in order to decrease the number of test sequences. The originality of the approach lies in the fact that the pairwise criterion is applied to the instantiated method calls (and not to the parameters). We applied this strategy to generate unit tests for Java classes. The quality of the test suites is evaluated with a mutation analysis and compared to randomly generated test suites.
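To make the selection criterion concrete, here is a minimal Python sketch of the idea: unfold a toy scenario into candidate sequences of instantiated method calls, then greedily pick sequences until every ordered pair of calls covered by the exhaustive unfolding is also covered by the selection. The scenario encoding and the greedy heuristic are illustrative assumptions, not the paper's implementation.

```python
from itertools import product

def unfold(scenario):
    """Enumerate candidate sequences from a toy 'scenario':
    each position offers a choice of instantiated method calls."""
    return [list(seq) for seq in product(*scenario)]

def pairs(seq):
    """All ordered pairs of calls occurring in a sequence."""
    return {(a, b) for i, a in enumerate(seq) for b in seq[i + 1:]}

def greedy_pairwise_selection(candidates):
    """Greedily pick sequences until every pair covered by the
    exhaustive unfolding is covered by the selection."""
    target = set().union(*(pairs(s) for s in candidates))
    selected, covered = [], set()
    while covered < target:
        best = max(candidates, key=lambda s: len(pairs(s) - covered))
        selected.append(best)
        covered |= pairs(best)
    return selected

# Scenario: push(1) or push(2), then pop() or peek(), then clear()
scenario = [["push(1)", "push(2)"], ["pop()", "peek()"], ["clear()"]]
tests = greedy_pairwise_selection(unfold(scenario))
print(len(tests), "of", len(unfold(scenario)), "sequences selected")
```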
{"title":"Applying a Pairwise Coverage Criterion to Scenario-Based Testing","authors":"L. D. Bousquet, Mickaël Delahaye, Catherine Oriat","doi":"10.1109/ICSTW.2016.23","DOIUrl":"https://doi.org/10.1109/ICSTW.2016.23","url":null,"abstract":"Some scenario-based testing approaches propose to express a test suite as a regular expression (called scenario or pattern). It specifies a set of valid sequences of operations in a abstract way. When the regular expression is \"unfold\", test sequences are obtained. Usually, the unfolding is done in an exhaustive way, which can result in a combinatorial explosion. In this article, we explore a pairwise coverage criterion to select a subset of test sequences satisfying the pattern, in order to decrease the number of test sequences. The originality of the approach lies in the fact that the pairwise criterion is applied to the instantiated method calls (and not on the parameters). We applied this strategy to generate unit tests for Java classes. The quality of the test suites is evaluated with a mutation analysis and compared to test suites randomly generated.","PeriodicalId":335145,"journal":{"name":"2016 IEEE Ninth International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131866829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"A Tool for Automated Inspection of Software Design Documents and Its Empirical Evaluation in an Aviation Industry Setting" by M. E. Coskun, M. Ceylan, Kadir Yigitozu, V. Garousi. DOI: 10.1109/ICSTW.2016.12
While software inspection is an effective activity for detecting defects early in the software development lifecycle, it is an effort-intensive and error-prone activity. Motivated by a real need in the context of Turkish Aerospace Industries Inc. (TAI), a tool named AutoInspect was developed to (semi-)automate the inspection of software design documents and, as a result, to increase the efficiency and effectiveness of the inspection process. In this paper we present the features of the tool, its development details, and its initial evaluation for inspecting the design documents of three real systems in the company. The results of the initial case study reveal that the tool is indeed able to increase inspection efficiency and effectiveness. In terms of efficiency, inspection engineers who used AutoInspect performed 41-50% more efficiently on the three design documents under study than when the tool was not used (i.e., in manual inspections). In terms of effectiveness, the automated approach found 23-33% more defects in the three design documents under study than manual inspections did. As the tool currently provides only partial automation, our efforts are underway to increase its automation level even further.
{"title":"A Tool for Automated Inspection of Software Design Documents and Its Empirical Evaluation in an Aviation Industry Setting","authors":"M. E. Coskun, M. Ceylan, Kadir Yigitozu, V. Garousi","doi":"10.1109/ICSTW.2016.12","DOIUrl":"https://doi.org/10.1109/ICSTW.2016.12","url":null,"abstract":"While software inspection is an effective activity to detect defects early in the software development lifecycle, it is an effort-intensive and error-prone activity. Motivated by a real need in the context of the Turkish Aerospace Industries Inc. (TAI), a tool named AutoInspect was developed to (semi-) automate the inspection of software design documents and, as a result, to increase the efficiency and effectiveness of the inspection process. We present in this paper the features of the tool, its development details and its initial evaluation for inspecting the design documents of three real systems in the company. The results of the initial case-study reveal that the tool is indeed able to increase the inspections efficiency and effectiveness. In terms of efficiency, inspection engineers who used AutoInspect performed 41-50% more efficiently, for the three design documents under study, compared to the case when the tool was not used (i.e., manual inspections). In terms of effectiveness, compared to manual inspections, the automated approach found between 23-33% more defects in the three design documents under study. As the tool currently only provides partial automation, our efforts are currently underway to increase its automation level even further.","PeriodicalId":335145,"journal":{"name":"2016 IEEE Ninth International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114617110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combinatorial Testing: From Algorithms to Applications","authors":"A. Gargantini, Rachel Tzoref","doi":"10.1109/ICSTW.2016.40","DOIUrl":"https://doi.org/10.1109/ICSTW.2016.40","url":null,"abstract":"IWCT introduction.","PeriodicalId":335145,"journal":{"name":"2016 IEEE Ninth International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129755216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Dynamic Integration Test Selection Based on Test Case Dependencies" by S. Tahvili, Mehrdad Saadatmand, S. Larsson, W. Afzal, M. Bohlin, Daniel Sundmark. DOI: 10.1109/ICSTW.2016.14
Prioritization, selection and minimization of test cases are well-known problems in software testing. Test case prioritization deals with ordering an existing set of test cases, typically with respect to the estimated likelihood of detecting faults. Test case selection addresses the problem of selecting a subset of an existing set of test cases, typically by discarding test cases that do not add any value in improving the quality of the software under test. Most existing approaches to test case prioritization and selection suffer from one or several drawbacks. For example, they rely largely on static analysis of code, which makes them unfit for higher levels of testing such as integration testing. Moreover, they do not exploit the possibility of dynamically changing the prioritization or selection of test cases based on the execution results of prior test cases. Such dynamic analysis allows test cases that no longer need to be executed, and are thus redundant, to be discarded. This paper proposes a generic method for prioritization and selection of test cases in integration testing that addresses these issues. We also present the results of an industrial case study in which initial evidence suggests the potential usefulness of our approach in testing a safety-critical train control management subsystem.
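As an illustration of the dynamic selection idea, the sketch below skips test cases whose prerequisites have already failed, treating their execution as redundant. All names and the dependency encoding are hypothetical assumptions for illustration; the paper's actual method is more general.

```python
def run_with_dependencies(tests, depends_on, execute):
    """tests: test names in a valid execution order.
    depends_on: name -> list of prerequisite test names.
    execute: name -> bool (True = pass); only called when needed."""
    results = {}
    for t in tests:
        failed = [d for d in depends_on.get(t, []) if results.get(d) is False]
        if failed:
            results[t] = None  # skipped: outcome implied by failed prerequisite
            print(f"skip {t} (depends on failed {failed[0]})")
            continue
        results[t] = execute(t)
    return results

# Toy integration suite: 'brake_ctrl' is only meaningful if 'bus_init' passed.
outcomes = {"bus_init": False, "brake_ctrl": True, "door_ctrl": True}
deps = {"brake_ctrl": ["bus_init"]}
print(run_with_dependencies(["bus_init", "brake_ctrl", "door_ctrl"],
                            deps, lambda t: outcomes[t]))
```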
{"title":"Dynamic Integration Test Selection Based on Test Case Dependencies","authors":"S. Tahvili, Mehrdad Saadatmand, S. Larsson, W. Afzal, M. Bohlin, Daniel Sundmark","doi":"10.1109/ICSTW.2016.14","DOIUrl":"https://doi.org/10.1109/ICSTW.2016.14","url":null,"abstract":"Prioritization, selection and minimization of test cases are well-known problems in software testing. Test case prioritization deals with the problem of ordering an existing set of test cases, typically with respect to the estimated likelihood of detecting faults. Test case selection addresses the problem of selecting a subset of an existing set of test cases, typically by discarding test cases that do not add any value in improving the quality of the software under test. Most existing approaches for test case prioritization and selection suffer from one or several drawbacks. For example, they to a large extent utilize static analysis of code for that purpose, making them unfit for higher levels of testing such as integration testing. Moreover, they do not exploit the possibility of dynamically changing the prioritization or selection of test cases based on the execution results of prior test cases. Such dynamic analysis allows for discarding test cases that do not need to be executed and are thus redundant. This paper proposes a generic method for prioritization and selection of test cases in integration testing that addresses the above issues. We also present the results of an industrial case study where initial evidence suggests the potential usefulness of our approach in testing a safety-critical train control management subsystem.","PeriodicalId":335145,"journal":{"name":"2016 IEEE Ninth International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127393134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Code Defenders: A Mutation Testing Game" by J. Rojas, G. Fraser. DOI: 10.1109/ICSTW.2016.43
Mutation testing is endorsed by software testing researchers for its unique capability of providing pragmatic estimates of a test suite's fault detection capability, and for guiding testers in improving their test suites. In practice, however, widespread adoption of mutation testing is hampered because any non-trivial program results in huge numbers of mutants, many of which are either trivial or equivalent, and thus useless. Trivial mutants reduce the motivation of developers to trust and use the technique, while equivalent mutants are frustratingly difficult to handle. These problems are exacerbated by insufficient education on testing, which often means that mutation testing is not well understood in practice. These are examples of the types of problems that gamification aims to overcome by making such tedious activities competitive and entertaining. In this paper, we introduce the first steps towards building Code Defenders, a mutation testing game where players take the role of an attacker, who aims to create the most subtle non-equivalent mutants, or of a defender, who aims to create strong tests to kill these mutants. The benefits of such an approach are manifold: the game can serve an educational role by engaging learners in mutation testing activities in a fun way, and experienced players will produce strong test suites capable of detecting even the most subtle bugs that other players can conceive. Equivalent mutants are handled by making them a special part of the gameplay, where points are at stake in duels between attackers and defenders.
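The duel at the core of the gameplay rests on the standard mutation-testing kill relation: a defender's test kills an attacker's mutant if it passes on the original program but fails on the mutant. A minimal sketch of that check, with a toy program and tests rather than the game's actual engine:

```python
def kills(test, original, mutant):
    """A defender's test kills a mutant if it passes on the original
    program but fails on the mutant."""
    return test(original) and not test(mutant)

original = lambda x: abs(x)         # program under test
mutant = lambda x: x                # attacker's mutant: abs() removed
weak_test = lambda p: p(3) == 3     # same verdict on both: mutant survives
strong_test = lambda p: p(-3) == 3  # exposes the mutation: mutant killed

print(kills(weak_test, original, mutant))    # False
print(kills(strong_test, original, mutant))  # True
```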
{"title":"Code Defenders: A Mutation Testing Game","authors":"J. Rojas, G. Fraser","doi":"10.1109/ICSTW.2016.43","DOIUrl":"https://doi.org/10.1109/ICSTW.2016.43","url":null,"abstract":"Mutation testing is endorsed by software testing researchers for its unique capability of providing pragmatic estimates of a test suite's fault detection capability, and for guiding testers in improving their test suites. In practice, however, wide-spread adoption of mutation testing is hampered because any non-trivial program results in huge numbers of mutants, many of which are either trivial or equivalent, and thus useless. Trivial mutants reduce the motivation of developers in trusting and using the technique, while equivalent mutants are frustratingly difficult to handle. These problems are exacerbated by insufficient education on testing, which often means that mutation testing is not well understood in practice. These are examples of the types of problems that gamification aims to overcome by making such tedious activities competitive and entertaining. In this paper, we introduce the first steps towards building Code Defenders, a mutation testing game where players take the role of an attacker, who aims to create the most subtle non-equivalent mutants, or a defender, who aims to create strong tests to kill these mutants. The benefits of such an approach are manifold: The game can serve an educational role by engaging learners in mutation testing activities in a fun way. Experienced players will produce strong test suites, capable of detecting even the most subtle bugs that other players can conceive. Equivalent mutants are handled by making them a special part of the gameplay, where points are at stake in duels between attackers and defenders.","PeriodicalId":335145,"journal":{"name":"2016 IEEE Ninth International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132143971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Successive Refinement of Models for Model-Based Testing to Increase System Test Effectiveness" by Ceren Sahin Gebizli, Hasan Sözer, A. Ercan. DOI: 10.1109/ICSTW.2016.10
Model-based testing automatically generates test cases from models of the system under test, so the effectiveness of the tests depends on the contents of these models. We therefore introduce a novel three-step model refinement approach. We represent test models as Markov chains. First, we update the state transition probabilities in these models based on a usage profile. Second, we perform an update based on fault likelihood, estimated with static code analysis. Third, we perform an update based on error likelihood, estimated with dynamic analysis. We generate and execute test cases after each refinement. We applied our approach to model-based testing of a Smart TV system, and new faults were revealed after each refinement.
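A minimal sketch of one refinement step: scale a Markov chain's transition probabilities by a per-state weight (a usage frequency, or a fault or error likelihood), renormalize, and generate a test case as a random walk over the refined chain. The weighting scheme and the toy model are illustrative assumptions; the paper's exact update rules are not reproduced here.

```python
import random

def refine(transitions, weight):
    """Scale each transition's probability by a weight on the target
    state (e.g. estimated fault likelihood), then renormalize."""
    refined = {}
    for state, nexts in transitions.items():
        scaled = {s: p * weight.get(s, 1.0) for s, p in nexts.items()}
        total = sum(scaled.values())
        refined[state] = {s: p / total for s, p in scaled.items()}
    return refined

def generate_test(transitions, start, length, rng=random.Random(0)):
    """One test case = one weighted random walk through the model."""
    path, state = [start], start
    for _ in range(length):
        nexts = transitions.get(state)
        if not nexts:
            break
        state = rng.choices(list(nexts), weights=list(nexts.values()))[0]
        path.append(state)
    return path

# Toy Smart-TV-like usage model.
chain = {"menu": {"settings": 0.5, "channel": 0.5},
         "settings": {"menu": 1.0}, "channel": {"menu": 1.0}}
fault_likelihood = {"settings": 3.0}  # static analysis flags settings code
print(generate_test(refine(chain, fault_likelihood), "menu", 6))
```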
{"title":"Successive Refinement of Models for Model-Based Testing to Increase System Test Effectiveness","authors":"Ceren Sahin Gebizli, Hasan Sözer, A. Ercan","doi":"10.1109/ICSTW.2016.10","DOIUrl":"https://doi.org/10.1109/ICSTW.2016.10","url":null,"abstract":"Model-based testing is used for automatically generating test cases based on models of the system under test. The effectiveness of tests depends on the contents of these models. Therefore, we introduce a novel three-step model refinement approach. We represent test models in the form of Markov chains. First, we update state transition probabilities in these models based on usage profile. Second, we perform an update based on fault likelihood that is estimated with static code analysis. Our third update is based on error likelihood that is estimated with dynamic analysis. We generate and execute test cases after each refinement. We applied our approach for model-based testing of a Smart TV system and new faults were revealed after each refinement.","PeriodicalId":335145,"journal":{"name":"2016 IEEE Ninth International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122212019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Pseudo-Exhaustive Testing of Attribute Based Access Control Rules" by D. Kuhn, Vincent C. Hu, David F. Ferraiolo, R. Kacker, Yu Lei. DOI: 10.1109/ICSTW.2016.35
Access control typically requires translating policies or rules given in natural language into a form such as a programming language or decision table, which can be processed by an access control system. Once rules have been described in machine-processable form, testing is necessary to ensure that the rules are implemented correctly. This paper describes an approach based on combinatorial test methods for efficiently testing access control rules, using the structure of attribute based access control (ABAC) to detect a large class of faults without a conventional test oracle.
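The oracle-free idea can be illustrated as follows: enumerate combinations of attribute values and compare a policy's specification against its implementation, so that any mismatch reveals a fault without a per-test expected result. The two-variant policy below is a hypothetical example constructed for this sketch, not one taken from the paper.

```python
from itertools import product

# Hypothetical ABAC policy, written twice: once as the specification,
# once as a (faulty) implementation derived from it.
def spec(role, ward, patient_ward):
    return role == "nurse" and ward == patient_ward

def impl(role, ward, patient_ward):
    return role == "nurse" or ward == patient_ward  # bug: 'or' for 'and'

attributes = {
    "role": ["nurse", "admin"],
    "ward": ["W1", "W2"],
    "patient_ward": ["W1", "W2"],
}

# Exhaustive over this small space; pseudo-exhaustive in general means
# exhaustive only over the attributes each rule actually mentions.
for combo in product(*attributes.values()):
    kwargs = dict(zip(attributes, combo))
    if spec(**kwargs) != impl(**kwargs):
        print("fault revealed at", kwargs)
```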
{"title":"Pseudo-Exhaustive Testing of Attribute Based Access Control Rules","authors":"D. Kuhn, Vincent C. Hu, David F. Ferraiolo, R. Kacker, Yu Lei","doi":"10.1109/ICSTW.2016.35","DOIUrl":"https://doi.org/10.1109/ICSTW.2016.35","url":null,"abstract":"Access control typically requires translating policies or rules given in natural language into a form such as a programming language or decision table, which can be processed by an access control system. Once rules have been described in machine-processable form, testing is necessary to ensure that the rules are implemented correctly. This paper describes an approach based on combinatorial test methods for efficiently testing access control rules, using the structure of attribute based access control (ABAC) to detect a large class of faults without a conventional test oracle.","PeriodicalId":335145,"journal":{"name":"2016 IEEE Ninth International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122394881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Automatic Generation of UTP Models from Requirements in Natural Language" by Satoshi Masuda, T. Matsuodani, K. Tsuda. DOI: 10.1109/ICSTW.2016.27
The Unified Modeling Language (UML) is the standard language for modeling software designs from requirements. The UML Testing Profile (UTP) defines how to model tests derived from requirements analysis, using Test Architecture, Test Behavior, Test Data, and Time Concepts as its test models. Requirements are described in natural language, and engineers with modeling skills then generate test models manually. The generation of test models therefore depends on the engineer's skills, leaving the quality of the test models unstable. In this paper, we present automatic generation of test models from requirements in natural language, focusing on the descriptions of test cases in UTP Test Behavior. We develop three rules that generate test models from requirements using natural language processing techniques, and we evaluate our approach on requirements written in plain English. Our results in three case studies show the promise of the approach.
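As a toy illustration of rule-based generation, the sketch below turns one common requirement phrasing into a UTP-style test-behavior step using a single pattern rule. The rule, the phrasing, and the output fields are all assumptions made for this illustration; the paper's three rules use fuller NLP techniques.

```python
import re

def requirement_to_test_step(sentence):
    """Toy rule: 'The <actor> shall <action> the <object>.' becomes a
    UTP-style test step (actor as component, action as stimulus)."""
    m = re.match(r"The (\w+) shall (\w+) (?:the |a )?(.+?)\.?$", sentence)
    if not m:
        return None
    actor, action, obj = m.groups()
    return {"component": actor, "stimulus": action, "target": obj}

req = "The controller shall validate the user credentials."
print(requirement_to_test_step(req))
# {'component': 'controller', 'stimulus': 'validate',
#  'target': 'user credentials'}
```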
{"title":"Automatic Generation of UTP Models from Requirements in Natural Language","authors":"Satoshi Masuda, T. Matsuodani, K. Tsuda","doi":"10.1109/ICSTW.2016.27","DOIUrl":"https://doi.org/10.1109/ICSTW.2016.27","url":null,"abstract":"Unified Modeling Language (UML) is the language of modeling from requirements for software design. UML Testing Profile (UTP) is the definition of the modeling test from requirements analysis for software testing. UTP has Test Architecture, Test Behavior, Test Data, and Time Concepts as the test models. Requirements are described in natural language, and engineers who have modeling skills then manually generate test models. Hence the generation of test models depends upon the engineer's skills, leaving the quality of test models unstable. In this paper, we present automatic generation test models from requirements in natural language by focusing on descriptions of test cases in UTP test behavior. We develop three rules to generate test models from requirements by using natural language processing techniques and experiment with our approach on requirements in language that is considered natural English. Our results in three case studies show the promise of our approach.","PeriodicalId":335145,"journal":{"name":"2016 IEEE Ninth International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132782020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Nequivack: Assessing Mutation Score Confidence" by Dominik Holling, Sebastian Banescu, Marco Probst, A. Petrovska, A. Pretschner. DOI: 10.1109/ICSTW.2016.29
The mutation score is defined as the number of killed mutants divided by the number of non-equivalent mutants. However, whether a mutant is equivalent to the original program is undecidable in general. Thus, even when a test suite is being improved, the mutation score assessing it may become worse during the development of a system because of equivalent mutants introduced during mutant creation. This is a fundamental problem. Using static analysis and symbolic execution, we show how to establish non-equivalence, or report "don't know", for each mutant. If the number of don't knows is small, this is a good indicator that a computed mutation score actually reflects its definition above. We can therefore have increased confidence that mutation score trends correspond to actual improvements of a test suite's quality and are not overly polluted by equivalent mutants. Using a set of 14 representative unit-size programs, we show that for some, but not all, of these programs this confidence can indeed be established. We also evaluate the reproducibility, efficiency and effectiveness of our Nequivack tool. We find that the analysis is fully reproducible, and a single mutant can be analyzed within 3 seconds on average, which is efficient enough for practical and industrial applications.
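Combined with the definition in the first sentence, the "don't know" verdicts naturally bound the true mutation score: counting don't-knows as non-equivalent gives a lower bound, counting them as equivalent gives an upper bound. A small sketch of that computation follows; the verdict labels are assumptions for illustration, not Nequivack's output format.

```python
def mutation_score_bounds(verdicts):
    """verdicts: per-mutant labels from analysis plus test execution:
    'killed', 'survived' (shown non-equivalent), 'equivalent', 'dont_know'."""
    killed = verdicts.count("killed")
    non_equiv = killed + verdicts.count("survived")
    unknown = verdicts.count("dont_know")
    lower = killed / (non_equiv + unknown)  # don't-knows taken as non-equivalent
    upper = killed / non_equiv              # don't-knows taken as equivalent
    return lower, upper

v = ["killed"] * 6 + ["survived"] * 2 + ["equivalent"] + ["dont_know"]
lo, hi = mutation_score_bounds(v)
print(f"mutation score in [{lo:.2f}, {hi:.2f}]")  # [0.67, 0.75]
```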
{"title":"Nequivack: Assessing Mutation Score Confidence","authors":"Dominik Holling, Sebastian Banescu, Marco Probst, A. Petrovska, A. Pretschner","doi":"10.1109/ICSTW.2016.29","DOIUrl":"https://doi.org/10.1109/ICSTW.2016.29","url":null,"abstract":"The mutation score is defined as the number of killed mutants divided by the number of non-equivalent mutants. However, whether a mutant is equivalent to the original program is undecidable in general. Thus, even when improving a test suite, a mutant score assessing this test suite may become worse during the development of a system, because of equivalent mutants introduced during mutant creation. This is a fundamental problem. Using static analysis and symbolic execution, we show how to establish non-equivalence or \"don't know\" among mutants. If the number of don't knows is small, this is a good indicator that a computed mutation score actually reflects its above definition. We can therefore have an increased confidence that mutation score trends correspond to actual improvements of a test suite's quality, and are not overly polluted by equivalent mutants. Using a set of 14 representative unit size programs, we show that for some, but not all, of these programs, the above confidence can indeed be established. We also evaluate the reproducibility, efficiency and effectiveness of our Nequivack tool. Our findings are that reproducibility is completely given. A single mutant analysis can be performed within 3 seconds on average, which is efficient for practical and industrial applications.","PeriodicalId":335145,"journal":{"name":"2016 IEEE Ninth International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114420092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Pairwise Coverage-Based Testing with Selected Elements in a Query for Database Applications" by Koji Tsumura, H. Washizaki, Y. Fukazawa, Keishi Oshima, Ryota Mibe. DOI: 10.1109/ICSTW.2016.19
Because the behavior of a database application depends on the data it operates on, code coverage alone does not test database applications effectively. Moreover, test coverage criteria for database applications that focus on predicates in Structured Query Language (SQL) queries are not useful if the necessary predicates are omitted. In this paper, we present two new coverage criteria for database applications, Plain Pairwise Coverage (PPC) and Selected Pairwise Coverage (SPC) for SQL queries, together with the corresponding testing methods, Plain Pairwise Coverage Testing (PPCT) and Selected Pairwise Coverage Testing (SPCT). These coverage criteria are based on pairwise testing coverage and employ selected elements of the SQL SELECT query as parameters. We also implement a coverage calculation tool and conduct case studies on two open source software systems. PPCT and SPCT detect many faults that are not detected by existing test methods based on predicates in the query. Furthermore, the case studies suggest that SPCT can detect faults more efficiently than PPCT, and that the cost of SPCT can be further reduced by ignoring records filtered out by the conditions of the query.
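To illustrate the coverage measure, the sketch below treats columns referenced by a SELECT query as parameters and computes the fraction of value pairs that co-occur in at least one row of the test data. The column and domain encoding is a simplifying assumption; the paper's PPC/SPC element-selection rules are not reproduced here.

```python
from itertools import combinations

def pairwise_coverage(rows, columns, domains):
    """Fraction of value pairs (over all column pairs) that appear
    together in at least one row of the test data."""
    target = set()
    for c1, c2 in combinations(columns, 2):
        target |= {(c1, v1, c2, v2) for v1 in domains[c1] for v2 in domains[c2]}
    covered = set()
    for row in rows:
        for c1, c2 in combinations(columns, 2):
            covered.add((c1, row[c1], c2, row[c2]))
    return len(covered & target) / len(target)

# Columns referenced by a query such as:
#   SELECT status, priority FROM orders WHERE region = ...
domains = {"status": ["open", "closed"], "priority": ["hi", "lo"],
           "region": ["EU", "US"]}
rows = [{"status": "open", "priority": "hi", "region": "EU"},
        {"status": "closed", "priority": "lo", "region": "EU"}]
print(f"pairwise coverage: {pairwise_coverage(rows, list(domains), domains):.0%}")
```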
{"title":"Pairwise Coverage-Based Testing with Selected Elements in a Query for Database Applications","authors":"Koji Tsumura, H. Washizaki, Y. Fukazawa, Keishi Oshima, Ryota Mibe","doi":"10.1109/ICSTW.2016.19","DOIUrl":"https://doi.org/10.1109/ICSTW.2016.19","url":null,"abstract":"Because program behaviors of database applications depend on the data used, code coverages do not effectively test database applications. Additionally, test coverages for database applications that focus on predicates in Structured Query Language (SQL) queries are not useful if the necessary predicates are omitted. In this paper, we present two new database applications using Plain Pairwise Coverage (PPC) and Selected Pairwise Coverage (SPC) for SQL queries called Plain Pairwise Coverage Testing (PPCT) and Selected Pairwise Coverage Testing (SPCT), respectively. These coverages are based on pairwise testing coverage, which employs selected elements in the SQL SELECT query as parameters. We also implement a coverage calculation tool and conduct case studies on two open source software systems. PPCT and SPCT can detect many faults, which are not detected by existing test methods based on predicates in the query. Furthermore, the case study suggests that SPCT can detect faults more efficiently than PPCT and the costs of SPCT can be further reduced by ignoring records filtered out by the conditions of the query.","PeriodicalId":335145,"journal":{"name":"2016 IEEE Ninth International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"129 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124246457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}