Title: Using Formal Methods and Testability Concepts in the Avionics Systems Validation and Verification (V&V) Process
Authors: O. Laurent
DOI: 10.1109/ICST.2010.38

Airbus has used formal methods for several years to specify avionics systems. Thanks to these methods, formal verification, testability concepts and automatic test case generation have been explored and experimented with on Airbus systems. This paper describes the Airbus validation and verification process throughout the system development cycle, pointing out which kinds of static analysis and dynamic verification and validation activities are conducted. We then focus on innovative methods based on testing strategies for traceability purposes, test design and fault isolation. Some considerations related to automatic test case generation are also discussed.
Title: Automated and Scalable T-wise Test Case Generation Strategies for Software Product Lines
Authors: Gilles Perrouin, S. Sen, Jacques Klein, B. Baudry, Yves Le Traon
DOI: 10.1109/ICST.2010.43

Software Product Lines (SPLs) are difficult to validate due to the combinatorics induced by variability across their features, which leads to a combinatorial explosion in the number of derivable products. Exhaustive testing in such a large space of products is infeasible. One option is to test SPLs by generating test cases that cover all possible interactions among any T features (T-wise). T-wise coverage dramatically reduces the number of test products while ensuring reasonable SPL coverage. However, automatically generating T-wise test cases with SAT solvers raises two issues: encoding the SPL model and the T-wise criteria into a set of formulas acceptable to the solver, and satisfying them, which fails when they are processed ``all-at-once''. We propose a scalable toolset using Alloy to automatically generate test cases satisfying T-wise coverage from SPL models. We define strategies to split T-wise combinations into solvable subsets, and we design and compute metrics to evaluate these strategies on AspectOPTIMA, a concrete transactional SPL.
Title: Industrial Scaled Automated Structural Testing with the Evolutionary Testing Tool
Authors: T. Vos, A. Baars, Felix F. Lindlar, Peter M. Kruse, Andreas Windisch, J. Wegener
DOI: 10.1109/ICST.2010.24

Evolutionary testing has been researched extensively and promising results have been presented. However, it has remained predominantly a research activity, rarely practiced in industry. Although attempts have been made, such as Daimler's Evolutionary Structural Test (EST) prototype, until now no such tool has been suitable for industrial adoption. The European project EvoTest (IST-33472) team worked from 2006 to 2009 to improve this situation. This paper describes the final version of the Evolutionary Testing Framework (ETF) resulting from the EvoTest project. Specifically, we present the EvoTest Structural Testing tool for fully automatic structural testing, which has been demonstrated to be suitable in an industrial setting; the paper concentrates on how to use it and how to interpret its results. It starts by introducing the concepts of evolutionary testing in general and structural testing in particular. Subsequently, the ETF and the EvoTest Structural Testing tool built on top of it are described, concentrating on usage, architecture, and the tool's remaining limitations. The paper concludes by describing the results of using the EvoTest Structural Testing tool in practice on real-world systems in an industrial setting.
Title: An Application of Six Sigma and Simulation in Software Testing Risk Assessment
Authors: Vojo Bubevski
DOI: 10.1109/ICST.2010.23

The conventional approach to risk assessment in software testing is based on analytic models and statistical analysis. Analytic models are static, so they do not account for the inherent variability and uncertainty of the testing process, which is an apparent deficiency. This paper presents an application of Six Sigma and simulation in software testing. DMAIC and simulation are applied to a testing process to assess and mitigate the risk of failing to deliver the product on time while achieving its quality goals. DMAIC is used to improve the process and achieve the required (higher) capability. Simulation is used to predict quality (reliability); because it accounts for uncertainty and variability, it models the testing process more accurately than analytic models do. The presented experiments are applied to a real project using published data, and the results are satisfactorily verified. This enhanced approach is compliant with CMMI® and provides for substantial performance-driven improvements in software testing.
Title: A Dynamic Test Cluster Sampling Strategy by Leveraging Execution Spectra Information
Authors: Shali Yan, Zhenyu Chen, Zhihong Zhao, Chen Zhang, Yuming Zhou
DOI: 10.1109/ICST.2010.47

Cluster filtering is a test selection technique that saves human effort in result inspection by reducing test-suite size while finding as many failures as possible. Cluster sampling strategies play a key role in cluster filtering: a good sampling strategy can greatly improve failure detection capability. In this paper, we propose a new cluster sampling strategy called execution-spectra-based sampling (ESBS). Unlike existing sampling strategies, ESBS selects test cases from each cluster iteratively. In each iteration, ESBS selects the test case that is most likely to fail; a test's suspiciousness is computed from the execution spectra of the previously selected passing and failing test cases from the same cluster. ESBS is evaluated experimentally, and the results show that it is more effective than existing sampling strategies in most cases.
Title: A Counter-Example Testing Approach for Orchestrated Services
Authors: F. D. Angelis, A. Polini, G. D. Angelis
DOI: 10.1109/ICST.2010.27

Service-oriented computing is based on a characteristic combination of features: very late binding, run-time integration of software elements owned and managed by third parties, and run-time changes. These characteristics generally make both static and dynamic verification of service-centric systems difficult. In this domain, the verification and testing research communities have to face new issues and revise existing solutions, possibly profiting from the new opportunities the paradigm makes available. In this paper, focusing on service orchestrations, we propose an approach to automatic test case generation aimed in particular at checking the behaviour of services participating in a given orchestration. The approach exploits the availability of a runnable model (the BPEL specification) and uses model-checking techniques to derive test cases suitable for detecting possible integration problems. The approach has been implemented in a plug-in for the Eclipse platform, already released for public use, so BPEL developers can easily derive, within a single environment, test suites for each participant service they would like to compose.
Title: Cleansing Test Suites from Coincidental Correctness to Enhance Fault-Localization
Authors: Wes Masri, R. A. Assi
DOI: 10.1109/ICST.2010.22

Researchers have argued that for a failure to be observed, three conditions must be met: 1) the defect is executed, 2) the program transitions into an infectious state, and 3) the infection propagates to the output. Coincidental correctness arises when the program produces the correct output while conditions 1) and 2) are met but not 3). In previous work, we showed that coincidental correctness is prevalent and demonstrated that it is a safety-reducing factor for coverage-based fault localization. This work aims at cleansing test suites from coincidental correctness to enhance fault localization. Specifically, given a test suite in which each test has been classified as failing or passing, we present three variations of a technique that identify the subset of passing tests that are likely to be coincidentally correct. We evaluated the effectiveness of our techniques by empirically quantifying the following: 1) how accurately they identified the coincidentally correct tests, 2) how much they improved the effectiveness of coverage-based fault localization, and 3) how much coverage decreased as a result of applying them. Using our better-performing technique and configuration, the safety and precision of fault localization were improved for 88% and 61% of the programs, respectively.
Title: Towards an Automated and Dynamically Adaptable Test System for Testing Healthcare Information Systems
Authors: D. Vega
DOI: 10.1109/ICST.2010.67

Interoperability is one of the most important requirements for next-generation Healthcare Information Systems (HIS), as it permits healthcare institutes to use heterogeneous solutions from different vendors. Introducing standards in the eHealth domain, such as Health Level 7 (HL7) for data representation and Integrating the Healthcare Enterprise (IHE) profiles for describing interactions between actors, is important for supporting the interoperability of healthcare systems. This work addresses the challenges of interoperability testing across different HL7/IHE-based HIS and introduces a testing methodology, together with a test framework realizing it, based on the TTCN-3 language.
Title: An Empirical Evaluation of Regression Testing Based on Fix-Cache Recommendations
Authors: Emelie Engström, P. Runeson, Greger Wikstrand
DOI: 10.1109/ICST.2010.40

Background: The fix-cache approach to regression test selection was proposed to identify the most fault-prone files, and the corresponding test cases, through analysis of fixed defect reports. Aim: The study evaluates the efficiency of this approach compared to the previous regression test selection strategy at a major corporation developing embedded systems. Method: We launched a post-hoc case study applying the fix-cache selection method during six iterations of development of a multi-million-LOC product. Test case execution was monitored through the company's test management and defect reporting systems. Results: From the observations, we conclude that the fix-cache method is more efficient in four of the six iterations; the difference is statistically significant at alpha = 0.05. Conclusions: The new method is significantly more efficient in our case study. The study will be replicated in an environment with better control of test execution.
Title: The Effectiveness of Regression Testing Techniques in Reducing the Occurrence of Residual Defects
Authors: Panduka Nagahawatte, Hyunsook Do
DOI: 10.1109/ICST.2010.57

Regression testing is a necessary maintenance activity that can ensure the high quality of a modified software system, and a great deal of research on regression testing has been performed. Most studies to date, however, have evaluated regression testing techniques in limited contexts, such as short-term assessments, which do not fully account for system evolution or industrial circumstances. One important issue associated with a system-lifetime view that has been overlooked in past years is the effect of residual defects -- defects that persist undetected -- across several releases of a system. Depending on an organization's business goals and the type of system being built, residual defects might affect the level of success of software products. In this paper, we conduct an empirical study to investigate whether regression testing techniques, in particular test case prioritization techniques, are effective in reducing the occurrence and persistence of residual defects across a system's lifetime. Our results show that heuristics can be effective in reducing both the occurrence of residual defects and their age, and that residual defects and their age have a strong impact on the cost-benefits of test case prioritization techniques.