In recent years, statistical model checking (SMC) has become increasingly popular, because it scales well to larger stochastic models and is relatively simple to implement. SMC solves the model checking problem by simulating the model for finitely many executions and uses hypothesis testing to infer whether the samples provide statistical evidence for or against a property. Being based on simulation and statistics, SMC avoids the state-space explosion problem well-known from other model checking algorithms. In this paper we show how SMC can be easily integrated into a property-based testing framework such as FsCheck for C#. As a result we obtain a very flexible testing and simulation environment, where a programmer can define models and properties in a familiar programming language. The advantages: no external modelling language is needed and both stochastic models and implementations can be checked. In addition, we have access to the powerful test-data generators of a property-based testing tool. We demonstrate the feasibility of our approach by repeating three experiments from the SMC literature.
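The hypothesis-testing core of SMC is compact enough to sketch. Below is a minimal Python illustration using Wald's sequential probability ratio test; the toy model, thresholds, and error bounds are invented for illustration, and the paper's actual integration relies on FsCheck's generators rather than plain random sampling.

```python
import math
import random

def sprt(sample, p0, p1, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test: decide H0: p >= p0
    against H1: p <= p1 (with p1 < p0) from Bernoulli observations
    drawn one at a time by calling sample()."""
    upper = math.log((1 - beta) / alpha)   # crossing it accepts H1
    lower = math.log(beta / (1 - alpha))   # crossing it accepts H0
    llr, runs = 0.0, 0                     # log-likelihood ratio, sample count
    while lower < llr < upper:
        runs += 1
        if sample():                       # property held on this execution
            llr += math.log(p1 / p0)
        else:                              # property violated on this execution
            llr += math.log((1 - p1) / (1 - p0))
    return llr <= lower, runs              # True: statistical evidence for p >= p0

# Invented toy model: each attempt succeeds with probability 0.5 and the
# property is "success within 5 attempts" (true probability ~0.97).
def property_holds():
    return any(random.random() < 0.5 for _ in range(5))

random.seed(42)
holds, runs = sprt(property_holds, p0=0.8, p1=0.5)
```

Because the test stops as soon as the accumulated evidence crosses a bound, a clear-cut property is typically settled after a few dozen simulations, independent of the size of the model's state space — which is the scalability argument the abstract makes.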
{"title":"Statistical Model Checking Meets Property-Based Testing","authors":"B. Aichernig, Richard Schumi","doi":"10.1109/ICST.2017.42","DOIUrl":"https://doi.org/10.1109/ICST.2017.42","url":null,"abstract":"In recent years, statistical model checking (SMC) has become increasingly popular, because it scales well to larger stochastic models and is relatively simple to implement. SMC solves the model checking problem by simulating the model for finitely many executions and uses hypothesis testing to infer if the samples provide statistical evidence for or against a property. Being based on simulation and statistics, SMC avoids the state-space explosion problem well-known from other model checking algorithms. In this paper we show how SMC can be easily integrated into a property-based testing framework, like FsCheck for C#. As a result we obtain a very flexible testing and simulation environment, where a programmer can define models and properties in a familiar programming language. The advantages: no external modelling language is needed and both stochastic models and implementations can be checked. In addition, we have access to the powerful test-data generators of a property-based testing tool. We demonstrate the feasibility of our approach by repeating three experiments from the SMC literature.","PeriodicalId":112258,"journal":{"name":"2017 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"32 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114121071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fault localization is very important to both researchers and practitioners. Running tests is a useful approach to identify the fault location. Researchers have studied how to automatically identify faults in database applications [1], [2], [3]. However, that research considers the entire SQL statement as one line of code, so a detected fault implicates the whole statement. Little attention has been paid to finding faults in individual components of SQL statements, such as a predicate clause. My research includes two major aspects: 1) finding an effective and efficient method to localize faults in SQL predicates, and 2) automatically fixing the reported faults. Effectiveness is defined in terms of the faults found, and efficiency is defined with regard to the execution time. I have proposed a new approach that is more effective in finding faults in SQL predicates than existing methods. This approach and its evaluation have been accepted to ICST 2017 [4].
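To make clause-level localization concrete, here is a hypothetical sketch (not the author's actual technique; the predicate, rows, and scoring rule are invented): each conjunct of a WHERE predicate is scored by how often negating it alone makes a failing test row pass, so the faulty clause floats to the top of the ranking.

```python
def localize(conjuncts, rows, expected):
    """conjuncts: list of (name, fn) pairs whose conjunction forms the
    WHERE predicate; expected(row) is the test oracle for row inclusion.
    Returns clause names ranked by suspiciousness."""
    scores = {name: 0 for name, _ in conjuncts}
    for row in rows:
        actual = all(fn(row) for _, fn in conjuncts)
        if actual == expected(row):
            continue                          # row handled correctly, no signal
        for name, _ in conjuncts:
            # re-evaluate with this single conjunct negated
            flipped = all(not f(row) if n == name else f(row)
                          for n, f in conjuncts)
            if flipped == expected(row):
                scores[name] += 1             # negating this clause fixes the row
    return sorted(scores, key=scores.get, reverse=True)

# Faulty predicate: "age > 30" was written where "age >= 30" was intended.
conjuncts = [("age > 30", lambda r: r["age"] > 30),
             ("dept = 'R&D'", lambda r: r["dept"] == "R&D")]
rows = [{"age": 30, "dept": "R&D"},           # boundary row exposes the fault
        {"age": 45, "dept": "R&D"},
        {"age": 30, "dept": "HR"}]
expected = lambda r: r["age"] >= 30 and r["dept"] == "R&D"
ranking = localize(conjuncts, rows, expected)  # "age > 30" ranks first
```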
{"title":"Localizing and Fixing Faults in SQL Predicates","authors":"Yun Guo","doi":"10.1109/ICST.2017.72","DOIUrl":"https://doi.org/10.1109/ICST.2017.72","url":null,"abstract":"Fault localization is very important to both researchers and practitioners. Running tests is a useful approach to identify the fault location. Researchers have studied how to automatically identify faults in database applications [1], [2], [3]. However, that research considers the entire SQL statement as one line of code, so a detected fault implicates the whole statement. Little attention has been paid to finding faults in individual components of SQL statements, such as a predicate clause. My research includes two major aspects: 1) finding an effective and efficient method to localize faults in SQL predicates, and 2) automatically fixing the reported faults. Effectiveness is defined in terms of the faults found, and efficiency is defined with regard to the execution time. I have proposed a new approach that is more effective in finding faults in SQL predicates than existing methods. This approach and its evaluation have been accepted to ICST 2017 [4].","PeriodicalId":112258,"journal":{"name":"2017 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124536755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Open source projects and the globalization of the software industry have been a driving force for the reuse of system components across traditional system boundaries. As a result, vulnerabilities and security concerns no longer affect only individual systems but entire global software ecosystems. Known vulnerabilities and security concerns are reported in specialized vulnerability databases, which often remain information silos. In my PhD research, I introduce a modeling approach that eliminates these information silos by linking security knowledge with other software artifacts to improve traceability and trust in software products.
{"title":"Enhancing Trust – Software Vulnerability Analysis Framework","authors":"Sultan S. Al-Qahtani","doi":"10.1109/ICST.2017.76","DOIUrl":"https://doi.org/10.1109/ICST.2017.76","url":null,"abstract":"Open source projects and the globalization of the software industry have been a driving force for the reuse of system components across traditional system boundaries. As a result, vulnerabilities and security concerns no longer affect only individual systems but entire global software ecosystems. Known vulnerabilities and security concerns are reported in specialized vulnerability databases, which often remain information silos. In my PhD research, I introduce a modeling approach that eliminates these information silos by linking security knowledge with other software artifacts to improve traceability and trust in software products.","PeriodicalId":112258,"journal":{"name":"2017 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"76 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132069308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, a lot of research has been done in the field of software testing. However, there exist few empirical studies that analyze whether the results of software testing research are actually practiced in real software projects, why they are (not) practiced, and how this influences the quality of the project. Our proposed research project tries to close this gap by analyzing open-source software projects. We focus our work on a concept that has been well accepted and known in our community for a long time: test levels. We propose a two-step approach to tackle the problem. First, we want to determine whether the concept of a unit is still up-to-date and propose alternatives otherwise. Furthermore, we aim to investigate why developers think that the concept of a unit is (not) current. In the second step we intend to check, based on the unit definition determined in the first step, how many tests on the different levels exist for the investigated projects. Additionally, based on the results, we want to examine why developers are (not) developing tests for a certain test level and how this influences the software quality of the project. Our initial study showed that very few projects have unit tests according to the unit definitions of the IEEE and ISTQB. Furthermore, it revealed that developers intend to write unit tests, but fail to do so.
{"title":"Reflecting the Adoption of Software Testing Research in Open-Source Projects","authors":"Fabian Trautsch","doi":"10.1109/ICST.2017.77","DOIUrl":"https://doi.org/10.1109/ICST.2017.77","url":null,"abstract":"In recent years, a lot of research has been done in the field of software testing. However, there exist few empirical studies that analyze whether the results of software testing research are actually practiced in real software projects, why they are (not) practiced, and how this influences the quality of the project. Our proposed research project tries to close this gap by analyzing open-source software projects. We focus our work on a concept that has been well accepted and known in our community for a long time: test levels. We propose a two-step approach to tackle the problem. First, we want to determine whether the concept of a unit is still up-to-date and propose alternatives otherwise. Furthermore, we aim to investigate why developers think that the concept of a unit is (not) current. In the second step we intend to check, based on the unit definition determined in the first step, how many tests on the different levels exist for the investigated projects. Additionally, based on the results, we want to examine why developers are (not) developing tests for a certain test level and how this influences the software quality of the project. Our initial study showed that very few projects have unit tests according to the unit definitions of the IEEE and ISTQB. Furthermore, it revealed that developers intend to write unit tests, but fail to do so.","PeriodicalId":112258,"journal":{"name":"2017 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130236980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In most cases, web applications communicate with web services (SOAP or RESTful). The former act as a front end to the latter, which contain the business logic. A hacker might not have direct access to those web services (e.g., when they are not on public networks), but can still provide malicious inputs to the web application, thus potentially compromising the related services. Typical examples are XML injection attacks that target SOAP communications. In this paper, we present a novel search-based approach to generate test data for a web application in an attempt to deliver malicious XML messages to web services. Our goal is thus to detect XML injection vulnerabilities in web applications. The proposed approach is evaluated on two studies, including an industrial web application with millions of users. Results show that we are able to effectively generate test data (e.g., input values in an HTML form) that detect such vulnerabilities.
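The search-based core can be illustrated with a toy (1+1) hill climber; the front end, fitness function, and mutation operators below are all invented for illustration, and the paper's actual approach is considerably more elaborate. The search mutates a form-field value, guided by how closely the XML message the application emits resembles a malicious target message.

```python
import difflib
import random

def build_message(username):
    """Toy front end: embeds the form field into XML without escaping."""
    return f"<user><name>{username}</name></user>"

# Attack goal: smuggle an extra <admin> element into the SOAP payload.
# The input "bob</name><admin>1</admin><name>" would produce exactly this.
GOAL = "<user><name>bob</name><admin>1</admin><name></name></user>"

def fitness(candidate):
    """Similarity between the emitted message and the attack goal (1.0 = hit)."""
    return difflib.SequenceMatcher(None, build_message(candidate), GOAL).ratio()

ALPHABET = "abdimn01</>"

def mutate(s):
    i = random.randrange(len(s) + 1)
    c = random.choice(ALPHABET)
    r = random.random()
    if r < 0.4 or not s:
        return s[:i] + c + s[i:]              # insert a character
    j = random.randrange(len(s))
    if r < 0.7:
        return s[:j] + c + s[j + 1:]          # replace a character
    return s[:j] + s[j + 1:]                  # delete a character

def search(seed, budget=3000):
    """(1+1) hill climber over input strings, guided by the fitness."""
    best, best_fit = seed, fitness(seed)
    for _ in range(budget):
        cand = mutate(best)
        cand_fit = fitness(cand)
        if cand_fit >= best_fit:
            best, best_fit = cand, cand_fit
    return best, best_fit

random.seed(7)
payload, score = search("bob")
```

Whether the climber reaches fitness 1.0 within the budget depends on the seed; the point is that the fitness landscape rewards inputs that push the emitted message toward the malicious shape, which is what makes the search effective at exposing missing sanitization.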
{"title":"A Search-Based Testing Approach for XML Injection Vulnerabilities in Web Applications","authors":"S. Jan, Duy Cu Nguyen, Andrea Arcuri, L. Briand","doi":"10.1109/ICST.2017.39","DOIUrl":"https://doi.org/10.1109/ICST.2017.39","url":null,"abstract":"In most cases, web applications communicate with web services (SOAP and RESTful). The former act as a front-end to the latter, which contain the business logic. A hacker might not have direct access to those web services (e.g., they are not on public networks), but can still provide malicious inputs to the web application, thus potentially compromising related services. Typical examples are XML injection attacks that target SOAP communications. In this paper, we present a novel, search-based approach used to generate test data for a web application in an attempt to deliver malicious XML messages to web services. Our goal is thus to detect XML injection vulnerabilities in web applications. The proposed approach is evaluated on two studies, including an industrial web application with millions of users. Results show that we are able to effectively generate test data (e.g., input values in an HTML form) that detect such vulnerabilities.","PeriodicalId":112258,"journal":{"name":"2017 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121827205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
P. Darke, Bharti Chimdyalwar, Avriti Chauhan, R. Venkatesh
Loop Abstraction followed by Bounded Model Checking, or LABMC for short, is a promising recent technique for proving the safety of large programs. In an experimental setup proposed last year [14], LABMC was combined with slicing and Iterative Context Extension (ICE) with the aim of achieving scalability over industrial code. In this paper, we address two major limitations of that setup, namely i) the inability of ICE to prune redundant code in a verification context, and ii) the unavailability of a tool that implements the setup. We propose an improvement over ICE called Iterative Function Level Slicing (IFLS) and incorporate it into our tool, ELABMC, to offer an efficient implementation of [14]. We substantiate our claim with two sets of experiments over industrial applications as well as academic benchmarks. In the first, quantifying the benefits of IFLS over traditional ICE, our results show that IFLS leads to a 34.9% increase in efficiency, a 17.7% improvement in precision, and scaling in 14.2% more cases. With the second experiment, we show that ELABMC outperforms state-of-the-art verification techniques in the task of identifying static analysis warnings as false alarms.
{"title":"Efficient Safety Proofs for Industry-Scale Code Using Abstractions and Bounded Model Checking","authors":"P. Darke, Bharti Chimdyalwar, Avriti Chauhan, R. Venkatesh","doi":"10.1109/ICST.2017.53","DOIUrl":"https://doi.org/10.1109/ICST.2017.53","url":null,"abstract":"Loop Abstraction followed by Bounded Model Checking, or LABMC in short, is a promising recent technique for proving safety of large programs. In an experimental setup proposed last year [14], LABMC was combined with slicing and Iterative Context Extension (ICE) with the aim of achieving scalability over industrial code. In this paper, we address two major limitations of that set-up, namely i) the inability of ICE to prune redundant code in a verification context, and ii) the unavailability of a tool that implements the set-up. We propose an improvement over ICE called Iterative Function Level Slicing (IFLS) and incorporate it in our tool called ELABMC, to offer an efficient implementation of [14]. We substantiate our claim with two sets of experiments over industrial applications as well as academic benchmarks. Quantifying the benefits of IFLS over traditional ICE in one, our results report that IFLS leads to 34.9% increase in efficiency, 17.7% improvement in precision, and scales in 14.2% more cases. With the second experiment, we show that ELABMC outperforms state-of-the-art verification techniques in the task of identifying static analysis warnings as false alarms.","PeriodicalId":112258,"journal":{"name":"2017 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132790222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing for a global market requires the internationalization of software products and their localization to different countries, regions, and cultures. Localization testing verifies that the localized software variants work, look and feel as expected. Localization testing is a perfect candidate for automation. It has a high potential to reduce the manual effort in testing of multiple language variants and to speed up release cycles. However, localization testing is rarely investigated in scientific work. There are only a few reports on automation approaches for localization testing providing very little empirical results or practical advice. In this paper we describe the approach we applied for automated testing of the different localized variants of a large industrial software system, we report on the various bugs found, and we discuss our experiences and lessons learned.
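Two checks of this kind lend themselves particularly well to automation. The sketch below uses an invented resource-bundle layout (the system under test, its bundles, and the 40-character width budget are not from the paper): it verifies that every language bundle is complete against a reference locale, and it flags translations long enough to overflow their widgets.

```python
BUNDLES = {
    "en": {"save": "Save", "cancel": "Cancel"},
    "de": {"save": "Speichern", "cancel": "Abbrechen"},
    "fr": {"save": "Enregistrer"},            # "cancel" is missing -> a bug
}

def check_bundles(bundles, reference="en", max_len=40):
    """Return (locale, key, problem) triples for missing or oversized strings."""
    issues = []
    ref_keys = set(bundles[reference])
    for locale, strings in bundles.items():
        for key in sorted(ref_keys - set(strings)):
            issues.append((locale, key, "missing translation"))
        for key, text in sorted(strings.items()):
            if len(text) > max_len:
                issues.append((locale, key, "overflows widget"))
    return issues

issues = check_bundles(BUNDLES)               # flags the missing French "cancel"
```

Checks like these run identically for every language variant, which is where the effort reduction across sixteen languages comes from.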
{"title":"How to Test in Sixteen Languages? Automation Support for Localization Testing","authors":"R. Ramler, R. Hoschek","doi":"10.1109/ICST.2017.63","DOIUrl":"https://doi.org/10.1109/ICST.2017.63","url":null,"abstract":"Developing for a global market requires the internationalization of software products and their localization to different countries, regions, and cultures. Localization testing verifies that the localized software variants work, look and feel as expected. Localization testing is a perfect candidate for automation. It has a high potential to reduce the manual effort in testing of multiple language variants and to speed-up release cycles. However, localization testing is rarely investigated in scientific work. There are only a few reports on automation approaches for localization testing providing very little empirical results or practical advice. In this paper we describe the approach we applied for automated testing of the different localized variants of a large industrial software system, we report on the various bugs found, and we discuss our experiences and lessons learned.","PeriodicalId":112258,"journal":{"name":"2017 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129841383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Debugging multithreaded software is challenging because the basic assumption that underlies sequential software debugging, i.e. the program behavior is deterministic under fixed inputs, is no longer valid due to the nondeterminism brought by thread scheduling. To restore this basic assumption, we propose a proactive debugging method so that programmers can debug multithreaded programs as if they were sequential. Our approach is based on the synergistic integration of a set of new symbolic analysis and dynamic analysis techniques. In particular, symbolic analysis is used to investigate the program behavior under multiple thread interleavings and then drive the dynamic execution to new branches. Dynamic analysis is used to execute these new branches and in turn guide the symbolic analysis further. The net effect of applying this feedback loop is a systematic and complete coverage of the program behavior under a fixed test input.
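The scheduling nondeterminism that breaks the sequential-debugging assumption is easy to reproduce. The toy below is invented for illustration (it enumerates schedules explicitly rather than using the paper's symbolic analysis): it runs every interleaving of two non-atomic counter increments and shows that a fixed input still yields two different final values.

```python
import itertools

def run(schedule):
    """Each of two 'threads' performs a non-atomic increment as a separate
    load and store; schedule lists which thread takes each of the 4 steps."""
    shared = 0
    reg = {0: 0, 1: 0}               # per-thread register
    pc = {0: 0, 1: 0}                # 0 = load pending, 1 = store pending
    for tid in schedule:
        if pc[tid] == 0:
            reg[tid] = shared        # load the shared counter
        else:
            shared = reg[tid] + 1    # store the incremented value
        pc[tid] += 1
    return shared

# every distinct interleaving of thread 0's and thread 1's two steps
schedules = set(itertools.permutations([0, 0, 1, 1]))
outcomes = {run(s) for s in schedules}   # the lost-update race: 1 or 2
```

Exhaustive enumeration works here only because this toy has six schedules; symbolic techniques like the one proposed exist precisely because real programs have far too many interleavings to enumerate.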
{"title":"Debugging Multithreaded Programs Using Symbolic Analysis","authors":"Xiaodong Zhang","doi":"10.1109/ICST.2017.73","DOIUrl":"https://doi.org/10.1109/ICST.2017.73","url":null,"abstract":"Debugging multithreaded software is challenging because the basic assumption that underlies sequential software debugging, i.e. the program behavior is deterministic under fixed inputs, is no longer valid due to the nondeterminism brought by thread scheduling. To restore this basic assumption, we propose a proactive debugging method so that programmers can debug multithreaded programs as if they were sequential. Our approach is based on the synergistic integration of a set of new symbolic analysis and dynamic analysis techniques. In particular, symbolic analysis is used to investigate the program behavior under multiple thread interleavings and then drive the dynamic execution to new branches. Dynamic analysis is used to execute these new branches and in turn guide the symbolic analysis further. The net effect of applying this feedback loop is a systematic and complete coverage of the program behavior under a fixed test input.","PeriodicalId":112258,"journal":{"name":"2017 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114912598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Junjie Tang, Xingmin Cui, Ziming Zhao, Shanqing Guo, Xin-Shun Xu, Chengyu Hu, Tao Ban, Bing Mao
In the Android system design, any app can start another app's public components by sending an asynchronous message called an Intent, which facilitates code reuse. In addition, Android also allows an app to have private components that should only be visible to the app itself. However, malicious apps can bypass this system protection and directly invoke private components in vulnerable apps through a newly discovered class of vulnerability called next-intent vulnerability. In this paper, we design an intent flow analysis strategy that accurately tracks intents in smali code to statically detect next-intent vulnerabilities efficiently and effectively on a large scale. We further propose an automated approach to dynamically verify the discovered vulnerabilities by generating exploit apps. We then implement a tool named NIVAnalyzer and evaluate it on 20,000 apps downloaded from Google Play. As a result, we successfully confirmed 190 vulnerable apps, some of which even have millions of downloads. We also confirmed that an open-source project and a third-party SDK, both still used by other apps, have next-intent vulnerabilities.
{"title":"NIVAnalyzer: A Tool for Automatically Detecting and Verifying Next-Intent Vulnerabilities in Android Apps","authors":"Junjie Tang, Xingmin Cui, Ziming Zhao, Shanqing Guo, Xin-Shun Xu, Chengyu Hu, Tao Ban, Bing Mao","doi":"10.1109/ICST.2017.56","DOIUrl":"https://doi.org/10.1109/ICST.2017.56","url":null,"abstract":"In the Android system design, any app can start another app's public components by sending an asynchronous message called an Intent, which facilitates code reuse. In addition, Android also allows an app to have private components that should only be visible to the app itself. However, malicious apps can bypass this system protection and directly invoke private components in vulnerable apps through a newly discovered class of vulnerability called next-intent vulnerability. In this paper, we design an intent flow analysis strategy that accurately tracks intents in smali code to statically detect next-intent vulnerabilities efficiently and effectively on a large scale. We further propose an automated approach to dynamically verify the discovered vulnerabilities by generating exploit apps. We then implement a tool named NIVAnalyzer and evaluate it on 20,000 apps downloaded from Google Play. As a result, we successfully confirmed 190 vulnerable apps, some of which even have millions of downloads. We also confirmed that an open-source project and a third-party SDK, both still used by other apps, have next-intent vulnerabilities.","PeriodicalId":112258,"journal":{"name":"2017 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133772353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In contract-based development of model transformations, continuous deductive verification may help the transformation developer with early bug detection. However, because of the execution performance of current verification systems, re-verifying from scratch after a change has been made would introduce impractical delays. We address this problem by proposing an incremental verification approach for the ATL model-transformation language. Our approach is based on decomposing each OCL contract into sub-goals and caching the sub-goal verification results. At each change we exploit the semantics of relational model transformation to determine whether a cached verification result may be impacted. Consequently, fewer postconditions/sub-goals need to be re-verified. When a change forces the re-verification of a postcondition, we use the cached verification results of sub-goals to construct a simplified version of the postcondition to verify. We prove the soundness of our approach and show its effectiveness by mutation analysis. Our case study shows approximately 50% reuse of verification results for postconditions and 70% reuse for sub-goals. The user perceives about a 56% reduction of verification time for postconditions, and 51% for sub-goals.
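The caching scheme can be illustrated with a small hypothetical sketch (the keying, naming, and prover stub are invented, not the paper's implementation): a sub-goal's verification result is cached under a digest of the goal and the transformation rules it depends on, so an edit re-triggers the prover only for sub-goals whose dependencies actually changed.

```python
import hashlib

class VerificationCache:
    def __init__(self, prove):
        self.prove = prove            # expensive prover call: (goal, rules) -> bool
        self.cache = {}
        self.prover_calls = 0

    @staticmethod
    def key(goal, rules):
        """Digest of the sub-goal text and every rule it depends on."""
        blob = "\x1f".join([goal, *rules]).encode()
        return hashlib.sha256(blob).hexdigest()

    def check(self, goal, rules):
        k = self.key(goal, rules)
        if k not in self.cache:       # first run, or a dependency changed
            self.prover_calls += 1
            self.cache[k] = self.prove(goal, rules)
        return self.cache[k]

vc = VerificationCache(lambda goal, rules: True)      # stub prover
vc.check("post1/subgoal1", ["RuleA v1", "RuleB v1"])
vc.check("post1/subgoal1", ["RuleA v1", "RuleB v1"])  # cache hit: no prover call
vc.check("post1/subgoal1", ["RuleA v2", "RuleB v1"])  # RuleA edited: re-verify
```

Keying by dependency content rather than by time stamps is what makes the reuse sound: an unchanged goal over unchanged rules can never map to a stale result.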
{"title":"Incremental Deductive Verification for Relational Model Transformations","authors":"Zheng Cheng, M. Tisi","doi":"10.1109/ICST.2017.41","DOIUrl":"https://doi.org/10.1109/ICST.2017.41","url":null,"abstract":"In contract-based development of model transformations, continuous deductive verification may help the transformation developer with early bug detection. However, because of the execution performance of current verification systems, re-verifying from scratch after a change has been made would introduce impractical delays. We address this problem by proposing an incremental verification approach for the ATL model-transformation language. Our approach is based on decomposing each OCL contract into sub-goals and caching the sub-goal verification results. At each change we exploit the semantics of relational model transformation to determine whether a cached verification result may be impacted. Consequently, fewer postconditions/sub-goals need to be re-verified. When a change forces the re-verification of a postcondition, we use the cached verification results of sub-goals to construct a simplified version of the postcondition to verify. We prove the soundness of our approach and show its effectiveness by mutation analysis. Our case study shows approximately 50% reuse of verification results for postconditions and 70% reuse for sub-goals. The user perceives about a 56% reduction of verification time for postconditions, and 51% for sub-goals.","PeriodicalId":112258,"journal":{"name":"2017 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133882119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}