"A software product line for static analyses: the OPAL framework" by Michael Eichberg and Ben Hermann. State Of the Art in Java Program Analysis (SOAP), 2014. DOI: 10.1145/2614628.2614630

Implementations of static analyses are usually tailored toward a single goal for the sake of efficiency, hampering the reusability and adaptability of their components. To address these issues, we propose to implement static analyses as highly configurable software product lines (SPLs). We also discuss an implementation of such an SPL for static analyses, called OPAL, that uses advanced language features of the Scala programming language to obtain an easily adaptable and (type-)safe software product line. OPAL is a general-purpose library for static analysis of Java bytecode that is already in successful use. We present OPAL and show how a design based on software product line engineering benefits the implementation of static analyses with the framework.

{"title":"A software product line for static analyses: the OPAL framework","authors":"Michael Eichberg, Ben Hermann","doi":"10.1145/2614628.2614630","DOIUrl":"https://doi.org/10.1145/2614628.2614630","url":null,"abstract":"Implementations of static analyses are usually tailored toward a single goal to be efficient, hampering reusability and adaptability of the components of an analysis. To solve these issues, we propose to implement static analyses as highly-configurable software product lines (SPLs). Furthermore, we also discuss an implementation of an SPL for static analyses -- called OPAL -- that uses advanced language features offered by the Scala programming language to get an easily adaptable and (type-)safe software product line.\u0000 OPAL is a general purpose library for static analysis of Java Bytecode that is already successfully used. We present OPAL and show how a design based on software produce line engineering benefits the implementation of static analyses with the framework.","PeriodicalId":198433,"journal":{"name":"State Of the Art in Java Program Analysis","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116676410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"How to build the perfect Swiss army knife, and keep it sharp?: Challenges for the soot program-analysis framework in the light of past, current and future demands" by E. Bodden. State Of the Art in Java Program Analysis (SOAP), 2014. DOI: 10.1145/2614628.2614634

Some program-analysis frameworks have been around for a long time, with Soot alone having been around for more than a decade. Over the years, the demands on such frameworks have changed drastically, stressing the flexibility of frameworks such as Soot to their limits. What were those demands back then, and how did they impact the design of Soot? What are the current demands, and what architectural and methodological changes do they call for? What has the Soot community done to address these challenges? What remains to be solved? This talk addresses these questions to open the debate about the future evolution of Soot and other static-analysis frameworks.

{"title":"How to build the perfect Swiss army knife, and keep it sharp?: Challenges for the soot program-analysis framework in the light of past, current and future demands","authors":"E. Bodden","doi":"10.1145/2614628.2614634","DOIUrl":"https://doi.org/10.1145/2614628.2614634","url":null,"abstract":"Some program-analysis frameworks have been around for a long time, with Soot alone having been around for more than one decade. Over the years, demand on such frameworks have changed drastically, stressing the flexibility of frameworks such as Soot to their limit. What were those demands back then and how did they impact the design of Soot? What are the current demands and what architectural and methodological changes do they demand? What has the Soot community done to address these challenges? What remains to be solved? This talk means to address these questions to open the debate about the future evolution of Soot and other static-analysis frameworks.","PeriodicalId":198433,"journal":{"name":"State Of the Art in Java Program Analysis","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134112639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"TS4J: a fluent interface for defining and computing typestate analyses" by E. Bodden. State Of the Art in Java Program Analysis (SOAP), 2014. DOI: 10.1145/2614628.2614629

Typestate analyses determine whether a program's use of a given API obeys the API's usage constraints, in the sense that the right methods are called on the right objects in the right order. Previously, we and others have described approaches that generate typestate analyses from textual finite-state property definitions written in specialized domain-specific languages. While such an approach is feasible, it requires a heavyweight compiler, hindering effective integration into the programmer's development environment and thus often also into her software-development practice.

Here we explain the design of a pure-Java interface facilitating both the definition and evaluation of typestate analyses. The interface is fluent, a term coined by Eric Evans and Martin Fowler. Fluent interfaces allow the user to write method-invocation chains that read almost like natural-language text, in our case allowing for a seemingly declarative style of typestate definitions. In all previously described approaches, however, fluent APIs are used to build configuration objects. In this work, we show for the first time how to design a fluent API in such a way that it also encapsulates actual computation, not just configuration.

We describe an implementation on top of Soot, Heros, and Eclipse, which we are currently evaluating together with pilot customers in an industrial context at Fraunhofer SIT.

{"title":"TS4J: a fluent interface for defining and computing typestate analyses","authors":"E. Bodden","doi":"10.1145/2614628.2614629","DOIUrl":"https://doi.org/10.1145/2614628.2614629","url":null,"abstract":"Typestate analyses determine whether a program's use of a given API obeys this API's usage constraints in the sense that the right methods are called on the right objects in the right order. Previously, we and others have described approaches that generate typestate analyses from textual finite-state property definitions written in specialized domain-specific languages. While such an approach is feasible, it requires a heavyweight compiler, hindering an effective integration into the programmer's development environment and thus often also into her software-development practice.\u0000 Here we explain the design of a pure-Java interface facilitating both the definition and evaluation of typestate analyses. The interface is fluent, a term coined by Eric Evans and Martin Fowler. Fluent interfaces provide the user with the possibility to write method-invocation chains that almost read like natural-language text, in our case allowing for a seemingly declarative style of typestate definitions. In all previously described approaches, however, fluent APIs are used to build configuration objects. In this work, for the first time we show how to design a fluent API in such a way that it also encapsulates actual computation, not just configuration.\u0000 We describe an implementation on top of Soot, Heros and Eclipse, which we are currently evaluating together with pilot customers in an industrial context at Fraunhofer SIT.","PeriodicalId":198433,"journal":{"name":"State Of the Art in Java Program Analysis","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127390905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Large-scale configurable static analysis" by M. Naik. State Of the Art in Java Program Analysis (SOAP), 2014. DOI: 10.1145/2614628.2614635

Program analyses developed over the last three decades have demonstrated the ability to prove non-trivial properties of real-world programs. This ability in turn has applications to emerging software challenges in security, software-defined networking, cyber-physical systems, and beyond. The diversity of such applications necessitates adapting the underlying program analyses to client needs in terms of scalability, applicability, and accuracy. Today's program analyses, however, do not provide useful tuning knobs. This talk presents a general computer-assisted approach to effectively adapt program analyses to diverse clients.

The approach has three key ingredients. First, it poses optimization problems that expose a large set of choices for adapting various aspects of an analysis, such as its cost, the accuracy of its result, and the assumptions it makes about missing information. Second, it solves those optimization problems with new search algorithms that efficiently navigate large search spaces, reason in the presence of noise, interact with users, and learn across programs. Third, it comprises a program-analysis platform that lets users specify and compose analyses, enables the search algorithms to reason about analyses, and allows large-scale computing resources to be used to parallelize analyses.

{"title":"Large-scale configurable static analysis","authors":"M. Naik","doi":"10.1145/2614628.2614635","DOIUrl":"https://doi.org/10.1145/2614628.2614635","url":null,"abstract":"Program analyses developed over the last three decades have demonstrated the ability to prove non-trivial properties of real-world programs. This ability in turn has applications to emerging software challenges in security, software-defined networking, cyber-physical systems, and beyond. The diversity of such applications necessitates adapting the underlying program analyses to client needs, in aspects of scalability, applicability, and accuracy. Today's program analyses, however, do not provide useful tuning knobs. This talk presents a general computer-assisted approach to effectively adapt program analyses to diverse clients.\u0000 The approach has three key ingredients. First, it poses optimization problems that expose a large set of choices to adapt various aspects of an analysis, such as its cost, the accuracy of its result, and the assumptions it makes about missing information. Second, it solves those optimization problems by new search algorithms that efficiently navigate large search spaces, reason in the presence of noise, interact with users, and learn across programs. Third, it comprises a program analysis platform that facilitates users to specify and compose analyses, enables search algorithms to reason about analyses, and allows using large-scale computing resources to parallelize analyses.","PeriodicalId":198433,"journal":{"name":"State Of the Art in Java Program Analysis","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114981336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Android taint flow analysis for app sets" by William Klieber, Lori Flynn, Amar Bhosale, Limin Jia, and Lujo Bauer. State Of the Art in Java Program Analysis (SOAP), 2014. DOI: 10.1145/2614628.2614633

One approach to defending against malicious Android applications is to analyze them to detect potential information leaks. This paper describes a new static taint analysis for Android that combines and augments the FlowDroid and Epicc analyses to precisely track both inter-component and intra-component data flow in a set of Android applications. The analysis takes place in two phases: given a set of applications, we first determine the data flows enabled individually by each application, and the conditions under which these are possible; we then build on these results to enumerate the potentially dangerous data flows enabled by the set of applications as a whole. This paper describes our analysis method, implementation, and experimental results.

{"title":"Android taint flow analysis for app sets","authors":"William Klieber, Lori Flynn, Amar Bhosale, Limin Jia, Lujo Bauer","doi":"10.1145/2614628.2614633","DOIUrl":"https://doi.org/10.1145/2614628.2614633","url":null,"abstract":"One approach to defending against malicious Android applications has been to analyze them to detect potential information leaks. This paper describes a new static taint analysis for Android that combines and augments the FlowDroid and Epicc analyses to precisely track both inter-component and intra-component data flow in a set of Android applications. The analysis takes place in two phases: given a set of applications, we first determine the data flows enabled individually by each application, and the conditions under which these are possible; we then build on these results to enumerate the potentially dangerous data flows enabled by the set of applications as a whole. This paper describes our analysis method, implementation, and experimental results.","PeriodicalId":198433,"journal":{"name":"State Of the Art in Java Program Analysis","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114036659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Dynamic slicing with soot" by Arian Treffer and M. Uflacker. State Of the Art in Java Program Analysis (SOAP), 2014. DOI: 10.1145/2614628.2614631

Slicing is a powerful technique that can help a developer understand how the interaction of different parts of a program causes a specific outcome. Dynamic slicing uses runtime information to compute a precise slice for a given execution. However, dynamic slicing is not possible without a static analysis of the underlying code that reveals dependencies between the instructions that have been recorded.

In this paper, we present a new algorithm for computing dynamic slices. We describe how the optimization framework Soot was used to compute specialized intraprocedural dependency graphs that better reflect the programmer's view of a program than previous approaches. Combining these dependency graphs with recorded execution traces allowed us to create debuggable dynamic slices. For this, a mapping of the debugger's model of the execution to the static code model of Soot was needed, and it could be established with only a few ambiguities.

The result is the ability to produce a dynamic slice that can not only be visualized, but also explored interactively and adjusted to better answer specific questions.

{"title":"Dynamic slicing with soot","authors":"Arian Treffer, M. Uflacker","doi":"10.1145/2614628.2614631","DOIUrl":"https://doi.org/10.1145/2614628.2614631","url":null,"abstract":"Slicing is a powerful technique that can help a developer to understand how the interaction of different parts of a program causes a specific outcome. Dynamic slicing uses runtime information to compute a precise slice for a given execution. However, dynamic slicing is not possible without a static analysis of the underlying code to reveal dependencies between the instructions that have been recorded.\u0000 In this paper, we present a new algorithm for computing dynamic slices. We describe how the optimization framework Soot was used to compute specialized intraprocedural dependency graphs that better reflect the programmer's view of a program than previous approaches. Combining these dependency graphs with recorded execution traces allowed us to create debuggable dynamic slices. For this, a mapping of the debugger's model of the execution to the static code model of Soot was needed, and could be found with only few ambiguities.\u0000 This results in the ability to produce a dynamic slice that can not only be visualized, but explored interactively and adjusted to better answer specific questions.","PeriodicalId":198433,"journal":{"name":"State Of the Art in Java Program Analysis","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132621346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Explicit and symbolic techniques for fast and scalable points-to analysis" by E. Pek and P. Madhusudan. State Of the Art in Java Program Analysis (SOAP), 2014. DOI: 10.1145/2614628.2614632

Points-to analysis that scales to large programs is still an open area of research, and there are several trade-offs between speed and precision. In this paper, we report advances in achieving extremely fast and scalable field-sensitive, inclusion-based points-to analysis. The first algorithm is based on an explicit, sparse bit-vector set representation. The second algorithm refines the first using symbolic set representations based on binary decision diagrams (BDDs). The first algorithm scales extremely well compared with state-of-the-art analyses solving the same problem, while the second reduces the memory footprint tremendously, using on average 4.6x less memory than the first, and even performs points-to set propagation slightly faster. The techniques we introduce are a judicious combination of several heuristics involving sparse bit-vector set representations, prioritized processing of worklists for efficient iteration while computing fixed points, and the use of binary decision diagrams for storing (but not propagating) points-to sets.

The implementation of our approaches scales to large real-life Java applications. We evaluated our implementation on benchmark applications from two recent releases of the DaCapo benchmark suite using a recent version of the Java Standard Library (JRE 1.7_03). Using our techniques, we can propagate points-to information on all the benchmarks in less than a minute, using at most 2GB of memory for the explicit representation and at most 600MB for the symbolic representation. A comparison with the fastest and most closely related explicit and symbolic approaches reveals that our techniques are more scalable than related explicit approaches and, in terms of time, on average 4x faster than the current state of the art.

"OCSEGen: open components and systems environment generator" by O. Tkachuk. State Of the Art in Java Program Analysis (SOAP), 2013. DOI: 10.1145/2487568.2487572

To analyze a large system, one often needs to break it into smaller components. To analyze a single component or unit, one needs to model its context of execution, called the environment, which represents the components with which the unit interacts. Environment generation is a challenging problem, because the environment needs to be general enough to uncover unit errors, yet precise enough to keep the analysis tractable. In this paper, we present a tool for automated environment generation for open components and systems. The tool, called OCSEGen, is implemented on top of the Soot framework. We present the tool's current capabilities and discuss possible future extensions.

{"title":"OCSEGen: open components and systems environment generator","authors":"O. Tkachuk","doi":"10.1145/2487568.2487572","DOIUrl":"https://doi.org/10.1145/2487568.2487572","url":null,"abstract":"To analyze a large system, one often needs to break it into smaller components. To analyze a component or unit under analysis, one needs to model its context of execution, called environment, which represents the components with which the unit interacts. Environment generation is a challenging problem, because the environment needs to be general enough to uncover unit errors, yet precise enough to make the analysis tractable. In this paper, we present a tool for automated environment generation for open components and systems. The tool, called OCSEGen, is implemented on top of the Soot framework. We present the tool's current support and discuss its possible future extensions.","PeriodicalId":198433,"journal":{"name":"State Of the Art in Java Program Analysis","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124974760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Numerical static analysis with Soot" by G. Amato, S. Maio, and F. Scozzari. State Of the Art in Java Program Analysis (SOAP), 2013. DOI: 10.1145/2487568.2487571

Numerical static analysis computes an approximation of all the possible values that a numeric variable may assume in any execution of the program. Many numerical static analyses have been proposed that exploit the theory of abstract interpretation, a general framework for designing provably correct program analyses. The two main problems in analyzing numerical properties are choosing the right level of abstraction (the abstract domain) and developing an efficient iteration strategy that computes the analysis result while guaranteeing termination and soundness.

In this paper, we report on our prototype implementation of a Java bytecode static analyzer for numerical properties. It has been developed exploiting Soot's bytecode abstractions, existing libraries for numerical abstract domains, and the iteration strategies commonly used in the abstract-interpretation community. We show the pros and cons of using Soot, and discuss the main differences between our analyzer and the Soot static analysis framework.

{"title":"Numerical static analysis with Soot","authors":"G. Amato, S. Maio, F. Scozzari","doi":"10.1145/2487568.2487571","DOIUrl":"https://doi.org/10.1145/2487568.2487571","url":null,"abstract":"Numerical static analysis computes an approximation of all the possible values that a numeric variable may assume, in any execution of the program. Many numerical static analyses have been proposed exploiting the theory of abstract interpretation, which is a general framework for designing provably correct program analysis. The two main problems in analyzing numerical properties are: choosing the right level of abstraction (the abstract domain) and developing an efficient iteration strategy which computes the analysis result guaranteeing termination and soundness.\u0000 In this paper, we report on our prototype implementation of a Java bytecode static analyzer for numerical properties. It has been developed exploiting Soot bytecode abstractions, existing libraries for numerical abstract domains, and the iteration strategies commonly used in the abstract interpretation community. We show pros and cons of using Soot, and discuss the main differences between our analyzer and the Soot static analysis framework.","PeriodicalId":198433,"journal":{"name":"State Of the Art in Java Program Analysis","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126512794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Joogie: from Java through Jimple to Boogie" by Stephan Arlt, Philipp Rümmer, and Martin Schäf. State Of the Art in Java Program Analysis (SOAP), 2013. DOI: 10.1145/2487568.2487570

Software verification has recently been used to prove the presence of contradictions in source code and thereby detect potential weaknesses in the code or assist compiler optimization. Compared to the verification of correctness properties, the translation from source code to logic can be kept very simple, yielding formulas that are easy to solve for automated theorem provers. In this paper, we present a translation of Java into logic that is suitable for proving the presence of contradictions in code. We show that the translation, which is based on the Jimple language, can be used to analyze real-world programs, and we discuss some issues that arise from differences between Java code and its bytecode.

{"title":"Joogie: from Java through Jimple to Boogie","authors":"Stephan Arlt, Philipp Rümmer, Martin Schäf","doi":"10.1145/2487568.2487570","DOIUrl":"https://doi.org/10.1145/2487568.2487570","url":null,"abstract":"Recently, software verification is being used to prove the presence of contradictions in source code, and thus detect potential weaknesses in the code or provide assistance to the compiler optimization. Compared to verification of correctness properties, the translation from source code to logic can be very simple and thus easy to solve by automated theorem provers. In this paper, we present a translation of Java into logic that is suitable for proving the presence of contradictions in code. We show that the translation, which is based on the Jimple language, can be used to analyze real-world programs, and discuss some issues that arise from differences between Java code and its bytecode.","PeriodicalId":198433,"journal":{"name":"State Of the Art in Java Program Analysis","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128293585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}