Continuation equivalence: a correctness criterion for static optimizations of dynamic analyses
E. Bodden · DOI: 10.1145/2002951.2002958 · 2011-07-18

Dynamic analyses reason about a program's concrete heap and control flow and hence can report on actual program behavior with high or even perfect accuracy. But many dynamic analyses require extensive program instrumentation, often slowing down the analyzed program considerably. Researchers have therefore developed specialized static optimizations that can prove instrumentation for a specific analysis unnecessary at many program locations: the analysis can safely omit monitoring these locations, as monitoring them would not change the analysis results. Arguing about the correctness of such optimizations is hard, however, and ad-hoc approaches have led to mistakes in the past. In this paper we present a correctness criterion called Continuation Equivalence, which allows researchers to prove static optimizations of dynamic analyses correct more easily. The criterion demands that an optimization may alter instrumentation at a program site only if the altered instrumentation produces a dynamic analysis configuration equivalent to the configuration of the unaltered program with respect to all possible continuations of the control flow. In previous work, we used a notion of continuation-equivalent states to prove the correctness of static optimizations for finite-state runtime monitors. With this work, we propose to generalize the idea to general dynamic analyses.
Retroactive aspects: programming in the past
R. Salkeld, Wenhao Xu, Brendan Cully, Geoffrey Lefebvre, A. Warfield, G. Kiczales · DOI: 10.1145/2002951.2002960 · 2011-07-18
We present a novel approach to the problem of dynamic program analysis: writing analysis code directly into the program source, but evaluating it against a recording of the original program's execution. This approach allows developers to reason about their program in the familiar context of its actual source, and take full advantage of program semantics, data structures, and library functionality for understanding execution. It also gives them the advantage of hindsight, letting them easily analyze unexpected behavior after it has occurred. Our position is that writing offline analysis as retroactive aspects provides a unifying approach that developers will find natural and powerful.
{"title":"Retroactive aspects: programming in the past","authors":"R. Salkeld, Wenhao Xu, Brendan Cully, Geoffrey Lefebvre, A. Warfield, G. Kiczales","doi":"10.1145/2002951.2002960","DOIUrl":"https://doi.org/10.1145/2002951.2002960","url":null,"abstract":"We present a novel approach to the problem of dynamic program analysis: writing analysis code directly into the program source, but evaluating it against a recording of the original program's execution. This approach allows developers to reason about their program in the familiar context of its actual source, and take full advantage of program semantics, data structures, and library functionality for understanding execution. It also gives them the advantage of hindsight, letting them easily analyze unexpected behavior after it has occurred. Our position is that writing offline analysis as retroactive aspects provides a unifying approach that developers will find natural and powerful.","PeriodicalId":315305,"journal":{"name":"International Workshop on Dynamic Analysis","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131207836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A method facilitating integration testing of embedded software
Dominik Hura, Michal Dimmich · DOI: 10.1145/2002951.2002954 · 2011-07-18

This paper outlines a method of supporting integration testing based on logging the operation of an embedded system's software written in C. Its purpose is to facilitate the process of integration testing and partially automate it. It enables automatic verification of tests described with UML sequence diagrams by means of a log analyzer based on state machines running in parallel. The paper also defines the class of UML diagrams to which the method is applicable, gives a short overview of the method's advantages and disadvantages, and closes with an example of an embedded system for which the method can be used.
Sloppy Python: using dynamic analysis to automatically add error tolerance to ad-hoc data processing scripts
Philip J. Guo · DOI: 10.1145/2002951.2002961 · 2011-07-18

Programmers and data analysts get frustrated when their long-running data processing scripts crash without producing results, due to either bugs in their code or inconsistencies in data sources. To alleviate this frustration, we developed a dynamic analysis technique that guarantees scripts will never crash: it converts all uncaught exceptions into special NA (Not Available) objects and continues executing rather than crashing. Thus, imperfect scripts will run to completion and produce partial results and an error log, which is more informative than simply crashing with no results. We implemented our technique as a "Sloppy" Python interpreter that automatically adds error tolerance to existing scripts without any programmer effort or run-time slowdown.
Dsc+Mock: a test case + mock class generator in support of coding against interfaces
Mainul Islam, Christoph Csallner · DOI: 10.1145/1868321.1868326 · 2010-07-12

Coding against interfaces is a powerful technique in object-oriented programming. It decouples code and enables independent development. However, code decoupled via interfaces poses additional challenges for testing and dynamic execution, as not all pieces of code that are necessary to execute a piece of code may be available. For example, a client class may be coded against several interfaces. For testing, however, no classes may be available that implement the interfaces. This means that, to support testing, we need to generate mock classes along with test cases. Current test case generators do not fully support this kind of independent development and testing. In this paper, we describe a novel technique for generating test cases and mock classes for object-oriented programs that are coded against interfaces. We report on our initial experience with an implementation of our technique for Java. Our prototype implementation achieved higher code coverage than related tools that do not generate mock classes, such as Pex.
Using compression algorithms to support the comprehension of program traces
Neil Walkinshaw, S. Afshan, Phil McMinn · DOI: 10.1145/1868321.1868323 · 2010-07-12

Several software maintenance tasks, such as debugging, phase identification, or simply the high-level exploration of system functionality, rely on the extensive analysis of program traces. These usually require the developer to manually discern any repeated patterns of interest from some visual representation of the trace. This can be both time-consuming and inaccurate; there is always the danger that visually similar trace patterns actually represent distinct program behaviours. This paper presents an automated phase-identification technique. It is founded on the observation that the challenge of identifying repeated patterns in a trace is analogous to the challenge faced by data-compression algorithms, and it applies an established compression algorithm to identify repeated phases in traces. The SEQUITUR algorithm not only compresses data but also organises the repeated patterns into a hierarchy, which is especially useful from a comprehension standpoint because it enables the analysis of a trace at varying levels of abstraction.
Detection of high-level execution patterns in reactive behavior of control programs
Herbert Prähofer, Roland Schatz, Christian Wirth · DOI: 10.1145/1868321.1868324 · 2010-07-12

This paper presents an approach to extract high-level patterns from traces of programmable logic controller (PLC) programs recorded with a deterministic replay debugging tool. Deterministic replay debugging records an application run in real time with minimal overhead so that it can be reproduced afterwards. In a subsequent phase, the application is replayed offline to produce a more detailed trace log with additional information about the run. A software developer can replay the program in a debugger and use debugger features to analyze the run and locate errors. However, due to the vast amount of data and the complex behavior of reactive control programs, a normal debugger usually provides only poor support for comprehending the program behavior. We therefore present a technique to visualize the reactive behavior of a program run and to find recurring high-level execution patterns in long-running applications, and we give an overview of possible application scenarios to support program comprehension, testing, and debugging.
DSDSR: a tool that uses dynamic symbolic execution for data structure repair
Ishtiaque Hussain, Christoph Csallner · DOI: 10.1145/1868321.1868325 · 2010-07-12

We present DSDSR, a generic repair tool for complex data structures. Generic, automatic data structure repair algorithms have applications in many areas, and reducing repair time may therefore have a significant impact on software robustness. Current state-of-the-art tools try to address the problem exhaustively, and their performance depends primarily on the style of the correctness condition. We propose a new approach and implement a prototype that suffers less from style limitations and utilizes recent improvements in automatic theorem proving to reduce the time required to repair a corrupt data structure. We also present experimental results that demonstrate the promise of our approach for generic repair and discuss our prototype implementation.
Metrics for QoS analysis in dynamic, evolving and heterogeneous connected systems
A. Marco, A. Bertolino, F. Giandomenico, P. Masci, A. Sabetta · DOI: 10.1145/1868321.1868327 · 2010-07-12
Dynamic, evolving systems pose new challenges for Quality of Service (QoS) analysis, calling for techniques able to combine traditional offline methods with new ones applied at run-time. Tracking the evolution and updating the assessment consistently with such system evolution require not only advanced analysis methods, but also appropriate metrics that are representative of QoS properties in the addressed context. The ongoing European project Connect addresses systems evolution and aims at bridging technological gaps arising from the heterogeneity of networked systems by synthesising interoperability connectors on the fly. Starting from this ambitious goal, in this paper we present a metrics framework whereby classical dependability/QoS metrics can be refined and combined to characterise Connect applications and to support their monitoring and analysis.
{"title":"Metrics for QoS analysis in dynamic, evolving and heterogeneous connected systems","authors":"A. Marco, A. Bertolino, F. Giandomenico, P. Masci, A. Sabetta","doi":"10.1145/1868321.1868327","DOIUrl":"https://doi.org/10.1145/1868321.1868327","url":null,"abstract":"Dynamic, evolving systems pose new challenges from the point of view of Quality of Service (QoS) analysis, calling for techniques able to combine traditional offline methods with new ones applied at run-time. Tracking the evolution and updating the assessment consistently with such system evolution require not only advanced analysis methods, but also appropriate metrics well representative of QoS properties in the addressed context. The ongoing European project Connect addresses systems evolution, and aims at bridging technological gaps arising from heterogeneity of networked systems, by synthesising on-the-fly interoperability connectors. Moving from such ambitious goal, in this paper we present a metrics framework, whereby classical dependability/QoS metrics can be refined and combined to characterise Connect applications and to support their monitoring and analysis.","PeriodicalId":315305,"journal":{"name":"International Workshop on Dynamic Analysis","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129776081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An approach for modeling dynamic analysis using ontologies
Newres Al Haider, P. Nixon, B. Gaudin · DOI: 10.1145/1868321.1868322 · 2010-07-12

In this paper we present the possibility of using an ontology-based framework to model dynamic analysis techniques. This work builds on similar ideas applied to static analysis [22, 28, 27], in which ontologies are used to represent knowledge about the programs to be analyzed. In the proposed approach, we describe how ontologies can be applied to dynamic analysis by modeling both the information collected from the system and requirements about the type of analysis to be performed. Both of these ontologies can be designed by integrating ontologies previously defined during the software development cycle, allowing for reusability. Finally, these ontologies make it possible to reason about concepts related to dynamic analysis and to offer tools that facilitate automation. The paper presents the main ideas of the approach and illustrates them with an example based on Frequency Spectrum Analysis.