Verification support for workflow design with UML activity graphs. Rik Eshuis, R. Wieringa. ICSE 2002. DOI: 10.1145/581339.581362

We describe a tool that supports verification of workflow models specified in UML activity graphs. The tool translates an activity graph into an input format for a model checker according to a semantics we published earlier. With the model checker, arbitrary propositional requirements can be checked against the input model. If a requirement fails to hold, the model checker returns an error trace, which the tool automatically translates into an activity graph trace by highlighting a corresponding path in the activity graph. One of the problems we deal with is that model checkers require a finite state space, whereas workflow models in general have an infinite state space. Another is that strong fairness is necessary to obtain realistic results; only model checkers that use a special model checking algorithm for strong fairness are suitable for verifying workflow models. We analyse the structure of the state space and illustrate our approach with some example verifications.
Software engineering for large-scale multi-agent systems - SELMAS'2002. C. Lucena, Alessandro F. Garcia, Andrea Omicini, J. Castro, F. Zambonelli. ICSE 2002. DOI: 10.1145/581339.581428

Objects and agents are abstractions that exhibit points of similarity, but the development of agent-based software poses additional challenges to software engineering, since software agents are inherently more complex entities. In addition, a large-scale multi-agent system needs to satisfy multiple stringent requirements such as reliability, security, interoperability, scalability, reusability, and maintainability. This workshop brought together researchers and practitioners to discuss the current state and future direction of research in software engineering for large-scale multi-agent systems. A particular interest was to understand which aspects of agent technology hinder, and which improve, the production of large-scale distributed systems.
A history-based test prioritization technique for regression testing in resource constrained environments. Jung-Min Kim, A. Porter. ICSE 2002. DOI: 10.1109/ICSE.2002.1007961

Regression testing is an expensive and frequently executed maintenance process used to revalidate modified software. To improve it, regression test selection (RTS) techniques strive to lower costs without overly reducing effectiveness by carefully selecting a subset of the test suite. Under certain conditions, some can even guarantee that the selected test cases perform no worse than the original test suite. This ignores certain software development realities, such as resource and time constraints, that may prevent using RTS techniques as intended (e.g., regression testing must be done overnight, but RTS selection returns two days' worth of tests). In practice, testers work around this by prioritizing the test cases and running only those that fit within existing constraints. Unfortunately, this generally violates key RTS assumptions, voiding RTS technique guarantees and making regression testing performance unpredictable. Despite this, existing prioritization techniques are memoryless, implicitly assuming that local choices can ensure adequate long-run performance. Instead, we propose a new technique that bases prioritization on historical execution data. We conducted an experiment to assess its effects on the long-run performance of resource-constrained regression testing. Our results expose essential tradeoffs that should be considered when using these techniques over a series of software releases.
Agent-based tactics for goal-oriented requirements elaboration. Emmanuel Letier, A. V. Lamsweerde. ICSE 2002. DOI: 10.1145/581352.581353

Goal orientation is an increasingly recognized paradigm for eliciting, structuring, analyzing and documenting system requirements. Goals are statements of intent ranging from high-level, strategic concerns to low-level, technical requirements on the software-to-be and assumptions on its environment. Achieving goals requires the cooperation of agents such as software components, input/output devices and human agents. The assignment of responsibilities for goals to agents is a critical decision in the requirements engineering process, as alternative agent assignments define alternative system proposals. The paper describes a systematic technique to support the process of refining goals, identifying agents, and exploring alternative responsibility assignments. The underlying principles are to refine goals until they are assignable to single agents, and to assign a goal to an agent only if the agent can realize the goal. There are various reasons why a goal may not be realizable by an agent; for example, the goal may refer to variables that are not monitorable or controllable by the agent. The notion of goal realizability is first defined on formal grounds; it provides a basis for identifying a complete taxonomy of realizability problems. From this taxonomy we systematically derive a catalog of tactics for refining goals and identifying agents so as to resolve realizability problems. Each tactic corresponds to the application of a formal refinement pattern that relieves the specifier from verifying the correctness of refinements in temporal logic. Our techniques have been used in two case studies of significant size; excerpts are shown to illustrate the main ideas.
Smartweaver: an agent-based approach for aspect-oriented development. Federico Trilnik, J. A. D. Pace, M. Campo. ICSE 2002. DOI: 10.1145/581339.581467

Summary form only given. Proposes an approach for enhancing aspect-oriented software development by treating aspects as first-class design entities. The proposal brings together lines of research from different fields, namely aspect-oriented frameworks, aspect models extending UML models, knowledge-driven framework documentation, and agent-based planning. The concept of smart-weaving essentially promotes an early incorporation of aspects in the development cycle, so that designers are able to specify their designs by means of aspect models, reuse parts of these models, and also provide different strategies to map generic aspect structures to specific implementations. To support this process, we have built an experimental environment called Smartweaver. The assistance provided by the tool relies on the Smartbooks method, which extends traditional techniques for framework documentation. Smartbooks includes a special planning agent that is able to derive, from a target framework, the sequence of activities that should be executed to implement a given functionality.
Efficient path conditions in dependence graphs. T. Robschink, G. Snelting. ICSE 2002. DOI: 10.1145/581339.581398

Program slicing combined with constraint solving is a powerful tool for software analysis. Path conditions are generated for a slice or chop which, when solved for the input variables, deliver compact "witnesses" for dependences or illegal influences between program points. We show how to make path conditions work for large programs. Aggressive engineering, based on interval analysis and BDDs, is shown to overcome the potential combinatorial explosion. Case studies and empirical data demonstrate the usefulness of path conditions for practical program analysis.
Systems engineering: an essential engineering discipline for the 21st Century. D. Rhodes. ICSE 2002. DOI: 10.1145/581339.581342

Summary form only given. The engineering of systems in the 21st Century demands robust use of the systems approach, given the nature of our times as well as the systems being created. The global marketplace, changing competition dynamics, shorter life cycles, and increasing complexity characterize our environment. We are building systems that are much larger than ever before, and we are building systems that are infinitely smaller than ever before. The maturity of technical, management, and infrastructure processes is a competitive discriminator. Systems engineering, both as a profession and as practiced by multi-discipline practitioners, is key to addressing these challenges. Over the past decade, there have been frequent debates on whether systems engineering is an approach or a formal field of engineering. Given the technical, management, and environmental challenges of this century, I believe that systems engineering must be an essential engineering discipline for the 21st Century. This talk discusses the state of the art and practice of systems engineering, and several initiatives focused on its evolution as a formal engineering discipline. Systems engineering and software engineering must each evolve as unique engineering disciplines to address the engineering problems of the 21st Century. We must ensure that their evolution results in shared knowledge and in highly collaborative approaches and methods drawing on the unique strengths of each discipline.
The impact of test suite granularity on the cost-effectiveness of regression testing. G. Rothermel, Sebastian G. Elbaum, Alexey G. Malishevsky, P. Kallakuri, B. Davia. ICSE 2002. DOI: 10.1145/581356.581358

Regression testing is an expensive testing process used to validate software following modifications. The cost-effectiveness of regression testing techniques varies with characteristics of test suites. One such characteristic, test suite granularity, involves the way in which test inputs are grouped into test cases within a test suite. Various cost-benefit tradeoffs have been attributed to choices of test suite granularity, but almost no research has formally examined these tradeoffs. To address this lack, we conducted several controlled experiments examining the effects of test suite granularity on the costs and benefits of several regression testing methodologies across six releases of two non-trivial software systems. Our results expose essential tradeoffs to consider when designing test suites for use in regression testing evolving systems.
Concern graphs: finding and describing concerns using structural program dependencies. M. Robillard, G. Murphy. ICSE 2002. DOI: 10.1145/581339.581390

Many maintenance tasks address concerns, or features, that are not well modularized in the source code comprising a system. Existing approaches available to help software developers locate and manage scattered concerns use a representation based on lines of source code, complicating the analysis of the concerns. In this paper, we introduce the concern graph representation, which abstracts the implementation details of a concern and makes explicit the relationships between different parts of the concern. The abstraction used in a concern graph has been designed to allow an obvious and inexpensive mapping back to the corresponding source code. To investigate the practical tradeoffs related to this approach, we have built the feature exploration and analysis tool (FEAT), which allows a developer to manipulate a concern representation extracted from a Java system and to analyze the relationships of that concern to the code base. We have used this tool to find and describe concerns related to software change tasks, and we have performed case studies to evaluate the feasibility, usability, and scalability of the approach. Our results indicate that concern graphs can be used to document a concern for change, that developers unfamiliar with concern graphs can use them effectively, and that the underlying technology scales to industrial-sized programs.
Assuring and evolving concurrent programs: annotations and policy. Aaron Greenhouse, W. Scherlis. ICSE 2002. DOI: 10.1145/581339.581395

Assuring and evolving concurrent programs requires understanding the concurrency-related design decisions used in their implementation. In Java-style shared-memory programs, these decisions include which state is shared, how access to it is regulated, the roles of threads, and the policy that distinguishes desired concurrency from race conditions. We use case studies from production Java code to explore the costs and benefits of a new annotation-based approach for expressing design intent. Our intent is both to assist in establishing "thread safety" attributes in code and to support tools that safely restructure code. The annotations we use express "mechanical" properties such as lock-state associations, uniqueness of references, and encapsulation of state into named aggregations. Our analyses revealed race conditions in our case study samples, drawn from open-source projects and library code. The novel technical features of this approach include (1) flexible encapsulation via aggregations of state that can cross object boundaries, (2) the association of locks with state aggregations, (3) policy descriptions for allowable method interleavings, and (4) the incremental process for inserting, validating, and exploiting annotations.