A comprehensive software development process needs some adjustment before it can be used: it must be tailored to the setting of the particular organization and project. Defining an appropriate tailoring model is a critical task. Process users need tailoring that enables them to trim the process to their actual needs, and process engineers need a method and a tool to define a valid model. The SE Book of T-Systems contains a feature model that describes the variable parts of the process model along with the relations and constraints between these parts. The notation and semantics of feature models can be used to visually author a consistent and valid tailoring model. In this paper we present a tool for visual modeling and validation of process model tailoring based on feature models, using the SE Book of T-Systems as an example. The tool is based on a domain-specific language that represents the process model, and it leverages the semantics of feature models to provide an easy-to-use editor for tailoring-enabled process models.
{"title":"Design and validation of feature-based process model tailoring: a sample implementation of PDE","authors":"Daniela Costache, G. Kalus, M. Kuhrmann","doi":"10.1145/2025113.2025192","DOIUrl":"https://doi.org/10.1145/2025113.2025192","url":null,"abstract":"A comprehensive software development process needs some adjustment before it can be used: It needs to be tailored to the particular organization's and project's setting. The definition of an appropriate tailoring model is a critical task. Process users need tailoring that enables them to trim the process to reflect the actual needs. Process engineers need a method and a tool to define a valid model. The SE Book of T-Systems contains a feature model to describe variable parts of the process model and relations and constraints between these parts. The notation and semantics of feature models can be used to visually author a consistent and valid tailoring model. In this paper we present a tool for visual modeling and validation of process model tailoring based on feature models using the SE Book of T-Systems as an example. The tool is based on a domain-specific language that represents the process model. It leverages the semantics of feature models to provide an easy-to-use editor for tailoring-enabled process models.","PeriodicalId":184518,"journal":{"name":"ESEC/FSE '11","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122406357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software system development typically starts from a requirements specification, followed by stepwise refinement of the available requirements as they are transferred into the system architecture. However, the granularity and the amount of requirements that must be elicited for a successful architectural design are not well understood. This paper proposes a process concept that supports system development with an architecture-centric approach to goal-driven requirements elicitation. The process focuses on multiple quality dimensions, such as performance, reliability and scalability, and at the same time aims to reduce costs and risks through early decision evaluation. The main contribution of this paper is a novel process in which not only can requirements drive architectural design, but architectural design can also selectively drive requirements elicitation, with the help of hypotheses connected to the selected architectural solutions. The paper concludes with a discussion of its possible empirical validation.
{"title":"An architecture-centric approach for goal-driven requirements elicitation","authors":"Zoya Durdik","doi":"10.1145/2025113.2025167","DOIUrl":"https://doi.org/10.1145/2025113.2025167","url":null,"abstract":"Software system development typically starts from a requirement specification followed by stepwise refinement of available requirements while transferring them into the system architecture. However, the granularity and the amount of requirements to be elicited for a successful architectural design are not well understood. This paper proposes a process concept to support system development with the help of an architecture-centric approach for goal-driven requirements elicitation. The process focuses on multiple quality dimensions, such as performance, reliability and scalability, and at the same time shall reduce costs and risks through early decision evaluation. The main contribution of this paper is a novel process where not only requirements can drive architectural design, but also architectural design can selectively drive requirement elicitation with the help of hypotheses connected to the selected architectural solutions. The paper concludes with a discussion on its possible empirical validation.","PeriodicalId":184518,"journal":{"name":"ESEC/FSE '11","volume":"296 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121204964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software defect information, including links between bugs and committed changes, plays an important role in software maintenance tasks such as measuring quality and predicting defects. Usually, the links are mined automatically from change logs and bug reports using heuristics such as searching for specific keywords and bug IDs in change logs. However, the accuracy of these heuristics depends on the quality of the change logs. Bird et al. found that many links are missing because change logs lack bug references; they also found that the missing links lead to biased defect information and that this bias affects defect prediction performance. We manually inspected the explicit links, which have explicit bug IDs in change logs, and observed that these links exhibit certain features. Based on our observation, we developed an automatic link recovery algorithm, ReLink, which learns criteria over these features from the explicit links in order to recover missing links. We applied ReLink to three open source projects. ReLink reliably identified links with 89% precision and 78% recall on average, while the traditional heuristics alone achieve 91% precision but only 64% recall. We also evaluated the impact of the recovered links on software maintainability measurement and defect prediction, and found that ReLink yields significantly better accuracy than the traditional heuristics.
{"title":"ReLink: recovering links between bugs and changes","authors":"Rongxin Wu, Hongyu Zhang, Sunghun Kim, S. Cheung","doi":"10.1145/2025113.2025120","DOIUrl":"https://doi.org/10.1145/2025113.2025120","url":null,"abstract":"Software defect information, including links between bugs and committed changes, plays an important role in software maintenance such as measuring quality and predicting defects. Usually, the links are automatically mined from change logs and bug reports using heuristics such as searching for specific keywords and bug IDs in change logs. However, the accuracy of these heuristics depends on the quality of change logs. Bird et al. found that there are many missing links due to the absence of bug references in change logs. They also found that the missing links lead to biased defect information, and it affects defect prediction performance. We manually inspected the explicit links, which have explicit bug IDs in change logs and observed that the links exhibit certain features. Based on our observation, we developed an automatic link recovery algorithm, ReLink, which automatically learns criteria of features from explicit links to recover missing links. We applied ReLink to three open source projects. ReLink reliably identified links with 89% precision and 78% recall on average, while the traditional heuristics alone achieve 91% precision and 64% recall. We also evaluated the impact of recovered links on software maintainability measurement and defect prediction, and found the results of ReLink yields significantly better accuracy than those of traditional heuristics.","PeriodicalId":184518,"journal":{"name":"ESEC/FSE '11","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117305274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a program analysis for verifying quantitative robustness properties of programs, stated generally as: "If the inputs of a program are perturbed by an arbitrary amount epsilon, then its outputs change by at most K · epsilon, where K can depend on the size of the input but not its value." Robustness properties generalize the analytic notion of continuity: for example, while the function e^x is continuous, it is not robust. Our problem is to verify the robustness of a function P that is coded as an imperative program and can use diverse data types and features such as branches and loops. Our approach soundly decomposes the problem into two subproblems: (a) verifying that the smallest possible perturbations to the inputs of P do not change the corresponding outputs significantly, even if control now flows along a different control path; and (b) verifying the robustness of the computation along each control-flow path of P. To solve the former subproblem, we build on an existing method for verifying that a program encodes a continuous function [5]. The latter is solved using a static analysis that bounds the magnitude of the slope of any function computed by a control-flow path of P. The outcome is a sound program analysis for robustness whose proof obligations do not refer to epsilon-changes and can often be discharged fully automatically using off-the-shelf SMT solvers. We identify three application domains for our analysis. First, it can be used to guarantee the predictable execution of embedded control software, whose inputs come from physical sources and can suffer from error and uncertainty; a guarantee of robustness ensures that the system does not react disproportionately to such uncertainty. Second, the analysis is directly applicable to approximate computation, and can provide foundations for a recently proposed program approximation scheme called loop perforation. A third application is in database privacy: proofs of robustness of queries are essential to differential privacy, the most popular notion of privacy for statistical databases.
{"title":"Proving programs robust","authors":"Swarat Chaudhuri, Sumit Gulwani, Roberto Lublinerman, S. Navidpour","doi":"10.1145/2025113.2025131","DOIUrl":"https://doi.org/10.1145/2025113.2025131","url":null,"abstract":"We present a program analysis for verifying quantitative robustness properties of programs, stated generally as: \"If the inputs of a program are perturbed by an arbitrary amount epsilon, then its outputs change at most by (K . epsilon), where K can depend on the size of the input but not its value.\" Robustness properties generalize the analytic notion of continuity---e.g., while the function ex is continuous, it is not robust. Our problem is to verify the robustness of a function P that is coded as an imperative program, and can use diverse data types and features such as branches and loops.\u0000 Our approach to the problem soundly decomposes it into two subproblems: (a) verifying that the smallest possible perturbations to the inputs of P do not change the corresponding outputs significantly, even if control now flows along a different control path; and (b) verifying the robustness of the computation along each control-flow path of P. To solve the former subproblem, we build on an existing method for verifying that a program encodes a continuous function [5]. The latter is solved using a static analysis that bounds the magnitude of the slope of any function computed by a control flow path of P. The outcome is a sound program analysis for robustness that uses proof obligations which do not refer to epsilon-changes and can often be fully automated using off-the-shelf SMT-solvers.\u0000 We identify three application domains for our analysis. First, our analysis can be used to guarantee the predictable execution of embedded control software, whose inputs come from physical sources and can suffer from error and uncertainty. A guarantee of robustness ensures that the system does not react disproportionately to such uncertainty. Second, our analysis is directly applicable to approximate computation, and can be used to provide foundations for a recently-proposed program approximation scheme called {loop perforation}. A third application is in database privacy: proofs of robustness of queries are essential to differential privacy, the most popular notion of privacy for statistical databases.","PeriodicalId":184518,"journal":{"name":"ESEC/FSE '11","volume":"174 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115567738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper addresses the problem of identifying incompatibilities between two programs that operate in a producer/consumer relationship. It describes the techniques incorporated in a tool called PCCA (Producer-Consumer Conformance Analyzer), which attempts either to (i) determine that the consumer is prepared to accept all messages that the producer can emit, or (ii) find a counterexample: a message that the producer can emit but that the consumer considers ill-formed.
{"title":"Checking conformance of a producer and a consumer","authors":"E. Driscoll, Amanda Burton, T. Reps","doi":"10.1145/2025113.2025132","DOIUrl":"https://doi.org/10.1145/2025113.2025132","url":null,"abstract":"This paper addresses the problem of identifying incompatibilities between two programs that operate in a producer/consumer relationship. It describes the techniques that are incorporated in a tool called PCCA (Producer-Consumer Conformance Analyzer), which attempts to (i) determine whether the consumer is prepared to accept all messages that the producer can emit, or (ii) find a counter-example: a message that the producer can emit and the consumer considers ill-formed.","PeriodicalId":184518,"journal":{"name":"ESEC/FSE '11","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115673264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Speculative optimizations are becoming increasingly popular for improving program performance by allowing transformations that benefit frequently traversed program paths. Such optimizations are based on dataflow facts that are mostly true, though not always safe. Probabilistic dataflow analysis frameworks infer such facts about a program while also providing the probability with which each fact is likely to hold. We propose a new probabilistic dataflow analysis framework that uses path profiles and information about the nesting structure of loops to obtain improved probabilities for dataflow facts.
{"title":"Probabilistic dataflow analysis using path profiles on structure graphs","authors":"A. Ramamurthi, Subhajit Roy, Y. Srikant","doi":"10.1145/2025113.2025206","DOIUrl":"https://doi.org/10.1145/2025113.2025206","url":null,"abstract":"Speculative optimizations are increasingly becoming popular for improving program performance by allowing transformations that benefit frequently traversed program paths. Such optimizations are based on dataflow facts which are mostly true, though not always safe. Probabilistic dataflow analysis frameworks infer such facts about a program, while also providing the probability with which a fact is likely to be true. We propose a new Probabilistic Dataflow Analysis Framework which uses path profiles and information about the nesting structure of loops to obtain improved probabilities of dataflow facts.","PeriodicalId":184518,"journal":{"name":"ESEC/FSE '11","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116178665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MapReduce has become a common programming model for processing very large amounts of data, a need that arises across a spectrum of modern computing applications. Several MapReduce implementations and execution systems exist today, and many MapReduce programs are being developed and deployed in practice. However, developing MapReduce programs is not always easy. The programming model makes programs prone to several MapReduce-specific bugs: to produce deterministic results, a MapReduce program needs to satisfy certain high-level correctness conditions. A violating program may yield different output values on the same input data, depending on low-level infrastructure events such as network latency or scheduling decisions. Current MapReduce systems and tools lack support for checking these conditions and reporting violations. This paper presents a novel technique that systematically searches for such bugs in MapReduce applications and generates corresponding test cases. The technique works by encoding the high-level MapReduce correctness conditions as symbolic program constraints and checking them for the program under test. To the best of our knowledge, this is the first approach to address this problem of MapReduce-style programming.
{"title":"New ideas track: testing mapreduce-style programs","authors":"Christoph Csallner, L. Fegaras, Chengkai Li","doi":"10.1145/2025113.2025204","DOIUrl":"https://doi.org/10.1145/2025113.2025204","url":null,"abstract":"MapReduce has become a common programming model for processing very large amounts of data, which is needed in a spectrum of modern computing applications. Today several MapReduce implementations and execution systems exist and many MapReduce programs are being developed and deployed in practice. However, developing MapReduce programs is not always an easy task. The programming model makes programs prone to several MapReduce-specific bugs. That is, to produce deterministic results, a MapReduce program needs to satisfy certain high-level correctness conditions. A violating program may yield different output values on the same input data, based on low-level infrastructure events such as network latency, scheduling decisions, etc. Current MapReduce systems and tools are lacking in support for checking these conditions and reporting violations.\u0000 This paper presents a novel technique that systematically searches for such bugs in MapReduce applications and generates corresponding test cases. The technique works by encoding the high-level MapReduce correctness conditions as symbolic program constraints and checking them for the program under test. To the best of our knowledge, this is the first approach to addressing this problem of MapReduce-style programming.","PeriodicalId":184518,"journal":{"name":"ESEC/FSE '11","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125773115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstraction is a valuable tool that can play an important role in reducing the cost of maintaining software systems. Although abstract documentation can reduce maintenance cost, the cost of generating documentation that offers an implementation-independent overview of the system often outweighs that saving. This has been the motivating force for tools and techniques that reduce the cost of documentation generation, including this work. State machines offer an ideal level of abstraction, and techniques to infer them are already mature. Despite this, the abstraction state machines provide is limited in practice, because the machines become unmanageable at any significant size. As a result, inference tools are only really useful to those who are already familiar with the system. This work focuses on making state machines useful for larger systems. To do so, the complexity of a machine needs to be reduced; this is realised by introducing a hierarchy into the machine, bringing it closer to Harel's Statechart formalism (without concurrency).
{"title":"Search based hierarchy generation for reverse engineered state machines","authors":"Mathew Hall","doi":"10.1145/2025113.2025170","DOIUrl":"https://doi.org/10.1145/2025113.2025170","url":null,"abstract":"Abstraction is a valuable tool that can play an important role in reducing the cost of maintenance of software systems. Despite the cost reduction abstract documentation can provide, the cost of generating documentation that offers an implementation-independent overview of the system often outweighs it. This has been the motivating force for tools and techniques that reduce the cost of documentation generation, including this work.\u0000 State machines offer an ideal level of abstraction and techniques to infer them from machines are already mature. Despite this, the abstraction state machines provide is restricted as they become unmanageable when they are of any significant size. As a result, inference tools are only ideal for those who are already familiar with the system.\u0000 This work focuses on making state machines useful for larger systems. In order to do so the complexity of a machine needs to be reduced; this is realised by introducing a hierarchy to the machine, making them closer to Harel's Statechart formalism (without concurrency).","PeriodicalId":184518,"journal":{"name":"ESEC/FSE '11","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126423099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Model checking provides a powerful means to assert and verify desired system properties. But for the verification process to become feasible, a correct formulation of these properties in a temporal logic is necessary, which is a potential barrier to application in practice. Research on property specification has supplied us with rich pattern catalogs that capture commonly occurring system properties in different temporal logics. Furthermore, these property specification pattern catalogs usually offer both a structured English grammar, to facilitate pattern selection, and associated template solutions that express the properties formally. Yet the actual use of property specification patterns remains cumbersome, due to limited tool support. For this reason, we have developed the Property Specification Pattern Wizard (PSPWizard), a framework that defines an interface for the currently accepted property specification pattern libraries. PSPWizard consists of two main building blocks: a mapping generator that weaves a given pattern library with a target logic, and a GUI front-end to the structured English grammar, tailored to those patterns that are supported in the target logic.
{"title":"PSPWizard: machine-assisted definition of temporal logical properties with specification patterns","authors":"M. Lumpe, Indika Meedeniya, Lars Grunske","doi":"10.1145/2025113.2025193","DOIUrl":"https://doi.org/10.1145/2025113.2025193","url":null,"abstract":"Model checking provides a powerful means to assert and verify desired system properties. But, for the verification process to become feasible, a correct formulation of these properties in a temporal logic is necessary - a potential barrier to application in practice. Research on property specification has supplied us with rich pattern catalogs that capture commonly occurring system properties in different temporal logics. Furthermore, these property specification pattern catalogs usually offer both a structured English grammar to facilitate the pattern selection and an associated template solutions to express the properties formally. Yet, the actual use of property specification patterns remains cumbersome, due to limited tool support. For this reason, we have developed the Property Specification Pattern Wizard (PSPWizard), a framework that defines an interface for the currently accepted property specification pattern libraries. PSPWizard consists of two main building blocks: a mapping generator that weaves a given pattern library with a target logic and a GUI front-end to the structured English grammar tailored to those patterns that are supported in the target logic.","PeriodicalId":184518,"journal":{"name":"ESEC/FSE '11","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131963149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software bugs affect system reliability, and when a bug is exposed in the field, developers need to fix it. Unfortunately, the bug-fixing process can itself introduce errors, leading to buggy patches that further aggravate the damage to end users and erode software vendors' reputation. This paper presents a comprehensive characteristic study of incorrect bug-fixes in large operating system code bases, including Linux, OpenSolaris, FreeBSD, and a mature commercial OS developed and evolved over the last 12 years. We investigate not only the mistake patterns during bug-fixing but also the possible human factors in the development process when these incorrect bug-fixes were introduced. Our major findings include: (1) at least 14.8%--24.4% of the sampled fixes for post-release bugs in these large OSes are incorrect and have affected end users; (2) among several common bug types, concurrency bugs are the most difficult to fix correctly, with 39% of concurrency bug fixes being incorrect; (3) developers and reviewers of incorrect fixes usually do not have enough knowledge about the involved code; for example, 27% of the incorrect fixes were made by developers who had never touched the source code files associated with the fix. Our results provide useful guidelines for designing new tools and for improving the development process. Based on our findings, the commercial software vendor whose OS code we evaluated is building a tool to improve its bug-fixing and code-reviewing process.
{"title":"How do fixes become bugs?","authors":"Zuoning Yin, Ding Yuan, Yuanyuan Zhou, S. Pasupathy, Lakshmi N. Bairavasundaram","doi":"10.1145/2025113.2025121","DOIUrl":"https://doi.org/10.1145/2025113.2025121","url":null,"abstract":"Software bugs affect system reliability. When a bug is exposed in the field, developers need to fix them. Unfortunately, the bug-fixing process can also introduce errors, which leads to buggy patches that further aggravate the damage to end users and erode software vendors' reputation.\u0000 This paper presents a comprehensive characteristic study on incorrect bug-fixes from large operating system code bases including Linux, OpenSolaris, FreeBSD and also a mature commercial OS developed and evolved over the last 12 years, investigating not only themistake patterns during bug-fixing but also the possible human reasons in the development process when these incorrect bug-fixes were introduced. Our major findings include: (1) at least 14.8%--24.4% of sampled fixes for post-release bugs in these large OSes are incorrect and have made impacts to end users. (2) Among several common bug types, concurrency bugs are the most difficult to fix correctly: 39% of concurrency bug fixes are incorrect. (3) Developers and reviewers for incorrect fixes usually do not have enough knowledge about the involved code. For example, 27% of the incorrect fixes are made by developers who have never touched the source code files associated with the fix. Our results provide useful guidelines to design new tools and also to improve the development process. Based on our findings, the commercial software vendor whose OS code we evaluated is building a tool to improve the bug fixing and code reviewing process.","PeriodicalId":184518,"journal":{"name":"ESEC/FSE '11","volume":"26 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131152239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}