Socio-professional websites such as LinkedIn use various mechanisms, such as networks of colleagues, groups, and discussions, to assist their users in maintaining their professional network and keeping up with relevant discussions. Software professionals post hundreds of thousands of comments each week in these group discussions regarding technological and methodological aspects of their work. Analyzing these comments enables us, the software community at large, to better understand the state of the practice in many aspects of software development. In this paper we describe a case study in which we use one such LinkedIn group discussion to learn about developers' perception of using code examples.
Ohad Barzilay, O. Hazzan, A. Yehudai. "Using social media to study the diversity of example usage among professional developers." ESEC/FSE '11, September 2011. DOI: 10.1145/2025113.2025195.
As developers debug, they often have to seek the origins of wrong values they see in their debugger. This search must be performed backwards in time, since the code causing the wrong value is executed before the wrong value appears. Therefore, locating the origin of wrong values with breakpoint- or log-based debuggers demands persistence and significant experience. Querypoint is a Firefox plugin that enhances the popular Firebug JavaScript debugger with a new, practical feature called lastChange. lastChange automatically locates the last point at which a variable or an object property was changed. Starting from a program suspended at a breakpoint, the lastChange algorithm applies queries to the live program during re-execution, recording the call stack and limited program state each time the property value changes. When the program halts again at the breakpoint, it shows the call stack and program state at the last change point. To evaluate the usability and effectiveness of Querypoint, we studied four experienced JavaScript developers applying the tool to two test cases.
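The last-change idea can be sketched as follows. This is a toy Python analogue, not Querypoint's actual mechanism: Querypoint instruments live JavaScript inside Firebug, while this sketch merely intercepts attribute writes and records a call-stack summary at each one, so the last write site of a wrong value can be reported. All class and function names here are illustrative.

```python
import traceback

class LastChangeTracker:
    """Record the most recent write to each attribute, in the spirit of
    the lastChange query (names are illustrative, not the tool's API)."""

    def __init__(self):
        # attribute name -> (new value, call-stack summary at write time)
        object.__setattr__(self, "_last_change", {})

    def __setattr__(self, name, value):
        # Capture a trimmed stack so we can answer "where was this last set?"
        stack = [f"{f.name}:{f.lineno}" for f in traceback.extract_stack()[:-1]]
        self._last_change[name] = (value, stack)
        object.__setattr__(self, name, value)

    def last_change(self, name):
        return self._last_change.get(name)


def buggy_update(obj):
    obj.total = -1  # the "wrong value" we want to trace back to

obj = LastChangeTracker()
obj.total = 10
buggy_update(obj)
value, stack = obj.last_change("total")
print(value)                                            # -1
print(any("buggy_update" in frame for frame in stack))  # True
```

The stack summary points straight at `buggy_update` as the last writer, which is exactly the question a developer otherwise answers by stepping backwards through breakpoints.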
S. Mirghasemi, J. Barton, C. Petitpierre. "Querypoint: moving backwards on wrong values in the buggy execution." ESEC/FSE '11, September 2011. DOI: 10.1145/2025113.2025184.
Knowledge management plays an important role in many software organizations. Knowledge can be captured and distributed using a variety of media, including traditional help files and manuals, videos, technical articles, wikis, and blogs. In recent years, web-based community portals have emerged as an important mechanism for combining various communication channels. However, there is little advice on how they can be effectively deployed in a software project. In this paper, we present a first study of a community portal used by a closed source software project. Using grounded theory, we develop a model that characterizes documentation artifacts along several dimensions, such as content type, intended audience, feedback options, and review mechanisms. Our findings lead to actionable advice for industry by articulating the benefits and possible shortcomings of the various communication channels in a knowledge-sharing portal. We conclude by suggesting future research on the increasing adoption of community portals in software engineering projects.
Christoph Treude, M. Storey. "Effective communication of software development knowledge through community portals." ESEC/FSE '11, September 2011. DOI: 10.1145/2025113.2025129.
Even in simple programs there are hidden assumptions and dependencies between units that are not immediately visible in each involved unit. These dependencies are generally hard to identify and locate, and can lead to subtle faults that are often missed, even by extensive test suites. We propose to leverage existing test suites to identify faults due to hidden dependencies and to identify inadequate test suite design. Rather than just executing entire test suites within frameworks such as JUnit, we execute each test in isolation, thus removing masking effects that might be present in the test suites. We hypothesize that this can reveal previously hidden dependencies between program units or tests. A preliminary study shows that this technique is capable of identifying subtle faults that have lived in a system for 120 revisions, despite failures being reported and despite attempts to fix the fault.
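The masking effect described above can be demonstrated in a few lines. This is a minimal Python/unittest sketch of the idea, not the authors' JUnit tooling (which would isolate tests at the process or JVM level); the shared dictionary stands in for any hidden cross-test state.

```python
import unittest

SHARED = {}  # module-level state shared across tests (the hidden dependency)

class ExampleTests(unittest.TestCase):
    def test_a_setup_like(self):
        SHARED["ready"] = True
        self.assertTrue(SHARED["ready"])

    def test_b_depends_on_a(self):
        # Passes only if test_a ran first: a hidden dependency.
        self.assertTrue(SHARED.get("ready", False))

def run_isolated(test_name):
    """Run a single test with shared state reset, approximating the
    process-level isolation a real implementation would use."""
    SHARED.clear()
    suite = unittest.TestSuite([ExampleTests(test_name)])
    return unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful()

# Whole suite in declaration order: the dependency is masked.
SHARED.clear()
suite = unittest.TestSuite([ExampleTests("test_a_setup_like"),
                            ExampleTests("test_b_depends_on_a")])
suite_ok = unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful()

isolated_ok = run_isolated("test_b_depends_on_a")
print(suite_ok, isolated_ok)  # True False -> hidden dependency revealed
```

The full-suite run passes because `test_a_setup_like` silently prepares the state `test_b_depends_on_a` needs; running the latter in isolation exposes the dependency.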
Kivanç Muslu, B. Soran, Jochen Wuttke. "Finding bugs by isolating unit tests." ESEC/FSE '11, September 2011. DOI: 10.1145/2025113.2025202.
Foyzur Rahman, Daryl Posnett, Abram Hindle, Earl T. Barr, Premkumar T. Devanbu
Inspection is a highly effective but costly technique for quality control. Most companies do not have the resources to inspect all the code; thus accurate defect prediction can help focus available inspection resources. BugCache is a simple, elegant, award-winning prediction scheme that "caches" files that are likely to contain defects [12]. In this paper, we evaluate the utility of BugCache as a tool for focusing inspection, we examine the assumptions underlying BugCache with the aim of improving it, and finally we compare it with a simple, standard bug-prediction technique. We find that BugCache is, in fact, useful for focusing inspection effort; but surprisingly, we find that its performance, when used for inspections, is not much better than a naive prediction model -- viz., a model that orders files in the system by their count of closed bugs and chooses enough files to capture 20% of the lines in the system.
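The naive baseline used in the comparison is simple enough to state directly: rank files by closed-bug count and take files until they cover 20% of the system's lines. A sketch, with made-up repository numbers:

```python
def naive_inspection_list(files, line_budget_fraction=0.2):
    """Baseline model: rank files by closed-bug count and select files
    until about line_budget_fraction of the system's lines are covered.
    Each entry is (name, lines of code, closed-bug count)."""
    total_lines = sum(loc for _, loc, _ in files)
    budget = line_budget_fraction * total_lines
    ranked = sorted(files, key=lambda f: f[2], reverse=True)  # by closed bugs
    selected, covered = [], 0
    for name, loc, bugs in ranked:
        if covered >= budget:
            break
        selected.append(name)
        covered += loc
    return selected

# Illustrative numbers, not data from the paper.
repo = [("core.c", 5000, 40), ("ui.c", 3000, 35), ("util.c", 1000, 5),
        ("net.c", 800, 12), ("log.c", 200, 1)]
print(naive_inspection_list(repo))  # ['core.c']
```

With a 20% budget over 10,000 lines, the single most bug-prone file already exhausts the 2,000-line allowance, which illustrates why such a crude ranking can be a surprisingly competitive inspection target list.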
Foyzur Rahman, Daryl Posnett, Abram Hindle, Earl T. Barr, Premkumar T. Devanbu. "BugCache for inspections: hit or miss?" ESEC/FSE '11, September 2011. DOI: 10.1145/2025113.2025157.
V. Jagannath, Miloš Gligorić, Dongyun Jin, Q. Luo, Grigore Roşu, D. Marinov
Multithreaded code is notoriously hard to develop and test. A multithreaded test exercises the code under test with two or more threads. Each test execution follows some schedule/interleaving of the multiple threads, and different schedules can give different results. Developers often want to enforce a particular schedule for test execution, and to do so, they use time delays (Thread.sleep in Java). Unfortunately, this approach can produce false positives or negatives, and can result in unnecessarily long testing time. This paper presents IMUnit, a novel approach to specifying and executing schedules for multithreaded tests. We introduce a new language that allows explicit specification of schedules as orderings on events encountered during test execution. We present a tool that automatically instruments the code to control test execution to follow the specified schedule, and a tool that helps developers migrate their legacy, sleep-based tests into event-based tests in IMUnit. The migration tool uses novel techniques for inferring events and schedules from the executions of sleep-based tests. We describe our experience in migrating over 200 tests. The inference techniques have precision and recall of over 75%, and IMUnit reduces testing time compared to sleep-based tests by 3.39x on average.
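The core idea of replacing sleeps with event orderings can be sketched as follows. IMUnit itself is a Java tool with its own schedule language; this is only a Python analogue in which `threading.Event` objects encode the happens-before edges a schedule would specify, and the event names are made up.

```python
import threading

# Schedule to enforce: "itemRead" must happen after "itemAdded".
# Instead of consumer sleeping and hoping producer ran, it waits on an event.
events = {"itemAdded": threading.Event(), "itemRead": threading.Event()}
log = []

def producer(buf):
    buf.append(42)
    log.append("itemAdded")
    events["itemAdded"].set()      # signal: event itemAdded has happened

def consumer(buf, out):
    events["itemAdded"].wait()     # block until the schedule allows us to run
    out.append(buf[0])
    log.append("itemRead")
    events["itemRead"].set()

buf, out = [], []
threads = [threading.Thread(target=consumer, args=(buf, out)),
           threading.Thread(target=producer, args=(buf,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(out, log)  # [42] ['itemAdded', 'itemRead']
```

The schedule is deterministic regardless of which thread the OS runs first, and the test finishes as soon as the events fire rather than after a fixed sleep, which is the source of both the reliability and the speedup the paper reports.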
V. Jagannath, Miloš Gligorić, Dongyun Jin, Q. Luo, Grigore Roşu, D. Marinov. "Improved multithreaded unit testing." ESEC/FSE '11, September 2011. DOI: 10.1145/2025113.2025145.
Unpredictable changes continuously affect software systems and may severely impact their quality of service, potentially jeopardizing the system's ability to meet the desired requirements. Changes may occur in critical components of the system, clients' operational profiles, requirements, or deployment environments. As a consequence, software engineers are increasingly required to design software as a (self-)adaptive system, which automatically detects and reacts to changes. To detect significant changes in the execution environment, effective monitoring procedures are not enough, since their outcome can seldom directly support reasoning and verification about the state of the system and its changes. Adopting software models and model checking techniques at run time may support automatic reasoning about such changes, detect harmful configurations, and potentially enable appropriate (self-)reactions. However, traditional model checking techniques and tools cannot simply be applied as-is at run time, since they may not meet the constraints imposed by on-the-fly analysis in terms of execution time and memory occupation. The key idea for dealing with verification complexity proposed in this research is to split the problem into two steps: 1) precomputing a set of closed formulae corresponding to desired properties and depending on relevant system variables, and then 2) quickly evaluating those formulae every time a variation is detected. This continuous verification of QoS requirements can support continuous adaptation of the software system; the term continuous here implies that reactions should complete before a new variation invalidates their utility. A special, though large, class of systems behaves according to a finite set of parameters, e.g., possible configurations, routing options, third-party component selection, and so on. Many control-theory-based approaches have been studied to manipulate control parameters in order to reach or keep desired goals, but this continuous verification is an extremely hard task, since software is usually very complex to formalize.
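The two-step scheme above can be sketched concretely. The reliability model below (two services invoked in sequence, with one retry of the second) is an illustrative stand-in, not the paper's case study; the point is only the split between one offline symbolic precomputation and many cheap runtime evaluations.

```python
# Step 1 (design time, done once): derive a closed formula for a QoS
# property as a function of the system's variable parameters.
def precompute_reliability_formula():
    # Invoke s1 then s2, retrying s2 once on failure:
    # P(success) = r1 * (r2 + (1 - r2) * r2)
    return lambda r1, r2: r1 * (r2 + (1 - r2) * r2)

REQUIREMENT = 0.90
formula = precompute_reliability_formula()

# Step 2 (run time, on every detected variation): plug in the freshly
# monitored parameter values and check the requirement -- no model
# checker runs in the loop, just arithmetic.
def on_monitoring_update(r1, r2):
    return formula(r1, r2) >= REQUIREMENT

print(on_monitoring_update(0.99, 0.95))  # healthy profile: True
print(on_monitoring_update(0.80, 0.70))  # degraded profile: False
```

Because the runtime step is a formula evaluation rather than a state-space exploration, it can keep up with the pace at which monitored parameters change, which is what makes the verification "continuous" in the sense defined above.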
A. Filieri. "QoS verification and model tuning @ runtime." ESEC/FSE '11, September 2011. DOI: 10.1145/2025113.2025176.
Yuriy Brun, Reid Holmes, Michael D. Ernst, D. Notkin
During collaborative development, individual developers can create conflicts in their copies of the code. Such conflicting edits are frequent in practice, and resolving them can be costly. We present Crystal, a tool that proactively examines developers' code and precisely identifies and reports on textual, compilation, and behavioral conflicts. When conflicts are present, Crystal enables developers to resolve them more quickly, and therefore at a lesser cost. When conflicts are absent, Crystal increases the developers' confidence that it is safe to merge their code. Crystal uses an unobtrusive interface to deliver pertinent information about conflicts. It informs developers about actions that would address the conflicts and about people with whom they should communicate.
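The textual level of Crystal's conflict hierarchy can be approximated with a speculative-merge check: do the two developers' copies edit overlapping regions of their shared base? This Python sketch uses `difflib` line diffs and is only an analogue of the idea; Crystal itself works against real version-control merges and also checks compilation and behavioral conflicts.

```python
import difflib

def changed_lines(base, version):
    """Indices of base lines that this version modified or deleted."""
    changed = set()
    sm = difflib.SequenceMatcher(a=base, b=version)
    for op, i1, i2, _, _ in sm.get_opcodes():
        if op != "equal":
            changed.update(range(i1, i2))
    return changed

def textual_conflict(base, mine, yours):
    """Speculative-merge check: True if the two copies edit overlapping
    regions of the shared base (textual conflicts only)."""
    return bool(changed_lines(base, mine) & changed_lines(base, yours))

base  = ["def f():", "    return 1", "", "def g():", "    return 2"]
mine  = ["def f():", "    return 10", "", "def g():", "    return 2"]
yours = ["def f():", "    return 1", "", "def g():", "    return 20"]
both  = ["def f():", "    return 99", "", "def g():", "    return 2"]

print(textual_conflict(base, mine, yours))  # False: disjoint edits
print(textual_conflict(base, mine, both))   # True: both touched f's body
```

Running such a check proactively, before anyone attempts a merge, is what lets a tool warn early or, equally valuably, confirm that merging is safe.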
Yuriy Brun, Reid Holmes, Michael D. Ernst, D. Notkin. "Crystal: precise and unobtrusive conflict warnings." ESEC/FSE '11, September 2011. DOI: 10.1145/2025113.2025187.
Ivan Beschastnikh, Yuriy Brun, S. Schneider, Michael Sloan, Michael D. Ernst
Computer systems are often difficult to debug and understand. A common way of gaining insight into system behavior is to inspect execution logs and documentation. Unfortunately, manual inspection of logs is an arduous process and documentation is often incomplete and out of sync with the implementation. This paper presents Synoptic, a tool that helps developers by inferring a concise and accurate system model. Unlike most related work, Synoptic does not require developer-written scenarios, specifications, negative execution examples, or other complex user input. Synoptic processes the logs most systems already produce and requires developers only to specify a set of regular expressions for parsing the logs. Synoptic has two unique features. First, the model it produces satisfies three kinds of temporal invariants mined from the logs, improving accuracy over related approaches. Second, Synoptic uses refinement and coarsening to explore the space of models. This improves model efficiency and precision, compared to using just one approach. In this paper, we formally prove that Synoptic always produces a model that satisfies exactly the temporal invariants mined from the log, and we argue that it does so efficiently. We empirically evaluate Synoptic through two user experience studies, one with a developer of a large, real-world system and another with 45 students in a distributed systems course. Developers used Synoptic-generated models to verify known bugs, diagnose new bugs, and increase their confidence in the correctness of their systems. None of the developers in our evaluation had a background in formal methods but were able to easily use Synoptic and detect implementation bugs in as little as a few minutes.
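Synoptic's first stage, parsing raw logs with user-supplied regular expressions and mining temporal invariants from the resulting event traces, can be sketched briefly. Only the "a AlwaysFollowedBy b" invariant is shown; Synoptic also mines AlwaysPrecedes and NeverFollowedBy and then refines a model against all three. The regex and log lines below are made up.

```python
import re

# User-supplied regex mapping each raw log line to an event name.
LINE_RE = re.compile(r"\d+:\d+ (?P<event>\w+)")

def parse_trace(lines):
    return [m.group("event") for m in map(LINE_RE.match, lines) if m]

def always_followed_by(traces):
    """Mine pairs (a, b) such that, in every trace, every occurrence of
    a is eventually followed by an occurrence of b."""
    events = {e for t in traces for e in t}
    invariants = set()
    for a in events:
        for b in events:
            if a != b and all(
                all(b in t[i + 1:] for i, e in enumerate(t) if e == a)
                for t in traces):
                invariants.add((a, b))
    return invariants

logs = [["10:01 open", "10:02 write", "10:03 close"],
        ["11:00 open", "11:05 close"]]
traces = [parse_trace(t) for t in logs]
print(("open", "close") in always_followed_by(traces))  # True
print(("open", "write") in always_followed_by(traces))  # False: trace 2
```

Note how the second trace, with no `write` event, is enough to rule out "open AlwaysFollowedBy write"; mined invariants like these are what constrain the model Synoptic builds by refinement and coarsening.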
Ivan Beschastnikh, Yuriy Brun, S. Schneider, Michael Sloan, Michael D. Ernst. "Leveraging existing instrumentation to automatically infer invariant-constrained models." ESEC/FSE '11, September 2011. DOI: 10.1145/2025113.2025151.
The IWPSE-EVOL 2011 workshop is the merger of the 12th International Workshop on Principles of Software Evolution (IWPSE) and the 7th annual ERCIM Workshop on Software Evolution (EVOL). The objectives of this joint event are to provide a forum for discussing a wide range of topics in software evolution, to foster a better understanding of the nature of software evolution, and to accelerate research activities on the subject.
R. Robbes, Anthony Cleve. "IWPSE-EVOL 2011: 12th international workshop on principles of software evolution and 7th ERCIM workshop on software evolution." ESEC/FSE '11, September 2011. DOI: 10.1145/2025113.2025209.