Pub Date: 2013-11-21 | DOI: 10.1109/WCRE.2013.6671326
MemBrush: A practical tool to detect custom memory allocators in C binaries
X. Chen, Asia Slowinska, H. Bos
2013 20th Working Conference on Reverse Engineering (WCRE)
Many reversing techniques for data structures rely on the knowledge of memory allocation routines. Typically, they interpose on the system's malloc and free functions, and track each chunk of memory thus allocated as a data structure. However, many performance-critical applications implement their own custom memory allocators. As a result, current binary analysis techniques for tracking data structures fail on such binaries. We present MemBrush, a new tool to detect memory allocation and deallocation functions in stripped binaries with high accuracy. We evaluated the technique on a large number of real world applications that use custom memory allocators.
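The core intuition, that allocator-like routines return fresh, non-overlapping memory regions, can be illustrated with a small sketch. This is a hypothetical toy, not MemBrush's actual algorithm; the trace format and the `allocator_candidates` helper are assumptions for illustration only.

```python
def allocator_candidates(trace):
    """Flag functions whose observed return values behave like heap
    allocations: pairwise-distinct, non-overlapping address ranges.

    trace: list of (func_name, returned_address, requested_size) tuples,
    as might be recorded by dynamic instrumentation of call sites.
    """
    by_func = {}
    for func, addr, size in trace:
        by_func.setdefault(func, []).append((addr, size))

    candidates = set()
    for func, allocs in by_func.items():
        allocs.sort()
        # Every call returned a distinct address...
        distinct = len({a for a, _ in allocs}) == len(allocs)
        # ...and no returned region overlaps the next one in address order.
        overlapping = any(a + s > b
                          for (a, s), (b, _) in zip(allocs, allocs[1:]))
        if distinct and not overlapping and len(allocs) >= 2:
            candidates.add(func)
    return candidates
```

A real detector would need many more signals (the pointer later reaching a deallocation-like routine, size arguments flowing into the callee, wrappers around mmap/brk), but the sketch shows why return-value behaviour alone already separates allocators from ordinary functions.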
Pub Date: 2013-11-21 | DOI: 10.1109/WCRE.2013.6671312
Using relationships for matching textual domain models with existing code
Raghavan Komondoor, Indrajit Bhattacharya, D. D'Souza, Sachin Kale
2013 20th Working Conference on Reverse Engineering (WCRE)
We address the task of mapping a given textual domain model (e.g., an industry-standard reference model) for a given domain (e.g., ERP) to the source code of an independently developed application in the same domain. This has applications in improving the understandability of an existing application, migrating it to a more flexible architecture, or integrating it with other related applications. We use the vector-space model to abstractly represent domain model elements as well as source-code artifacts. The key novelty in our approach is to leverage the relationships between source-code artifacts in a principled way to improve the mapping process. We describe experiments wherein we apply our approach to the task of matching two real, open-source applications to corresponding industry-standard domain models. We demonstrate the overall usefulness of our approach, as well as the role of our propagation techniques in improving the precision and recall of the mapping task.
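The vector-space step can be sketched as plain term-frequency vectors compared by cosine similarity. This is a generic illustration of the vector-space model, not the paper's implementation; the artifact texts and the `best_match` helper are invented for the example, and the relationship-propagation step is omitted.

```python
import math
import re
from collections import Counter

def tf_vector(text):
    """Term-frequency vector over lowercase word tokens."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(u, v):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(u[t] * v[t] for t in u)
    norm = (math.sqrt(sum(c * c for c in u.values()))
            * math.sqrt(sum(c * c for c in v.values())))
    return dot / norm if norm else 0.0

def best_match(domain_element_text, code_artifacts):
    """Map one domain-model element to the most lexically similar artifact.

    code_artifacts: dict of artifact name -> descriptive text (identifiers,
    comments, etc.) extracted from the source code.
    """
    q = tf_vector(domain_element_text)
    return max(code_artifacts,
               key=lambda name: cosine(q, tf_vector(code_artifacts[name])))
```

The paper's contribution is precisely what this sketch lacks: propagating match scores along relationships between artifacts so that a weakly matching artifact can be pulled toward the right domain element by its strongly matching neighbours.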
Pub Date: 2013-11-21 | DOI: 10.1109/WCRE.2013.6671308
Mining system specific rules from change patterns
André C. Hora, N. Anquetil, Stéphane Ducasse, M. T. Valente
2013 20th Working Conference on Reverse Engineering (WCRE)
A significant percentage of the warnings reported by tools that detect coding-standard violations are false positives. Several works have therefore been dedicated to providing better rules by mining them from source code history, analyzing bug fixes or changes between system releases. However, software evolves over time, and during development not only are bugs fixed, but features are added and code is refactored. In such cases, changes must be applied consistently across the source code to avoid maintenance problems. In this paper, we propose to extract system-specific rules by mining systematic changes over the source code history, i.e., not just from bug fixes or system releases, to ensure that changes are applied consistently. We focus on structural changes made to support API modification or evolution, with the goal of providing better rules to developers. Rules are mined from predefined rule patterns that ensure their quality. To assess the precision of such specific rules in detecting real violations, we compare them with the generic rules provided by coding-standard checkers on four real-world systems covering two programming languages. The results show that specific rules are more precise than generic ones in identifying real violations in source code, and thus can complement them.
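One way to read "mining systematic changes" is as frequent-pattern counting over commits: a replacement that recurs across enough independent changes becomes a candidate rule. The sketch below is a simplified stand-in for the paper's approach; the commit representation and the support threshold are assumptions made for the example.

```python
from collections import Counter

def mine_change_rules(commits, min_support=2):
    """Mine candidate 'replace x with y' rules from commit history.

    commits: list of sets, each set holding (removed_call, added_call)
    pairs observed together in one commit. A rule is kept only when the
    same replacement recurs in at least min_support distinct commits,
    which filters out one-off edits.
    """
    support = Counter()
    for changes in commits:
        for pair in set(changes):  # count each pair once per commit
            support[pair] += 1
    return {pair for pair, n in support.items() if n >= min_support}
```

A violation detector would then flag any remaining call site that still uses the removed API of a mined rule, which is how such system-specific rules complement generic coding-standard checks.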
Pub Date: 2013-11-21 | DOI: 10.1109/WCRE.2013.6671280
Static binary rewriting without supplemental information: Overcoming the tradeoff between coverage and correctness
M. Smithson, Khaled Elwazeer, K. Anand, A. Kotha, R. Barua
2013 20th Working Conference on Reverse Engineering (WCRE)
Binary rewriting is the process of transforming executables while maintaining the original binary's functionality and improving it in one or more metrics, such as energy use, memory use, security, or reliability. Although several technologies for rewriting binaries exist, static rewriting allows arbitrarily complex transformations to be performed. Other technologies, such as dynamic or minimally invasive rewriting, are limited in their transformation ability. We have designed the first static binary rewriter that guarantees 100% code coverage without the need for relocation or symbolic information. A key challenge in static rewriting is content classification (i.e., deciding which portions of the code segment are code versus data). Our contributions are (i) handling portions of the code segment with uncertain classification by using speculative disassembly in case they are code, while retaining the original bytes in case they are data; (ii) drastically limiting the number of possible speculative sequences using a new technique called binary characterization; and (iii) avoiding the need for relocation or symbolic information by using call translation at the usage points of code pointers (i.e., indirect control transfers), rather than changing addresses at address creation points. Extensive evaluation using stripped binaries for the entire SPEC 2006 benchmark suite (over 1.9 million lines of code) demonstrates the robustness of the scheme.
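The idea behind limiting speculative disassembly can be illustrated by scanning a binary for embedded constants that happen to fall inside the code segment: only such constants can ever be used as indirect-transfer targets. This is a simplified illustration in the spirit of binary characterization, not the paper's exact technique; the 32-bit little-endian assumption and the function name are mine.

```python
import struct

def candidate_code_pointers(data, code_start, code_end):
    """Collect every 32-bit little-endian constant, at any byte offset,
    whose value lands inside [code_start, code_end).

    The resulting set bounds the addresses that an indirect jump or call
    could plausibly target, so speculative disassembly only needs to
    consider these as potential code entry points.
    """
    targets = set()
    for i in range(len(data) - 3):
        val = struct.unpack_from("<I", data, i)[0]
        if code_start <= val < code_end:
            targets.add(val)
    return targets
```

Scanning at every byte offset (not just aligned ones) is deliberate: x86 code and data need not be aligned, so a conservative bound must consider all offsets.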
Pub Date: 2013-11-21 | DOI: 10.1109/WCRE.2013.6671330
Detecting dependencies in Enterprise JavaBeans with SQuAVisiT
A. Sutii, S. Roubtsov, Alexander Serebrenik
2013 20th Working Conference on Reverse Engineering (WCRE)
We present recent extensions to SQuAVisiT, the Software Quality Assessment and Visualization Toolset. While SQuAVisiT was designed with traditional software and traditional caller-callee dependencies in mind, the recent popularity of Enterprise JavaBeans (EJB) has required extensions that enable the analysis of additional forms of dependencies: EJB dependency injections, object-relational (persistence) mappings, and Web service mappings. In this paper, we discuss the implementation of these extensions in SQuAVisiT and its application to an open-source software system.
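Why injected dependencies evade caller-callee analysis is easy to see: the container wires the field at runtime, so no call edge appears in the source. A crude way to recover such edges is to scan for annotated field declarations, as in this sketch (a hypothetical illustration, not SQuAVisiT's implementation; the regex handles only the simplest field form).

```python
import re

def ejb_injections(java_source):
    """Return (field_type, field_name) pairs for fields declared under an
    @EJB annotation, i.e. dependencies invisible to caller-callee analysis.

    Handles only the simple one-line form
        @EJB private SomeService name;
    a real extractor would parse the AST instead of using a regex.
    """
    pattern = re.compile(
        r"@EJB\s+(?:private|protected|public)?\s*(\w+)\s+(\w+)\s*;")
    return pattern.findall(java_source)
```

Each recovered pair contributes a dependency edge from the declaring class to the injected bean type, which is exactly the kind of edge a visualization toolset needs to add alongside ordinary call edges.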
Pub Date: 2013-11-21 | DOI: 10.1109/WCRE.2013.6671290
Towards understanding how developers spend their effort during maintenance activities
Z. Soh, Foutse Khomh, Yann-Gaël Guéhéneuc, G. Antoniol
2013 20th Working Conference on Reverse Engineering (WCRE)
For many years, researchers and practitioners have strived to assess and improve the productivity of software development teams. One key step toward achieving this goal is understanding the factors that affect the efficiency of developers performing development and maintenance activities. In this paper, we aim to understand how developers spend their effort during maintenance activities and study the factors affecting that effort. Knowing how developers spend their effort and which factors affect it will allow software organisations to take the necessary steps to improve the efficiency of their developers, for example by providing them with adequate program comprehension tools. For this preliminary study, we mine 2,408 developers' interaction histories and 3,395 patches from four open-source software projects (ECF, Mylyn, PDE, Eclipse Platform). We observe that, usually, the complexity of the implementation required for a task does not reflect the effort spent by developers on the task. Most of the effort appears to be spent exploring the program. On average, 62% of the files explored during the implementation of a task are not significantly relevant to its final implementation. Developers who explore a large number of files that are not significantly relevant to the solution of their task take longer to perform the task. We expect the results of this study to pave the way for better program comprehension tools that guide developers during their exploration of software systems.
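The headline 62% figure is, in essence, a ratio of explored-but-unused files to all explored files, computed per task. A minimal sketch of that measurement (the data shapes and function name are assumptions; the paper additionally weights relevance rather than treating it as binary):

```python
def exploration_waste(explored_files, patched_files):
    """Share of files visited during a task that never made it into the
    final patch: a crude per-task proxy for exploration overhead."""
    explored = set(explored_files)
    if not explored:
        return 0.0
    return len(explored - set(patched_files)) / len(explored)
```

Averaging this ratio over all mined interaction histories yields a project-level figure comparable to the 62% reported in the study.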
Pub Date: 2013-11-21 | DOI: 10.1109/WCRE.2013.6671314
On the effect of program exploration on maintenance tasks
Z. Soh, Foutse Khomh, Yann-Gaël Guéhéneuc, G. Antoniol, Bram Adams
2013 20th Working Conference on Reverse Engineering (WCRE)
When developers perform a maintenance task, they follow an exploration strategy (ES) that is characterised by how they navigate through the program entities. Studying ES can help to assess how developers understand a program and perform a change task. Various factors could influence how developers explore a program, and the way in which they explore it may affect their performance on a given task. In this paper, we investigate the ES followed by developers during maintenance tasks and assess the impact of these ES on the duration and effort spent by developers on the tasks. We want to know if developers frequently revisit one (or a set) of program entities (referenced exploration), or if they visit program entities with almost the same frequency (unreferenced exploration) when performing a maintenance task. We mine 1,705 Mylyn interaction histories (IH) from four open-source projects (ECF, Mylyn, PDE, and Eclipse Platform) and perform a user study to verify if both referenced exploration (RE) and unreferenced exploration (UE) were followed by some developers. Using the Gini inequality index on the number of revisits of program entities, we automatically classify interaction histories as RE and UE and perform an empirical study to measure the effect of program exploration on the task duration and effort. We report that, although a UE may require more exploration effort than a RE, a UE is on average 12.30% less time consuming than a RE.
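The Gini-based classification is concrete enough to sketch: a high Gini index over per-entity revisit counts means a few entities dominate the visits (referenced exploration), while a low index means visits are spread evenly (unreferenced). The threshold below is an assumption for illustration; the paper derives its own cutoff from the data.

```python
def gini(counts):
    """Gini inequality index over per-entity revisit counts.
    0.0 means perfectly uniform visiting; values near 1.0 mean a few
    entities receive nearly all the visits."""
    xs = sorted(counts)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

def classify_exploration(counts, threshold=0.5):
    """Label an interaction history as referenced (RE) or unreferenced
    (UE) exploration based on the inequality of its revisit counts."""
    return "RE" if gini(counts) >= threshold else "UE"
```

Applied to each of the 1,705 interaction histories, such a classifier splits the corpus into the RE and UE groups whose task durations the study then compares.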
Pub Date: 2013-11-21 | DOI: 10.1109/WCRE.2013.6671292
A model-driven graph-matching approach for design pattern detection
M. Bernardi, Marta Cimitile, G. D. Lucca
2013 20th Working Conference on Reverse Engineering (WCRE)
In this paper, an approach to automatically detect Design Patterns (DPs) in object-oriented systems is presented. It links the system's source code components to the roles they play in each pattern. DPs are modelled by high-level structural properties (e.g., inheritance, dependency, invocation, delegation, type nesting, and membership relationships) that are checked against the system's structure and components. The proposed metamodel also allows DP variants to be defined, overriding the structural properties of existing DP models, to improve detection quality. The approach was validated on an open benchmark containing several open-source systems of increasing size. Moreover, for two other systems, the results have been compared with those of a similar approach from the literature. The results obtained on the analyzed systems, the identified variants, and the efficiency and effectiveness of the approach are thoroughly presented and discussed.
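Detection via structural properties amounts to graph matching: bind each pattern role to a class so that every required relation holds between the bound classes. The brute-force sketch below illustrates the idea only; the relation names, the Composite example, and the exhaustive search are assumptions, and a real detector would prune the search space rather than enumerate all bindings.

```python
from itertools import permutations

def find_pattern(classes, relations, pattern_relations, roles):
    """Bind pattern roles to system classes so that every required
    structural relation holds.

    relations:          dict of relation name -> set of (source, target)
                        class pairs extracted from the system.
    pattern_relations:  list of (relation, source_role, target_role)
                        constraints defining the pattern.
    Returns every role -> class binding that satisfies all constraints.
    """
    matches = []
    for binding in permutations(classes, len(roles)):
        env = dict(zip(roles, binding))
        if all((env[src], env[dst]) in relations.get(rel, set())
               for rel, src, dst in pattern_relations):
            matches.append(env)
    return matches
```

Variants then become alternative `pattern_relations` lists that override or relax individual constraints, mirroring how the paper's metamodel overrides the structural properties of a base DP model.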
Pub Date: 2013-11-21 | DOI: 10.1109/WCRE.2013.6671317
Evaluating architecture stability of software projects
Lerina Aversano, Marco Molfetta, M. Tortorella
2013 20th Working Conference on Reverse Engineering (WCRE)
Reuse of software components depends on different aspects of high-level software artifacts. In particular, software architecture and its stability should be taken into account before selecting software components for reuse. In this direction, this paper presents an empirical study aimed at assessing software architecture stability and its evolution along a software project's history. The study entails gathering and analyzing relevant information from several open-source projects. The analysis of the architecture stability of the analyzed projects' core components, and the related trends, are presented as results.
Pub Date: 2013-11-21 | DOI: 10.1109/WCRE.2013.6671304
Lehman's laws in agile and non-agile projects
K. Duran, G. Burns, P. Snell
2013 20th Working Conference on Reverse Engineering (WCRE)
Software team leaders and managers must decide what type of process model to use for their projects. Recent work suggests the use of agile processes, since they promote shorter development cycles, better collaboration, and process flexibility. Due to these many benefits, many software organizations have shifted to more agile process methodologies. However, there is limited research on how agile processes affect the evolution of a software system over time. In this paper, we perform an empirical study to better understand the effects of using agile processes. We compare two open-source projects, one of which uses a tailored agile process (i.e., Extreme Programming) and another that has no formal process methodology. In particular, we compare the two projects within the context of Lehman's laws of continuing growth, continuing change, increasing complexity, and conservation of familiarity. Our findings show that all four laws held true for the project that uses an agile process, and that there are noticeable differences in the evolution of the two projects, many of which can be traced back to specific practices used by the agile team.
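Checking a law like continuing growth against release history reduces to a simple trend test over a size metric. The sketch below is a deliberately crude stand-in for the study's methodology; the lines-of-code metric, the function name, and the "mostly non-decreasing" criterion are assumptions for illustration.

```python
def continuing_growth_holds(loc_per_release):
    """Crude check of Lehman's law of continuing growth: total size must
    increase overall, and most release-to-release deltas must be
    non-negative (tolerating occasional cleanup releases)."""
    deltas = [b - a for a, b in zip(loc_per_release, loc_per_release[1:])]
    if not deltas:
        return False
    nonneg = sum(1 for d in deltas if d >= 0)
    return sum(deltas) > 0 and nonneg >= len(deltas) / 2
```

The other laws examined in the paper (continuing change, increasing complexity, conservation of familiarity) would each need their own metric series, e.g. churn per release or a complexity measure, tested with the same trend-over-releases shape.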