2008 12th European Conference on Software Maintenance and Reengineering (CSMR 2008)

Interpretation of Source Code Clusters in Terms of the ISO/IEC-9126 Maintainability Characteristics
Y. Kanellopoulos, Christos Tjortjis, I. Heitlager, Joost Visser
Pub Date: 2008-04-01 | DOI: 10.1109/CSMR.2008.4493301
Clustering is a data mining technique that groups data points on the basis of their similarity with respect to multiple dimensions of measurement. It has also been applied in the software engineering domain, in particular to support software quality assessment based on source code metrics. Unfortunately, since clusters emerge from metrics at the source code level, it is difficult to interpret their significance for the quality of the entire system. In this paper, we propose a method for interpreting source code clusters using the ISO/IEC 9126 software product quality model. Several methods have been proposed to quantitatively assess software systems in terms of the quality characteristics defined by ISO/IEC 9126; these methods map low-level source code metrics to high-level quality characteristics through various aggregation and weighting procedures. We applied such a method to obtain quality profiles at various abstraction levels for each generated source code cluster. Subsequently, the resulting quality profiles are visualized such that conclusions about the quality problems of the various clusters can be drawn at a glance.

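The metric-to-characteristic mapping the abstract describes can be illustrated with a minimal sketch. The sub-characteristic names follow ISO/IEC 9126 maintainability, but the metric names and weights below are hypothetical, not the paper's actual model:

```python
# Hypothetical weighted mapping from cluster-level metric ratings (1..5)
# to ISO/IEC 9126 maintainability sub-characteristics. The weights are
# illustrative only, not taken from the paper.
METRIC_TO_SUBCHAR = {
    "analysability": {"volume": 0.4, "duplication": 0.3, "unit_size": 0.3},
    "changeability": {"complexity": 0.5, "duplication": 0.5},
    "testability":   {"complexity": 0.5, "unit_size": 0.5},
}

def quality_profile(metric_ratings):
    """Aggregate per-metric ratings into a per-sub-characteristic profile."""
    profile = {}
    for subchar, weights in METRIC_TO_SUBCHAR.items():
        profile[subchar] = sum(w * metric_ratings[m] for m, w in weights.items())
    return profile

# One source code cluster, rated per metric (illustrative data).
cluster = {"volume": 4, "duplication": 2, "unit_size": 3, "complexity": 2}
profile = quality_profile(cluster)
```

A profile like this, computed per cluster, is what the visualization step would then compare across clusters at a glance.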
Comparing "Traditional" and Web Specific Fit Tables in Maintenance Tasks: A Preliminary Empirical Study
A. Marchetto, F. Ricca, Marco Torchiano
Pub Date: 2008-04-01 | DOI: 10.1109/CSMR.2008.4493327
In this paper we focus on the use of Fit tables, a table-based approach used to clarify (change-)requirements and validate software systems. The main purpose of this work is to compare Fit tables for traditional systems with Web-specific Fit tables. Results indicate that Fit tables do not provide any significant help to Web developers, while they seem to be useful for traditional systems. The main reason appears to lie in the complexity of the Fit tables used for Web systems.

Supporting the Grow-and-Prune Model in Software Product Lines Evolution Using Clone Detection
Thilo Mende, Felix Beckwermert, R. Koschke, G. Meier
Pub Date: 2008-04-01 | DOI: 10.1109/CSMR.2008.4493311
Software product lines (SPL) can be used to create and maintain different variants of software-intensive systems by explicitly managing variability. Often, SPLs are organized as an SPL core, common to all products, upon which product-specific components are built. Following the so-called grow-and-prune model, SPLs may be evolved by copy-and-paste at large scale. New products are created from existing ones, and existing products are enhanced with functionality specific to other products by copying and pasting code between product-specific components. To regain control of this unmanaged growth, such code may be pruned, that is, identified and, upon success, refactored into core components. This paper describes tool support for the grow-and-prune model in the evolution of software product lines by identifying similar functions which can be moved to the core. These functions are identified in two steps. First, token-based clone detection is used to detect pairs of functions sharing code. Second, Levenshtein distance measures the textual similarity among these functions. Sufficient similarity at function level is then lifted to the architectural level. The approach is evaluated in three case studies: one using an open source email client to simulate the initial creation of an SPL, and two monitoring existing industrial product lines from the embedded domain.

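The second step of the abstract's pipeline, Levenshtein distance over function token sequences, can be sketched as follows. This is a generic textbook implementation, not the paper's tool; the token sequences and the similarity normalization are illustrative assumptions:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def similarity(tokens_a, tokens_b):
    """Normalize edit distance into a 0..1 similarity score."""
    d = levenshtein(tokens_a, tokens_b)
    return 1.0 - d / max(len(tokens_a), len(tokens_b))

# Two near-identical functions from different products (hypothetical tokens):
f1 = ["int", "f", "(", "x", ")", "{", "return", "x", "+", "1", ";", "}"]
f2 = ["int", "g", "(", "y", ")", "{", "return", "y", "+", "1", ";", "}"]
score = similarity(f1, f2)  # high score -> candidate for moving to the core
```

Function pairs whose score exceeds a threshold would then be candidates for refactoring into the SPL core.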
Towards Recovering Architectural Concepts Using Latent Semantic Indexing
Pieter Van Der Spek, Steven Klusener, Pierre Van De Laar
Pub Date: 2008-04-01 | DOI: 10.1109/CSMR.2008.4493321
In order to address the problem of locating high-level concepts in source code, we propose to use an advanced information retrieval method to exploit linguistic information found in source code, such as variable names and comments. Our technique is based on latent semantic indexing (LSI), which is also used in today's search engines. Applying LSI to source code, however, is not straightforward. Our approach therefore not only includes LSI, but also several other algorithms and methods. We discuss the algorithms and methods that turned out to be useful and provide an overview of their effects using the results obtained from a case study at Philips Healthcare.

Metamodels Taken Seriously: The TGraph Approach
J. Ebert
Pub Date: 2008-04-01 | DOI: 10.1109/CSMR.2008.4493294
Source code and accompanying documents that are subject to reverse engineering activities are usually written in different artifact languages, ranging from programming languages and diagram languages to natural languages. For the purpose of information extraction from such heterogeneous sources, a common unifying representation is essential. Metamodeling is a popular approach to defining the abstract syntax of any kind of language and is capable of handling almost all kinds of languages and formats occurring in reverse engineering contexts. Metamodels specify how concrete artifacts are to be represented as instances. The instances of CMOF-like metamodels can be viewed as graphs. TGraphs are a very general graph concept based on vertices and edges as first-class entities, including types, attributes, and ordering for both. Their use allows a common integrated representation of all kinds of documents in a concise manner which is simultaneously formal, visualisable, and efficiently processable. This talk will explain the use of metamodeling of artifacts and their representation by TGraphs as an efficient data structure. It will illustrate the role of graph algorithms and graph querying as enabling technologies in graph-based reverse engineering tools.

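The defining TGraph properties named in the abstract (vertices and edges as first-class entities, each with a type, attributes, and ordered incidences) can be captured in a minimal sketch. This is an illustrative data structure, not the actual JGraLab/GUPRO API; the class and type names are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Vertex:
    vtype: str                                   # vertex type
    attrs: dict = field(default_factory=dict)    # typed attributes
    incidences: list = field(default_factory=list)  # ordered incident edges

@dataclass
class Edge:
    etype: str                                   # edges are first-class too
    src: "Vertex"
    dst: "Vertex"
    attrs: dict = field(default_factory=dict)

class TGraph:
    def __init__(self):
        self.vertices, self.edges = [], []

    def add_vertex(self, vtype, **attrs):
        v = Vertex(vtype, attrs)
        self.vertices.append(v)
        return v

    def add_edge(self, etype, src, dst, **attrs):
        e = Edge(etype, src, dst, attrs)
        self.edges.append(e)
        src.incidences.append(e)  # incidence order is preserved
        dst.incidences.append(e)
        return e

# A two-vertex fragment of a source-code graph (hypothetical schema):
g = TGraph()
m = g.add_vertex("Method", name="parse")
c = g.add_vertex("Class", name="Lexer")
g.add_edge("IsMemberOf", m, c)
```

Graph queries and algorithms then operate uniformly over such instances, whatever artifact language the metamodel describes.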
Do Design Patterns Impact Software Quality Positively?
Foutse Khomh, Yann-Gaël Guéhéneuc
Pub Date: 2008-04-01 | DOI: 10.1109/CSMR.2008.4493325
We study the impact of design patterns on quality attributes in the context of software maintenance and evolution. We show that, contrary to popular belief, design patterns in practice negatively impact several quality attributes, thus providing concrete evidence against common lore. We then study design patterns and object-oriented best practices by formulating a second hypothesis on the impact of these principles on quality. We show that results for some design patterns cannot be explained and conclude on the need for further studies. Thus, we bring further evidence that design patterns should be used with caution during development because they may actually impede maintenance and evolution.

A Tool for Optimizing the Build Performance of Large Software Code Bases
A. Telea, L. Voinea
Pub Date: 2008-04-01 | DOI: 10.1109/CSMR.2008.4493340
We present Build Analyzer, a tool that helps developers optimize the build performance of huge systems written in C. Due to complex C header dependencies, even small code changes can cause extremely long rebuilds, which are problematic when code is shared and modified by teams of hundreds of individuals. Build Analyzer supports several use cases. For developers, it provides an estimate of the build impact and distribution caused by a given change. For architects, it shows why a build is costly, how its cost is spread over the entire code base, and which headers cause build bottlenecks, and it suggests ways to refactor these to reduce the cost. We demonstrate Build Analyzer with a use case on a real industry code base.

Software Maintenance and Reengineering in the Days of Software Agents
J. Mylopoulos
Pub Date: 2008-04-01 | DOI: 10.1109/CSMR.2008.4493293
There is an ongoing paradigm shift in Software Engineering from object-orientation to agent-orientation. We review some of the reasons for this, and briefly overview the state of the art in Agent-Oriented Software Engineering (AOSE). We then sketch some threads of long-term research on autonomic software, software monitoring and diagnosis, and requirements evolution. In addition, we discuss the impact this research may have on how software maintenance and reengineering is done in the future. The research reported is the result of collaborations with colleagues at the Universities of Toronto and Trento and a number of other academic institutions.

Training and Certifying Software Maintainers
H. Sneed, Stefan Opferkuch
Pub Date: 2008-04-01 | DOI: 10.1109/CSMR.2008.4493306
This position paper presents a proposal for training and certifying software maintainers, both at the academic level and in industry. Two levels of certification are proposed, similar to the existing certification of software testers: a foundation level and an advanced level. A training course is outlined for each: one for the foundation level and one for the advanced level. The courses were developed and delivered by the authors both at the university and for industrial customers and were met with acceptance in both settings. Certifying maintainers according to an internationally accredited body of knowledge would help to establish software maintenance and evolution as an acknowledged profession. The certification program would also give universities and training institutes a common guideline for teaching maintenance and evolution of software.

Coping with Requirements Changes in Software Verification and Validation
Shimin Li, L. Tahvildari, Weining Liu, M. Morrissey, G. Cort
Pub Date: 2008-04-01 | DOI: 10.1109/CSMR.2008.4493337
The testing activities of the Software Verification & Validation (SV&V) team at Research In Motion (RIM) are requirements-based, an approach commonly known as requirements-driven testing (RDT). Software requirements are continuously changing, which has an important impact on the RDT process. This paper describes the major challenges in coping with requirements changes in the software verification and validation processes and indicates how those challenges are being addressed at RIM.
