A fine-grained version and configuration model in analysis and design
Pub Date: 2002-10-03, DOI: 10.1109/ICSM.2002.1167813
Dirk Ohst, U. Kelter
In this paper we present a model of version and configuration management for the early phases of software development, and an implementation of this model. We assume that software documents are modeled in a fine-grained way, that they are stored as syntax trees in XML files or in a repository system, and that tools operate directly on these syntax trees. In contrast to file-based systems, our model can identify structural changes in a document, e.g., the shifting of a method from one class to another. Configurations allow us to manage groups of single modifications; such a group will mostly correspond to a specific design task or a similar activity. Configurations are thus a means to establish a connection to a change management system.
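To make the contrast with file-based versioning concrete, here is a minimal sketch of how a tree-based model can recognize the paper's example, a method shifted between two classes. The XML schema, element names, and persistent method ids are invented for illustration; they are not the authors' actual document format.

```python
# Illustrative sketch only: detect a method moved between classes by
# comparing two fine-grained syntax trees. The <class>/<method> schema
# and the persistent "id" attribute are hypothetical.
import xml.etree.ElementTree as ET

OLD = """<model>
  <class name="Order"><method id="m1" name="total"/></class>
  <class name="Invoice"/>
</model>"""

NEW = """<model>
  <class name="Order"/>
  <class name="Invoice"><method id="m1" name="total"/></class>
</model>"""

def methods_by_class(xml_text):
    """Map each method's persistent id to the name of its owning class."""
    root = ET.fromstring(xml_text)
    return {m.get("id"): c.get("name")
            for c in root.iter("class")
            for m in c.findall("method")}

old, new = methods_by_class(OLD), methods_by_class(NEW)
for mid in old.keys() & new.keys():
    if old[mid] != new[mid]:
        # A line-based diff would report an unrelated delete and add;
        # the tree comparison identifies a single structural move.
        print(f"method {mid} moved from class {old[mid]} to class {new[mid]}")
```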
{"title":"A fine-grained version and configuration model in analysis and design","authors":"Dirk Ohst, U. Kelter","doi":"10.1109/ICSM.2002.1167813","DOIUrl":"https://doi.org/10.1109/ICSM.2002.1167813","url":null,"abstract":"In this paper we present a model of version and configuration management in the early phases of software development and an implementation of this model. We assume that software documents are modeled in a fine-grained way, that they are stored as syntax trees in XML files or a repository system, and that tools directly operate on these syntax trees. In contrast to file-based systems, structural changes in the document, e.g. the shifting of a method between two classes, can be identified in our model. Configurations allow us to manage groups of single modifications; such a group will mostly correspond to a specific design task or a similar activity. Configurations are thus a means to establish a connection to a change management system.","PeriodicalId":385190,"journal":{"name":"International Conference on Software Maintenance, 2002. Proceedings.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2002-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134289583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Testability analysis for software components
Pub Date: 2002-10-03, DOI: 10.1109/ICSM.2002.1167799
N. T. Binh, M. Delaunay, C. Robach
In this paper, we propose to use static single assignment form, originally developed for code optimization in compilers, to transform software components into a data-flow representation. Hardware testability concepts can then be used to analyze the testability of components described by C or Ada programs. Such a testability analysis helps designers during the specification phases of their components, and helps testers during the testing phases to evaluate and, if necessary, modify the design.
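To illustrate the transformation the paper builds on, here is a toy SSA renaming pass for straight-line code, sketched in Python rather than operating on C or Ada programs; real SSA construction also inserts phi-functions at control-flow joins, which this sketch deliberately omits.

```python
# Toy static single assignment (SSA) renaming for straight-line code.
# Each assignment gets a fresh variable version, so every use names
# exactly one definition: the def-use (data-flow) edges become explicit.
def to_ssa(stmts):
    version = {}                      # current version of each variable
    out = []
    for lhs, rhs_vars in stmts:       # statement: lhs = f(rhs_vars)
        # uses refer to the latest version of each operand
        rhs_ssa = [f"{v}{version.get(v, 0)}" for v in rhs_vars]
        version[lhs] = version.get(lhs, 0) + 1    # fresh name per definition
        out.append((f"{lhs}{version[lhs]}", rhs_ssa))
    return out

# x = a + b; x = x + c; y = x
for lhs, rhs in to_ssa([("x", ["a", "b"]), ("x", ["x", "c"]), ("y", ["x"])]):
    print(lhs, "<-", ", ".join(rhs))   # x1 <- a0, b0 / x2 <- x1, c0 / y1 <- x2
```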
{"title":"Testability analysis for software components","authors":"N. T. Binh, M. Delaunay, C. Robach","doi":"10.1109/ICSM.2002.1167799","DOIUrl":"https://doi.org/10.1109/ICSM.2002.1167799","url":null,"abstract":"In this paper, we propose to use the static single assignment form, which was originally proposed for code optimization in compilation techniques, in order to transform software components into a data-flow representation. Thus, hardware testability concepts can be used to analyze the testability of components that are described by C or Ada programs. Such a testability analysis helps designers during the specification phases of their components and testers during the testing phases to evaluate and eventually to modify the design.","PeriodicalId":385190,"journal":{"name":"International Conference on Software Maintenance, 2002. Proceedings.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2002-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123484812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Constructing precise object relation diagrams
Pub Date: 2002-10-03, DOI: 10.1109/ICSM.2002.1167823
Ana L. Milanova, A. Rountev, B. Ryder
The object relation diagram (ORD) of a program is a class interdependence diagram with applications in a wide variety of software engineering problems (e.g., integration testing, integration coverage analysis, regression testing, impact analysis, program understanding, and reverse engineering). Because the precision of the ORD directly affects its usefulness in practice, it is important to investigate techniques for constructing precise ORDs. This paper makes three contributions. First, we develop the extended object relation diagram (ExtORD), a version of the ORD designed for use in integration coverage analysis. The ExtORD shows the specific statement that creates an interclass dependence, and can be easily constructed by extending techniques for ORD construction. Second, we develop a general algorithm for ORD construction, parameterized by class analysis. Third, we demonstrate empirically that relatively precise class analyses can significantly improve diagram precision compared to earlier work, resulting in an average size reduction of 55% for the ORD and 39% for the ExtORD.
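The sketch below illustrates, on invented inputs, what "parameterized by class analysis" can mean: the same edge-construction loop yields a smaller diagram when the plugged-in analysis resolves reference sites to fewer possible classes. The program representation and both toy analyses are assumptions for illustration, not the paper's algorithm.

```python
# Sketch of ORD construction parameterized by a class analysis.
# The input format and both "analyses" below are invented.
from typing import Callable, Dict, List, Set, Tuple

# Each class is described by the declared types it references
# (fields, parameters, allocation sites).
PROGRAM: Dict[str, List[str]] = {
    "Order":  ["Item", "Shape"],
    "Item":   [],
    "Shape":  [],
    "Circle": [],   # subclass of Shape
}

SUBTYPES = {"Shape": {"Shape", "Circle"}}

def naive_analysis(t: str) -> Set[str]:
    """Imprecise analysis: a reference of declared type T may point to
    T or any of its subtypes (in the spirit of class hierarchy analysis)."""
    return SUBTYPES.get(t, {t})

def precise_analysis(t: str) -> Set[str]:
    """Stand-in for a more precise (e.g. points-to based) analysis:
    here it simply resolves each reference to its declared type."""
    return {t}

def build_ord(analysis: Callable[[str], Set[str]]) -> Set[Tuple[str, str]]:
    return {(c, target)
            for c, refs in PROGRAM.items()
            for t in refs
            for target in analysis(t)}

print(len(build_ord(naive_analysis)), "edges with the imprecise analysis")   # 3
print(len(build_ord(precise_analysis)), "edges with the precise analysis")   # 2
```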
{"title":"Constructing precise object relation diagrams","authors":"Ana L. Milanova, A. Rountev, B. Ryder","doi":"10.1109/ICSM.2002.1167823","DOIUrl":"https://doi.org/10.1109/ICSM.2002.1167823","url":null,"abstract":"The object relation diagram (ORD) of a program is a class interdependence diagram which has applications in a wide variety of software engineering problems (e.g., integration testing, integration coverage analysis, regression testing, impact analysis, program understanding, and reverse engineering). Because the imprecision of the ORD directly affects the practicality of its usage, it is important to investigate techniques for constructing precise ORDs. This paper makes three contributions. First, we develop the extended object relation diagram (ExtORD), a version of the ORD designed for use in integration coverage analysis. The ExtORD shows the specific statement that creates an interclass dependence, and can be easily constructed by extending techniques for ORD construction. Second, we develop a general algorithm for ORD construction, parameterized by class analysis. Third, we demonstrate empirically that relatively precise class analyses can significantly improve diagram precision compared to earlier work, resulting in average size reduction of 55% for the ORD and 39% for the ExtORD.","PeriodicalId":385190,"journal":{"name":"International Conference on Software Maintenance, 2002. Proceedings.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2002-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125656223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The principle of organizational maturity and E-type dynamics
Pub Date: 2002-10-03, DOI: 10.1109/ICSM.2002.1167749
B. Curtis
The fundamental characteristic of an E-type system is its need to evolve to satisfy the needs of its users. Most approaches to system enhancements assume that a set of evolving requirements can be elicited from system clients that indicate their common needs at the time the requirements are elicited. However, decades of experience in requirements analysis and business process engineering indicate that client organizations frequently lack common processes for common business functions performed across the organization. Thus, the elicited requirements contain a cacophony of approaches used by different organizational units for performing the same business process. The requirements therefore reflect the unnecessary complexity induced by a lack of process maturity, and this complexity exacerbates the phenomena described in the Laws of Software Evolution. The concepts underlying Watts Humphrey's Process Maturity Framework, best understood in its instantiation in the Capability Maturity Model (CMM), provide a way of predicting how organizational conditions will modulate a system's evolutionary trends as stated in Lehman's Laws of Software Evolution. Stated simply, the less mature the business processes automated in an E-type system, the greater the evolutionary effects described in the Laws of Software Evolution. Organizations with few or no stated processes will experience the greatest evolutionary impact in the system, since the specification of its enhancements will be little more precise than the ad hoc processes it is automating. Organizations that have local processes and procedures will have good local specifications, but their amalgamation into a system enhancement specification will be complex, since this amalgamation has not been previously worked out in the business processes being automated. An organization that has common business processes that can be tailored for local use has already performed much of the confusing, complex, and error-prone work that would otherwise have to be worked out by the software requirements team. The more an organization has disciplined methods for improving its business processes and deploying the improvements across the organization in an orderly way, the more control it will have over the evolutionary effects described in the Laws of Software Evolution. In the most mature organizations, the control of system evolution and complexity is performed initially at the level of the business process, allowing the system to evolve as part of a planned improvement with built-in controls on complexity. Thus the level of evolutionary impact experienced in a system is modulated by the maturity of the business processes being automated.
{"title":"The principle of organizational maturity and E-type dynamics","authors":"B. Curtis","doi":"10.1109/ICSM.2002.1167749","DOIUrl":"https://doi.org/10.1109/ICSM.2002.1167749","url":null,"abstract":"The fundamental characteristic of an E-type system is its need to evolve to satisfy the needs of its users. Most approaches to system enhancements assume that a set of evolving requirements can be elicited from system clients that indicate their common needs at the time the requirements are elicited. However, decades of experience in requirements analysis and business process engineering indicates that client organizations frequently lack common processes for common business functions performed across the organization. Thus, the elicited requirements contain a cacophony of approaches used by different organizational units for performing the same business process. Therefore the requirements reflect the unnecessary complexity induced by a lack of process maturity. This complexity exacerbates the phenomena described by in the Laws of Software Evolution. The concepts underlying Watts Humphrey's Process Maturity Framework -best understood in its instantiation in the Capability Maturity Model (CMM)provide a way of predicting how organizational conditions will modulate the system's evolutionary trends as stated in Lehman's Laws of Software Evolution. Stated simply, the less mature the business processes automated in an E-type system, the greater the evolutionary effects described in the Laws of Software Evolution. Organizations with few or no stated processes will experience the greatest evolutionary impact in the system, since the specification of its enhancements will be little more precise than the ad hoc processes it is automating. Organizations that have local processes and procedures will have good local specifications, but their amalgamation into a system enhancement specification will be complex, since this amalgamation has not been previously worked out in the business processes being automated. An organization that has common business processes that can be tailored for local use has already performed much of the confusing, complex, and error prone work that would otherwise have to be worked out by the software requirements team. The more an organization has disciplined methods for improving its business processes and deploying the improvements across the organization in an orderly way, the more the organization will have control over the evolutionary effects described in the Laws of Software Evolution. In the most mature organizations, the control of system evolution and complexity is performed initially at the level of the business process, allowing the system to evolve as part of a planned improvement with built-in controls on complexity. Thus the level of evolutionary impact experienced in a system is modulated by the maturity of the business processes being automated.","PeriodicalId":385190,"journal":{"name":"International Conference on Software Maintenance, 2002. Proceedings.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2002-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131822202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An approach to classify software maintenance requests
Pub Date: 2002-10-03, DOI: 10.1109/ICSM.2002.1167756
G. D. Lucca, M. D. Penta, Sara Gradara
When a software system that is critical for an organization exhibits a problem during operation, it must be fixed quickly to avoid serious economic losses. The problem is reported to the organization in charge of maintenance, and it should be dispatched correctly and quickly to the right maintenance team. We propose to automatically classify incoming maintenance requests (also called tickets), routing them to specialized maintenance teams. The final goal is a router that works around the clock and, without human intervention, dispatches incoming tickets with the lowest misclassification error with respect to a given routing policy. 6000 maintenance tickets from a large, multi-site software system, spanning about two years of in-field operation, were used to compare and assess the accuracy of different classification approaches (vector space model, Bayesian model, support vector machines, classification trees, and k-nearest-neighbor classification). The application and the tickets were divided into eight areas and pre-classified by human experts. Preliminary results were encouraging: up to 84% of the incoming tickets were correctly classified.
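For a sense of how one of the compared approaches works, here is a hedged sketch of a Bayesian classifier over a vector-space representation, using scikit-learn; this is not the authors' implementation, and the tickets and team labels below are invented examples.

```python
# Sketch of ticket routing as supervised text classification.
# The training tickets and team labels are fabricated for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tickets = [
    "login page returns error 500 after password reset",
    "nightly batch job aborts while loading billing records",
    "report totals do not match invoice amounts",
    "cannot authenticate with new LDAP credentials",
]
teams = ["web", "batch", "reporting", "web"]   # routing fixed by human experts

# TF-IDF gives the vector-space representation; naive Bayes is the
# Bayesian model trained on the pre-classified tickets.
router = make_pipeline(TfidfVectorizer(), MultinomialNB())
router.fit(tickets, teams)

# A new ticket is dispatched without human intervention.
print(router.predict(["batch import of billing data fails at step 3"]))
```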
{"title":"An approach to classify software maintenance requests","authors":"G. D. Lucca, M. D. Penta, Sara Gradara","doi":"10.1109/ICSM.2002.1167756","DOIUrl":"https://doi.org/10.1109/ICSM.2002.1167756","url":null,"abstract":"When a software system critical for an organization exhibits a problem during its operation, it is relevant to fix it in a short period of time, to avoid serious economical losses. The problem is therefore noticed by the organization in charge of the maintenance, and it should be correctly and quickly dispatched to the right maintenance team. We propose to automatically classify incoming maintenance requests (also said tickets), routing them to specialized maintenance teams. The final goal is to develop a router working around the clock, that, without human intervention, dispatches incoming tickets with the lowest misclassification error, measured with respect to a given routing policy. 6000 maintenance tickets from a large, multi-site, software system, spanning about two years of system in-field operation, were used to compare and assess the accuracy of different classification approaches (i.e., Vector Space model, Bayesian model, support vectors, classification trees and k-nearest neighbor classification). The application and the tickets were divided into eight areas and pre-classified by human experts. Preliminary results were encouraging, up to 84% of the incoming tickets were correctly classified.","PeriodicalId":385190,"journal":{"name":"International Conference on Software Maintenance, 2002. Proceedings.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2002-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116395298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A quantitative evaluation of maintainability enhancement by refactoring
Pub Date: 2002-10-03, DOI: 10.1109/ICSM.2002.1167822
Y. Kataoka, Takeo Imai, Hiroki Andou, T. Fukaya
Program refactoring is a technique to enhance the maintainability of a program. Although the concept itself is considered to be effective, there are few quantitative evaluations of its impact on maintainability. Without knowing the effect accurately, it is sometimes difficult to judge whether a given refactoring should be applied. We propose a quantitative evaluation method to measure the maintainability enhancement effect of program refactoring. We focus on coupling metrics to evaluate the refactoring effect: by comparing the coupling before and after the refactoring, we can evaluate the degree of maintainability enhancement. We applied our method to a program and showed that it effectively quantifies the refactoring effect and helps in choosing appropriate refactorings.
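The sketch below illustrates the before/after comparison on a toy example. The metric is a simple CBO-style count (number of distinct other classes each class is coupled to); the paper's actual coupling metrics may differ, and the class dependencies are invented.

```python
# Illustrative only: compare a simple coupling measure before and after
# a "move method" refactoring. Not the paper's metric suite.
def coupling(deps):
    """deps maps each class to the set of classes it references.
    Returns, per class, the number of distinct coupled classes
    (counting both outgoing and incoming references)."""
    score = {c: set(refs) for c, refs in deps.items()}
    for c, refs in deps.items():
        for r in refs:
            score.setdefault(r, set()).add(c)
    return {c: len(others - {c}) for c, others in score.items()}

# Before: Report reaches into both Order and Customer to format data.
before = {"Report": {"Order", "Customer"}, "Order": {"Customer"}}
# After moving the formatting method into Order, Report talks to Order only.
after = {"Report": {"Order"}, "Order": {"Customer"}}

print("before:", coupling(before))   # total coupling: 6
print("after: ", coupling(after))    # total coupling: 4, i.e. it dropped
```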
{"title":"A quantitative evaluation of maintainability enhancement by refactoring","authors":"Y. Kataoka, Takeo Imai, Hiroki Andou, T. Fukaya","doi":"10.1109/ICSM.2002.1167822","DOIUrl":"https://doi.org/10.1109/ICSM.2002.1167822","url":null,"abstract":"Program refactoring is a technique to enhance the maintainability of a program. Although the concept itself is considered to be effective, there are few quantitative evaluation of its impact to the software maintainability. It is sometimes difficult to judge whether the refactoring in question should be applied or not without knowing the effect accurately. We propose a quantitative evaluation method to measure the maintainability enhancement effect of program refactoring. We focused on the coupling metrics to evaluate the refactoring effect. By comparing the coupling before and after the refactoring, we could evaluate the degree of maintainability enhancement. We applied our method to a certain program and showed that our method was really effective to quantify the refactoring effect and helped us to choose appropriate refactorings.","PeriodicalId":385190,"journal":{"name":"International Conference on Software Maintenance, 2002. Proceedings.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2002-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121852695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating context-sensitive slicing and chopping
Pub Date: 2002-10-03, DOI: 10.1109/ICSM.2002.1167744
J. Krinke
We present an empirical evaluation of three context-sensitive slicing algorithms and five context-sensitive chopping algorithms, and compare them to context-insensitive methods. Besides the algorithms by Reps et al. (1994, 1995) and Agrawal (2001), we investigate six new algorithms based on variations of k-limited call strings and on approximate chopping based on summary information. It turns out that chopping based on summary information may have prohibitive complexity, and that the approximate algorithms are almost as precise and much faster.
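As a toy illustration of the k-limiting idea the new algorithms vary: a calling context is the sequence of call sites on the stack, truncated to its most recent k entries, so smaller k merges more contexts (cheaper analysis, less precision). The contexts below are invented; this is not the evaluated slicers' machinery.

```python
# k-limited call strings: keep only the k most recent call sites.
def k_limit(call_string, k):
    return tuple(call_string[-k:]) if k else ()

# Two distinct ways of reaching procedure g.
contexts = [("main", "f", "g"), ("main", "h", "g")]

for k in (0, 1, 2, 3):
    merged = {k_limit(c, k) for c in contexts}
    # With k <= 1 the two paths to g become indistinguishable, which is
    # exactly where a k-limited slicer or chopper loses precision.
    print(f"k={k}: {len(merged)} distinct context(s): {merged}")
```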
{"title":"Evaluating context-sensitive slicing and chopping","authors":"J. Krinke","doi":"10.1109/ICSM.2002.1167744","DOIUrl":"https://doi.org/10.1109/ICSM.2002.1167744","url":null,"abstract":"We present an empirical evaluation of three context-sensitive slicing algorithms and five context-sensitive chopping algorithms, and compare them to context-insensitive methods. Besides the algorithms by Reps et al. (1994, 1995) and Agrawal (2001) we investigate six new algorithms based on variations of k-limited call strings and approximative chopping based on summary information. It turns out that chopping based on summary information may have a prohibitive complexity, and that approximate algorithms are almost as precise and much faster.","PeriodicalId":385190,"journal":{"name":"International Conference on Software Maintenance, 2002. Proceedings.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2002-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127021123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Putting escape analysis to work for software testing
Pub Date: 2002-10-03, DOI: 10.1109/ICSM.2002.1167800
Amie L. Souter, L. Pollock
Developed primarily for the optimization of functional and object-oriented software, escape analysis determines whether the lifetime of data exceeds its static scope. We demonstrate how to apply escape analysis to software engineering tasks. In particular, we present novel software testing and retesting techniques for object-oriented software which utilize escape analysis. We exploit a combined pointer and escape analysis that can identify how individual objects allocated in one region of a program interact with other regions of the program. The analysis framework increases flexibility and scalability: testing coverage can be targeted to an arbitrary region of a program, followed by integration testing focused on particular sets of objects escaping the region. We demonstrate how regression testing can be performed using this framework. We believe such a flexible framework becomes increasingly beneficial as applications become more component-oriented.
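A minimal sketch of the underlying idea, under strong simplifying assumptions: an object allocated inside a region "escapes" if it becomes reachable from outside, e.g. by being returned or stored in a global. The toy intermediate representation below is invented; the paper's combined pointer and escape analysis tracks far more (fields, calls, threads).

```python
# Toy escape check over an invented three-address IR. Real escape
# analysis propagates points-to information through fields and calls.
def escaping_objects(stmts):
    points_to = {}                 # variable -> allocation site it names
    escaped = set()
    for op, *args in stmts:
        if op == "new":            # ("new", var, site)
            var, site = args
            points_to[var] = site
        elif op == "assign":       # ("assign", dst, src)
            dst, src = args
            if src in points_to:
                points_to[dst] = points_to[src]
        elif op in ("return", "store_global"):
            var, = args            # the named object leaves the region
            if var in points_to:
                escaped.add(points_to[var])
    return escaped

region = [
    ("new", "a", "site1"),     # stays local: coverable by region-local tests
    ("new", "b", "site2"),
    ("assign", "r", "b"),
    ("return", "r"),           # site2 escapes: a target for integration tests
]
print(escaping_objects(region))   # {'site2'}
```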
{"title":"Putting escape analysis to work for software testing","authors":"Amie L. Souter, L. Pollock","doi":"10.1109/ICSM.2002.1167800","DOIUrl":"https://doi.org/10.1109/ICSM.2002.1167800","url":null,"abstract":"Developed primarily for optimization of functional and object-oriented software, escape analysis discerns information to determine whether the lifetime of data exceeds its static scope. We demonstrate how to apply escape analysis to software engineering tasks. In particular we present novel software testing and retesting techniques for object-oriented software which utilize escape analysis. We exploit a combined pointer and escape analysis that is able to identify how individual objects allocated in one region of a program interact with other regions of a program. The analysis framework increases flexibility and scalability as testing coverage can be targeted to a specific arbitrary region of a program, followed by integration testing that can be focused on particular sets of objects escaping the region. We demonstrate how regression testing can be performed utilizing this framework. We believe such a flexible framework becomes increasingly beneficial as applications become more component-oriented.","PeriodicalId":385190,"journal":{"name":"International Conference on Software Maintenance, 2002. Proceedings.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2002-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115014530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling the cost-benefits tradeoffs for regression testing techniques
Pub Date: 2002-10-03, DOI: 10.1109/ICSM.2002.1167767
Alexey G. Malishevsky, G. Rothermel, Sebastian G. Elbaum
Regression testing is an expensive activity that can account for a large proportion of the software maintenance budget. Because engineers add tests to test suites as software evolves, increased test suite size makes revalidation of the software more expensive over time. Regression test selection, test suite reduction, and test case prioritization techniques can help by reducing the number of regression tests that must be run and by helping testers meet testing objectives more quickly. These techniques, however, can be expensive to employ and may not reduce overall regression testing costs. Thus, practitioners and researchers could benefit from cost models that help them assess the costs and benefits of these techniques. Cost models have been proposed for this purpose, but some of them omit important factors, and others cannot truly evaluate cost-effectiveness. In this paper, we present new cost-benefit models for regression test selection, test suite reduction, and test case prioritization that capture previously omitted factors and support cost-benefit analyses where they were not supported before. We present the results of an empirical study assessing these models.
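As a baseline for the kind of model the paper refines, a classic simplified condition (in the spirit of Leung and White's analysis, not the authors' models) says that regression test selection pays off when the cost of the selection analysis plus the cost of running the selected subset is below the cost of rerunning the full suite:

\[
C_{\mathrm{analysis}} \;+\; \sum_{t \in T'} c(t) \;<\; \sum_{t \in T} c(t)
\]

where \(T\) is the full test suite, \(T' \subseteq T\) the selected subset, \(c(t)\) the cost of executing and validating test \(t\), and \(C_{\mathrm{analysis}}\) the cost of the selection technique itself. Factors the paper argues earlier models omit, such as missed or delayed fault detection, do not appear in this one-line condition.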
{"title":"Modeling the cost-benefits tradeoffs for regression testing techniques","authors":"Alexey G. Malishevsky, G. Rothermel, Sebastian G. Elbaum","doi":"10.1109/ICSM.2002.1167767","DOIUrl":"https://doi.org/10.1109/ICSM.2002.1167767","url":null,"abstract":"Regression testing is an expensive activity that can account for a large proportion of the software maintenance budget. Because engineers add tests into test suites as software evolves, over time, increased test suite size makes revalidation of the software more expensive. Regression test selection, test suite reduction, and test case prioritization techniques can help with this, by reducing the number of regression tests that must be run and by helping testers meet testing objectives more quickly. These techniques, however can be expensive to employ and may not reduce overall regression testing costs. Thus, practitioners and researchers could benefit from cost models that would help them assess the cost-benefits of techniques. Cost models have been proposed for this purpose, but some of these models omit important factors, and others cannot truly evaluate cost-effectiveness. In this paper, we present new cost-benefits models for regression test selection, test suite reduction, and test case prioritization, that capture previously omitted factors, and support cost-benefits analyses where they were not supported before. We present the results of an empirical study assessing these models.","PeriodicalId":385190,"journal":{"name":"International Conference on Software Maintenance, 2002. Proceedings.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2002-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127766735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}