Pub Date: 2013-05-18 | DOI: 10.1109/ICSE.2013.6606702
H. V. Nguyen, H. Nguyen, T. Nguyen, T. Nguyen
PHP is a server-side language that is widely used for creating dynamic Web applications. However, as a dynamic language, PHP may induce certain programming errors that reveal themselves only at run time. A common type of error is dangling references, which occur if the referred program entities have not been declared in the current program execution. To prevent the run-time errors caused by such dangling references, we introduce Dangling Reference Checker (DRC), a novel tool to statically detect those references in the source code of PHP-based Web applications. DRC first identifies the path constraints of the program executions in which a program entity appears and then matches the path constraints of the entity's declarations and references to detect dangling ones. DRC is able to detect dangling reference errors in several real-world PHP systems with high accuracy. The video demonstration for DRC is available at http://www.youtube.com/watch?v=y_AKZYhLlU4.
{"title":"DRC: A detection tool for dangling references in PHP-based web applications","authors":"H. V. Nguyen, H. Nguyen, T. Nguyen, T. Nguyen","doi":"10.1109/ICSE.2013.6606702","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606702","url":null,"abstract":"PHP is a server-side language that is widely used for creating dynamic Web applications. However, as a dynamic language, PHP may induce certain programming errors that reveal themselves only at run time. A common type of error is dangling references, which occur if the referred program entities have not been declared in the current program execution. To prevent the run-time errors caused by such dangling references, we introduce Dangling Reference Checker (DRC), a novel tool to statically detect those references in the source code of PHP-based Web applications. DRC first identifies the path constraints of the program executions in which a program entity appears and then matches the path constraints of the entity's declarations and references to detect dangling ones. DRC is able to detect dangling reference errors in several real-world PHP systems with high accuracy. The video demonstration for DRC is available at http://www.youtube.com/watch?v=3Dy_AKZYhLlU4.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116673202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-05-18 | DOI: 10.1109/ICSE.2013.6606563
Mohammad Mahdi Hassan, J. Andrews
We introduce a family of coverage criteria, called Multi-Point Stride Coverage (MPSC). MPSC generalizes branch coverage to coverage of tuples of branches taken from the execution sequence of a program. We investigate its potential as a replacement for dataflow coverage, such as def-use coverage. We find that programs can be instrumented for MPSC easily, that the instrumentation usually incurs less overhead than that for def-use coverage, and that MPSC is comparable in usefulness to def-use in predicting test suite effectiveness. We also find that the space required to collect MPSC can be predicted from the number of branches in the program.
{"title":"Comparing Multi-Point Stride Coverage and dataflow coverage","authors":"Mohammad Mahdi Hassan, J. Andrews","doi":"10.1109/ICSE.2013.6606563","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606563","url":null,"abstract":"We introduce a family of coverage criteria, called Multi-Point Stride Coverage (MPSC). MPSC generalizes branch coverage to coverage of tuples of branches taken from the execution sequence of a program. We investigate its potential as a replacement for dataflow coverage, such as def-use coverage. We find that programs can be instrumented for MPSC easily, that the instrumentation usually incurs less overhead than that for def-use coverage, and that MPSC is comparable in usefulness to def-use in predicting test suite effectiveness. We also find that the space required to collect MPSC can be predicted from the number of branches in the program.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126951956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-05-18 | DOI: 10.1109/ICSE.2013.6606740
A. D. Souza
This paper proposes an extension of the Earned Value Management (EVM) technique that integrates historical cost performance data from processes under statistical control as a means to improve the predictability of project costs. The proposed technique was evaluated through an industrial case study covering its implementation in 22 software development projects. Hypothesis tests at the 95% significance level were performed, and the proposed technique was more accurate and more precise than the traditional technique for calculating the Cost Performance Index (CPI) and Estimate at Completion (EAC).
{"title":"A proposal for the improvement of project's cost predictability using EVM and historical data of cost","authors":"A. D. Souza","doi":"10.1109/ICSE.2013.6606740","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606740","url":null,"abstract":"This paper proposes an extension of the Earned Value Management - EVM technique through the integration of historical cost performance data of processes under statistical control as a means to improve the predictability of the cost of projects. The proposed technique was evaluated through a case-study in industry, which evaluated the implementation of the proposed technique in 22 software development projects Hypotheses tests with 95% significance level were performed, and the proposed technique was more accurate and more precise than the traditional technique for calculating the Cost Performance Index - CPI and Estimates at Completion - EAC.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115222664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-05-18 | DOI: 10.5555/2486788.2486950
J. Goodenough, C. Weinstock, Ari Z. Klein
Assurance cases provide a structured method of explaining why a system has some desired property, e.g., that the system is safe. But there is no agreed approach for explaining what degree of confidence one should have in the conclusions of such a case. In this paper, we use the principle of eliminative induction to provide a justified basis for assessing how much confidence one should have in an assurance case argument.
Title: Eliminative induction: A basis for arguing system confidence
Pub Date: 2013-05-18 | DOI: 10.1109/ICSE.2013.6606655
S. Santos, Felipe Santana Furtado Soares
The continuous growth in the use of Information and Communication Technology across different sectors of the market calls for software professionals with the qualifications needed to solve complex and diverse problems. Innovative teaching methodologies, such as the "Software Internship" model and PBL teaching approaches that are learner-centered and focus on bringing market reality into the learning environment, have been developed and implemented with a view to meeting this demand. However, the effectiveness of these methods cannot always be satisfactorily demonstrated. Prompted by this gap, this paper proposes a model for assessing students based on real market practices while preserving the authenticity of the learning environment. To evaluate this model, a case study on skills training for software specialists for the telecom market is discussed; it presents important results that show the applicability of the proposed model for teaching Software Engineering.
{"title":"Authentic assessment in Software Engineering education based on PBL principles a case study in the telecom market","authors":"S. Santos, Felipe Santana Furtado Soares","doi":"10.1109/ICSE.2013.6606655","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606655","url":null,"abstract":"The continuous growth of the use of Information and Communication Technology in different sectors of the market calls out for software professionals with the qualifications needed to solve complex and diverse problems. Innovative teaching methodologies, such as the \"Software Internship\" model and PBL teaching approaches that are learner-centered and focus on bringing market reality to the learning environment, have been developed and implemented with a view to meeting this demand. However, the effectiveness of these methods cannot always be satisfactorily proved. Prompted by this, this paper proposes a model for assessing students based on real market practices while preserving the authenticity of the learning environment. To evaluate this model, a case study on skills training for software specialists for the Telecom market is discussed, and presents important results that show the applicability of the proposed model for teaching Software Engineering.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122311971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-05-18 | DOI: 10.1109/ICSE.2013.6606650
Suresh Thummalapenta, P. Devaki, S. Sinha, S. Chandra, Sivagami Gnanasundaram, Deepak Nagaraj, Sampathkumar Sathishkumar
Test automation, which involves the conversion of manual test cases to executable test scripts, is necessary to carry out efficient regression testing of GUI-based applications. However, test automation requires a significant investment of time and skilled effort. Moreover, it is not a one-time investment: as the application or its environment evolves, test scripts demand continuous patching. Thus, it is challenging to perform test automation in a cost-effective manner. At IBM, we developed a tool, called ATA [1], [2], to meet this challenge. ATA has novel features that are designed to significantly lower the cost of initial test automation. Moreover, ATA can patch scripts automatically for certain types of application or environment changes. How well does ATA meet its objectives in the real world? In this paper, we present a detailed case study in the context of a challenging production environment: an enterprise web application that has over 6500 manual test cases, comes in two variants, evolves frequently, and needs to be tested on multiple browsers in time-constrained and resource-constrained regression cycles. We measured how well ATA improved the efficiency of initial automation. We also evaluated the effectiveness of ATA's change resilience along multiple dimensions: application versions, browsers, and browser versions. Our study highlights several lessons for test-automation practitioners as well as open research problems in test automation.
{"title":"Efficient and change-resilient test automation: An industrial case study","authors":"Suresh Thummalapenta, P. Devaki, S. Sinha, S. Chandra, Sivagami Gnanasundaram, Deepak Nagaraj, Sampathkumar Sathishkumar","doi":"10.1109/ICSE.2013.6606650","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606650","url":null,"abstract":"Test automation, which involves the conversion of manual test cases to executable test scripts, is necessary to carry out efficient regression testing of GUI-based applications. However, test automation takes significant investment of time and skilled effort. Moreover, it is not a one-time investment: as the application or its environment evolves, test scripts demand continuous patching. Thus, it is challenging to perform test automation in a cost-effective manner. At IBM, we developed a tool, called ATA [1], [2], to meet this challenge. ATA has novel features that are designed to lower the cost of initial test automation significantly. Moreover, ATA has the ability to patch scripts automatically for certain types of application or environment changes. How well does ATA meet its objectives in the real world? In this paper, we present a detailed case study in the context of a challenging production environment: an enterprise web application that has over 6500 manual test cases, comes in two variants, evolves frequently, and needs to be tested on multiple browsers in time-constrained and resource-constrained regression cycles. We measured how well ATA improved the efficiency in initial automation. We also evaluated the effectiveness of ATA's change-resilience along multiple dimensions: application versions, browsers, and browser versions. Our study highlights several lessons for test-automation practitioners as well as open research problems in test automation.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122318857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-05-18 | DOI: 10.1109/ICSE.2013.6606703
Leandro Sales Pinto, S. Sinha, A. Orso
Test suites, just like the applications they are testing, evolve throughout their lifetime. One of the main reasons for test-suite evolution is test obsolescence: test cases cease to work because of changes in the code and must be suitably repaired. There are several reasons why it is important to achieve a thorough understanding of how test cases evolve in practice. In particular, researchers who investigate automated test repair - an increasingly active research area - can use such understanding to develop more effective repair techniques that can be successfully applied in real-world scenarios. More generally, analyzing test-suite evolution can help testers better understand how test cases are modified during maintenance and improve the test evolution process, an extremely time-consuming activity for any nontrivial test suite. Unfortunately, there are no existing tools that facilitate investigation of test evolution. To tackle this problem, we developed TestEvol, a tool that enables the systematic study of test-suite evolution for Java programs and JUnit test cases. This demonstration presents TestEvol and illustrates its usefulness and practical applicability by showing how TestEvol can be successfully used on real-world software and test suites. Demo video at http://www.cc.gatech.edu/~orso/software/testevol/.
{"title":"TestEvol: A tool for analyzing test-suite evolution","authors":"Leandro Sales Pinto, S. Sinha, A. Orso","doi":"10.1109/ICSE.2013.6606703","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606703","url":null,"abstract":"Test suites, just like the applications they are testing, evolve throughout their lifetime. One of the main reasons for test-suite evolution is test obsolescence: test cases cease to work because of changes in the code and must be suitably repaired. There are several reasons why it is important to achieve a thorough understanding of how test cases evolve in practice. In particular, researchers who investigate automated test repair - an increasingly active research area - can use such understanding to develop more effective repair techniques that can be successfully applied in real-world scenarios. More generally, analyzing testsuite evolution can help testers better understand how test cases are modified during maintenance and improve the test evolution process, an extremely time consuming activity for any nontrivial test suite. Unfortunately, there are no existing tools that facilitate investigation of test evolution. To tackle this problem, we developed TestEvol, a tool that enables the systematic study of test-suite evolution for Java programs and JUnit test cases. This demonstration presents TestEvol and illustrates its usefulness and practical applicability by showing how TestEvol can be successfully used on real-world software and test suites. Demo video at http://www.cc.gatech.edu/~orso/software/testevol/.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114170813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-05-18 | DOI: 10.5555/2486788.2486808
Domenico Cotroneo, R. Pietrantuono, S. Russo
This work presents a method to combine testing techniques adaptively during the testing process. It aims to mitigate the sources of uncertainty in software testing processes by learning from past experience and, at the same time, adapting the technique selection to the current testing session. The method is based on machine learning strategies. It uses offline strategies to take into account historical information about the techniques' performance collected in past testing sessions; online strategies are then used to adapt the selection of test cases to the data observed as the testing proceeds. Experimental results show that the techniques' performance can be accurately characterized from features of past testing sessions by means of machine learning algorithms, and that integrating this result into the online algorithm improves fault-detection effectiveness with respect to single testing techniques, as well as to their random combination.
Title: A learning-based method for combining testing techniques
Pub Date: 2013-05-18 | DOI: 10.1109/ICSE.2013.6606645
Dennis Pagano, B. Brügge
User involvement in software engineering has been researched over the last three decades. However, existing studies concentrate mainly on early phases of user-centered design projects, while little is known about how professionals work with post-deployment end-user feedback. In this paper we report on an empirical case study that explores the current practice of user involvement during software evolution. We found that user feedback contains important information for developers, helps to improve software quality and to identify missing features. In order to assess its relevance and potential impact, developers need to analyze the gathered feedback, which is mostly accomplished manually and consequently requires high effort. Overall, our results show the need for tool support to consolidate, structure, analyze, and track user feedback, particularly when feedback volume is high. Our findings call for a hypothesis-driven analysis of user feedback to establish the foundations for future user feedback tools.
{"title":"User involvement in software evolution practice: A case study","authors":"Dennis Pagano, B. Brügge","doi":"10.1109/ICSE.2013.6606645","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606645","url":null,"abstract":"User involvement in software engineering has been researched over the last three decades. However, existing studies concentrate mainly on early phases of user-centered design projects, while little is known about how professionals work with post-deployment end-user feedback. In this paper we report on an empirical case study that explores the current practice of user involvement during software evolution. We found that user feedback contains important information for developers, helps to improve software quality and to identify missing features. In order to assess its relevance and potential impact, developers need to analyze the gathered feedback, which is mostly accomplished manually and consequently requires high effort. Overall, our results show the need for tool support to consolidate, structure, analyze, and track user feedback, particularly when feedback volume is high. Our findings call for a hypothesis-driven analysis of user feedback to establish the foundations for future user feedback tools.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128225936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-05-18 | DOI: 10.1109/GAS.2013.6632581
K. Cooper, W. Scacchi, Alf Inge Wang
In this article, we present a summary of the 3rd ICSE Workshop on Games and Software Engineering: Engineering Computer Games to Enable Positive, Progressive Change. The full-day workshop is planned to include a keynote speaker, a panel discussion, and paper presentations on game software engineering topics related to requirements specification and verification, software engineering education, re-use, and infrastructure. The accepted papers are overviewed here.
{"title":"3rd International workshop on games and software engineering: Engineering computer games to Enable Positive, Progressive Change (GAS 2013)","authors":"K. Cooper, W. Scacchi, Alf Inge Wang","doi":"10.1109/GAS.2013.6632581","DOIUrl":"https://doi.org/10.1109/GAS.2013.6632581","url":null,"abstract":"We present a summary of the 3rd ICSE Workshop on Games and Software Engineering: Engineering Computer Games to Enable Positive, Progressive Change in this article. The full day workshop is planned to include a keynote speaker, panel discussion, and paper presentations on game software engineering topics related to requirements specification and verification, software engineering education, re-use, and infrastructure. The accepted papers are overviewed here.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129330638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}