Automatic testing of sequential and concurrent substitutability
Pub Date: 2013-05-18 · DOI: 10.1109/ICSE.2013.6606574
Michael Pradel, T. Gross
Languages with inheritance and polymorphism assume that a subclass instance can substitute a superclass instance without causing behavioral differences for clients of the superclass. However, programmers may accidentally create subclasses that are semantically incompatible with their superclasses. Such subclasses lead to bugs, because a programmer may assign a subclass instance to a superclass reference. This paper presents an automatic testing technique to reveal subclasses that cannot safely substitute their superclasses. The key idea is to generate generic tests that analyze the behavior of both the subclass and its superclass. If using the subclass leads to behavior that cannot occur with the superclass, the analysis reports a warning. We find a high percentage of widely used Java classes, including classes from JBoss, Eclipse, and Apache Commons Collections, to be unsafe substitutes for their superclasses: 30% of these classes lead to crashes, and even more have other behavioral differences.
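The core idea can be illustrated with a minimal sketch (not the authors' tool; all class and method names below are invented for illustration): run the same generated client actions against a superclass instance and a subclass instance, and warn when the subclass exhibits an outcome, such as a crash, that the superclass cannot produce.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Minimal sketch of a generic substitutability check: execute the same
// generated call sequence against a superclass instance and a subclass
// instance, and warn if the outcomes differ.
public class SubstitutabilityCheck {

    // Executes one generated call sequence and records its outcome,
    // treating a crash as just another observable outcome.
    static String outcomeOf(List<Integer> receiver) {
        try {
            receiver.add(42);
            receiver.add(7);
            return "result:" + receiver;
        } catch (RuntimeException e) {
            return "exception:" + e.getClass().getName();
        }
    }

    static void check(Supplier<List<Integer>> superFactory,
                      Supplier<List<Integer>> subFactory) {
        String superOutcome = outcomeOf(superFactory.get());
        String subOutcome = outcomeOf(subFactory.get());
        if (!subOutcome.equals(superOutcome)) {
            System.out.println("WARNING: subclass behavior " + subOutcome
                    + " differs from superclass behavior " + superOutcome);
        }
    }

    public static void main(String[] args) {
        // A hypothetical subclass that refuses new elements: it crashes
        // where its superclass ArrayList succeeds.
        check(ArrayList::new, () -> new ArrayList<Integer>() {
            @Override public boolean add(Integer e) {
                throw new UnsupportedOperationException("refuses new elements");
            }
        });
    }
}
```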
{"title":"Automatic testing of sequential and concurrent substitutability","authors":"Michael Pradel, T. Gross","doi":"10.1109/ICSE.2013.6606574","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606574","url":null,"abstract":"Languages with inheritance and polymorphism assume that a subclass instance can substitute a superclass instance without causing behavioral differences for clients of the superclass. However, programmers may accidentally create subclasses that are semantically incompatible with their superclasses. Such subclasses lead to bugs, because a programmer may assign a subclass instance to a superclass reference. This paper presents an automatic testing technique to reveal subclasses that cannot safely substitute their superclasses. The key idea is to generate generic tests that analyze the behavior of both the subclass and its superclass. If using the subclass leads to behavior that cannot occur with the superclass, the analysis reports a warning. We find a high percentage of widely used Java classes, including classes from JBoss, Eclipse, and Apache Commons Collections, to be unsafe substitutes for their superclasses: 30% of these classes lead to crashes, and even more have other behavioral differences.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125801637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
It's not a bug, it's a feature: How misclassification impacts bug prediction
Pub Date: 2013-05-18 · DOI: 10.1109/ICSE.2013.6606585
Kim Herzig, Sascha Just, A. Zeller
In a manual examination of more than 7,000 issue reports from the bug databases of five open-source projects, we found 33.8% of all bug reports to be misclassified - that is, rather than referring to a code fix, they resulted in a new feature, an update to documentation, or an internal refactoring. This misclassification introduces bias in bug prediction models, confusing bugs and features: On average, 39% of files marked as defective actually never had a bug. We discuss the impact of this misclassification on earlier studies and recommend manual data validation for future studies.
{"title":"It's not a bug, it's a feature: How misclassification impacts bug prediction","authors":"Kim Herzig, Sascha Just, A. Zeller","doi":"10.1109/ICSE.2013.6606585","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606585","url":null,"abstract":"In a manual examination of more than 7,000 issue reports from the bug databases of five open-source projects, we found 33.8% of all bug reports to be misclassified - that is, rather than referring to a code fix, they resulted in a new feature, an update to documentation, or an internal refactoring. This misclassification introduces bias in bug prediction models, confusing bugs and features: On average, 39% of files marked as defective actually never had a bug. We discuss the impact of this misclassification on earlier studies and recommend manual data validation for future studies.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"8 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128248519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mitigating the obsolescence of specification models of service-based systems
Pub Date: 2013-05-18 · DOI: 10.1109/ICSE.2013.6606745
Romina Torres
Service-based systems (SBS) must be able to adapt their architectural configurations at runtime in order to keep satisfying their specification models. These models result from the design-time derivation of requirements into precise and verifiable specifications, using knowledge about the current service offerings. Unfortunately, design-time knowledge may no longer be valid at runtime: non-functional constraints may carry different numerical meanings at different times, even for the same observers. Specification models thus become obsolete, impairing the SBS's ability to detect requirement violations at runtime and to trigger reconfigurations when appropriate. To mitigate the obsolescence of specification models, we propose to specify and verify them using the computing with words (CWW) methodology. First, non-functional properties (NFPs) of functionally equivalent services are modeled as linguistic variables, whose domains are concepts or linguistic values rather than precise numbers. Second, at design time, architects specify their requirements as linguistic decision models (LDMs) using these concepts. Third, at runtime, the CWW engine monitors whether the currently chosen architectural configuration satisfies the requirements. Fourth, each time a global concept drift is detected in the NFPs of the services market, the numerical meanings are updated. Our initial results are encouraging: the approach effectively and efficiently mitigates the obsolescence of the specification models that SBS use to drive their reconfigurations.
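A rough illustration of the first and fourth steps follows (a hypothetical sketch under simplifying assumptions; the class name, linguistic terms, and drift-update rule are invented, not the paper's implementation): a linguistic variable for an NFP such as response time lets requirements be phrased in concepts, while the numeric cut-points that give those concepts meaning are re-learned when concept drift is detected.

```java
import java.util.Arrays;

// Hypothetical linguistic variable for the NFP "response time": requirements
// refer to the concepts, while the numeric boundaries between them can be
// refreshed whenever concept drift is detected in the services market.
public class ResponseTimeVariable {
    private final String[] terms = {"fast", "acceptable", "slow"};
    private double[] cutPointsMs;   // numeric boundaries between terms

    ResponseTimeVariable(double[] initialCutPointsMs) {
        this.cutPointsMs = initialCutPointsMs.clone();
    }

    // Map a precise measurement to its current linguistic value.
    String valueOf(double measuredMs) {
        for (int i = 0; i < cutPointsMs.length; i++) {
            if (measuredMs <= cutPointsMs[i]) return terms[i];
        }
        return terms[terms.length - 1];
    }

    // On concept drift, re-learn the numeric meaning from current market
    // observations (here: simple tercile boundaries of observed latencies).
    void onConceptDrift(double[] marketObservationsMs) {
        double[] sorted = marketObservationsMs.clone();
        Arrays.sort(sorted);
        cutPointsMs = new double[]{sorted[sorted.length / 3],
                                   sorted[2 * sorted.length / 3]};
    }

    public static void main(String[] args) {
        ResponseTimeVariable rt = new ResponseTimeVariable(new double[]{100, 300});
        System.out.println(rt.valueOf(250)); // "acceptable" today
        rt.onConceptDrift(new double[]{40, 60, 80, 90, 120, 150}); // market got faster
        System.out.println(rt.valueOf(250)); // now "slow"
    }
}
```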
{"title":"Mitigating the obsolescence of specification models of service-based systems","authors":"Romina Torres","doi":"10.1109/ICSE.2013.6606745","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606745","url":null,"abstract":"Service-based systems (SBS) must be able to adapt their architectural configurations during runtime in order to keep satisfied their specification models. These models are the result of design time derivation of requirements into precise and verifiable specifications by using the knowledge about the current service offerings. Unfortunately, the design time knowledge may be no longer valid during runtime. Then, nonfunctional constraints may have different numerical meanings at different time even for the same observers. Thus, specification models become obsolete affecting the SBS' capability of detecting requirement violations during runtime and therefore they trigger reconfigurations when appropriated. In order to mitigate the obsolescence of specification models, we propose to specify and verify them using the computing with words (CWW) methodology. First, non-functional properties (NFPs) of functionally-equivalent services are modeled as linguistic variables, whose domains are concepts or linguistic values instead of precise numbers. Second, architects specify at design time their requirements as linguistic decision models (LDMs) using these concepts. Third, during runtime, the CWW engine monitors the requirements satisfaction by the current chosen architectural configuration. And fourth, each time a global concept drift is detected in the NFPs of the services market, the numerical meanings are updated. Our initial results are encouraging, where our approach mitigates effectively and efficiently the obsolescence of the specification models used by SBS to drive their reconfigurations.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121816384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
1st International workshop on natural language analysis in software engineering (NaturaLiSE 2013)
Pub Date: 2013-05-18 · DOI: 10.1109/ICSE.2013.6606775
L. Pollock, D. Binkley, Dawn J Lawrie, Emily Hill, R. Oliveto, G. Bavota, Alberto Bacchelli
Software engineers produce code that has formal syntax and semantics, which establishes its formal meaning. However, the code also includes significant natural language found primarily in identifier names and comments. Furthermore, the code is surrounded by non-source artifacts, predominantly written in natural language. The NaturaLiSE workshop focuses on natural language analysis of software. The workshop brings together researchers and practitioners interested in exploiting natural language information to create improved software engineering tools. Participants will explore natural language analysis applied to software artifacts, combining natural language and traditional program analysis, integration of natural language analyses into client tools, mining natural language data, and empirical studies focused on evaluating the usefulness of natural language analysis.
{"title":"1st International workshop on natural language analysis in software engineering (NaturaLiSE 2013)","authors":"L. Pollock, D. Binkley, Dawn J Lawrie, Emily Hill, R. Oliveto, G. Bavota, Alberto Bacchelli","doi":"10.1109/ICSE.2013.6606775","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606775","url":null,"abstract":"Software engineers produce code that has formal syntax and semantics, which establishes its formal meaning. However, the code also includes significant natural language found primarily in identifier names and comments. Furthermore, the code is surrounded by non-source artifacts, predominantly written in natural language. The NaturaLiSE workshop focuses on natural language analysis of software. The workshop brings together researchers and practitioners interested in exploiting natural language information to create improved software engineering tools. Participants will explore natural language analysis applied to software artifacts, combining natural language and traditional program analysis, integration of natural language analyses into client tools, mining natural language data, and empirical studies focused on evaluating the usefulness of natural language analysis.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"344 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122475982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Industry involvement in ICT curriculum: A comparative survey
Pub Date: 2013-05-18 · DOI: 10.1109/ICSE.2013.6606666
C. Pilgrim
Stakeholder consultation during course accreditation is now a requirement of new Australian government regulations as well as of the Australian ICT professional society's accreditation. Despite these requirements, there remain differences between universities and industry regarding the purpose, nature, and extent of industry involvement in the curriculum. Surveys of industry and university leaders in ICT were undertaken to provide a representative set of views on these issues. The results provided insights into the perceptions of universities and industry regarding industry involvement in the curriculum. They also confirmed previous research that identified a tension between industry's desire for relevant skills and the role of universities in providing a broader education for lifelong learning.
{"title":"Industry involvement in ICT curriculum: A comparative survey","authors":"C. Pilgrim","doi":"10.1109/ICSE.2013.6606666","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606666","url":null,"abstract":"Stakeholder consultation during course accreditation is now a requirement of new Australian government regulations as well as the Australian ICT professional society accreditation. Despite these requirements there remains some differences between universities and industry regarding the purpose, nature and extent of industry involvement in the curriculum. Surveys of industry and university leaders in ICT were undertaken to provide a representative set of views on these issues. The results provided insights into the perceptions of universities and industry regarding industry involvement into the curriculum. The results also confirmed previous research that identified a tension between industry's desire for relevant skills and the role of universities in providing a broader education for lifelong learning.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"88 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120995021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A roadmap for software maintainability measurement
Pub Date: 2013-05-18 · DOI: 10.1109/ICSE.2013.6606742
Juliana Saraiva
Object-Oriented Programming (OOP) is one of the most widely used programming paradigms, so research dedicated to improving the quality of software that adheres to this paradigm is in demand. Maintainability, in turn, is a software attribute that plays an important role in quality. In this context, Object-Oriented Software Maintainability (OOSM) has been studied for years, and researchers have proposed a large number of metrics to measure it. Nevertheless, there is no standardization or catalogue that summarizes the information about these metrics and helps researchers decide which ones to adopt in their OOSM experiments. Distinct areas in both academia and industry, such as software development, project management, and software research, can adopt these metrics to support decision-making processes. This work therefore investigated the usage of OOSM metrics in academia and industry in order to help researchers choose a metrics suite. We found 570 OOSM metrics. As a preliminary result, we propose a catalogue of the 36 metrics most used in academic work and experiments, aiming to guide researchers in deciding which metrics are best suited to their experiments.
{"title":"A roadmap for software maintainability measurement","authors":"Juliana Saraiva","doi":"10.1109/ICSE.2013.6606742","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606742","url":null,"abstract":"Object-Oriented Programming (OOP) is one of the most used programming paradigms. Thus, researches dedicated in improvement of software quality that adhere to this paradigm are demanded. Complementarily, maintainability is considered a software attribute that plays an important role in its quality level. In this context, Object-Oriented Software Maintainability (OOSM) has been studied through years and several researchers proposed a high number of metrics to measure it. Nevertheless, there is no standardization or a catalogue to summarize all the information about these metrics, helping the researchers to make decision about which metrics can be adopted to perform their experiments in OOSM. Actually, distinct areas in both academic and industrial environment, such as Software Development, Project Management, and Software Research can adopt them to support decision-making processes. Thus, this work researched about the usage of OOSM metrics in academia and industry in order to help researchers in making decision about the metrics suite to be adopted. We found 570 OOSM metrics. Additionally, as a preliminary result we proposed a catalog with 36 metrics that were most used in academic works/experiments, trying to guide researchers with their decision-make about which metrics are more indicated to be adopted in their experiments.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116306477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
1st International workshop on assurance cases for software-intensive systems (ASSURE 2013)
Pub Date: 2013-05-18 · DOI: 10.1109/ICSE.2013.6606759
E. Denney, Ganesh J. Pai, I. Habli, T. Kelly, J. Knight
Software plays a key role in high-risk systems, i.e., safety and security-critical systems. Several certification standards and guidelines, e.g., in the defense, transportation (aviation, automotive, rail), and healthcare domains, now recommend and/or mandate the development of assurance cases for software-intensive systems. As such, there is a need to understand and evaluate (a) the application of assurance cases to software, and (b) the relationship between the development and assessment of assurance cases, and software engineering concepts, processes and techniques. The ICSE 2013 Workshop on Assurance Cases for Software-intensive Systems (ASSURE) aims to provide an international forum for high-quality contributions (research, practice, and position papers) on the application of assurance case principles and techniques for software assurance, and on the treatment of assurance cases as artifacts to which the full range of software engineering techniques can be applied.
{"title":"1st International workshop on assurance cases for software-intensive systems (ASSURE 2013)","authors":"E. Denney, Ganesh J. Pai, I. Habli, T. Kelly, J. Knight","doi":"10.1109/ICSE.2013.6606759","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606759","url":null,"abstract":"Software plays a key role in high-risk systems, i.e., safety and security-critical systems. Several certification standards and guidelines, e.g., in the defense, transportation (aviation, automotive, rail), and healthcare domains, now recommend and/or mandate the development of assurance cases for software-intensive systems. As such, there is a need to understand and evaluate (a) the application of assurance cases to software, and (b) the relationship between the development and assessment of assurance cases, and software engineering concepts, processes and techniques. The ICSE 2013 Workshop on Assurance Cases for Software-intensive Systems (ASSURE) aims to provide an international forum for high-quality contributions (research, practice, and position papers) on the application of assurance case principles and techniques for software assurance, and on the treatment of assurance cases as artifacts to which the full range of software engineering techniques can be applied.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126701441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient construction of approximate call graphs for JavaScript IDE services
Pub Date: 2013-05-18 · DOI: 10.1109/ICSE.2013.6606621
Asger Feldthaus, Max Schäfer, Manu Sridharan, Julian T Dolby, F. Tip
The rapid rise of JavaScript as one of the most popular programming languages of the present day has led to a demand for sophisticated IDE support similar to what is available for Java or C#. However, advanced tooling is hampered by the dynamic nature of the language, which makes any form of static analysis very difficult. We single out efficient call graph construction as a key problem to be solved in order to improve development tools for JavaScript. To address this problem, we present a scalable field-based flow analysis for constructing call graphs. Our evaluation on large real-world programs shows that the analysis, while in principle unsound, produces highly accurate call graphs in practice. Previous analyses do not scale to these programs, but our analysis handles them in a matter of seconds, thus proving its suitability for use in an interactive setting.
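The field-based idea behind such an analysis can be sketched as follows (a toy illustration under strong simplifications, not the paper's algorithm: all properties with the same name are merged into one shared abstract location, so any function ever assigned to a property named render counts as a possible target of every call x.render(), regardless of receiver).

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy sketch of field-based call graph construction for a JavaScript-like
// program: one abstract location per property name, no receiver tracking.
public class FieldBasedCallGraph {

    public static void main(String[] args) {
        // Property writes observed in the source: property name -> functions.
        Map<String, Set<String>> propertyWrites = new HashMap<>();
        // e.g. obj.render = function renderList() {...}
        propertyWrites.computeIfAbsent("render", k -> new HashSet<>()).add("renderList");
        // e.g. other.render = function renderTable() {...}
        propertyWrites.computeIfAbsent("render", k -> new HashSet<>()).add("renderTable");
        propertyWrites.computeIfAbsent("save", k -> new HashSet<>()).add("saveModel");

        // Call sites found in the source: call site -> property name called.
        Map<String, String> callSites = Map.of(
                "main.js:12 x.render()", "render",
                "main.js:30 y.save()", "save");

        // Field-based resolution: a call through property p may target any
        // function ever stored in some property named p. This over-approximates
        // targets per name while ignoring dynamic property accesses, which is
        // why the approach is unsound in principle yet accurate in practice.
        for (Map.Entry<String, String> site : callSites.entrySet()) {
            Set<String> targets =
                    propertyWrites.getOrDefault(site.getValue(), Set.of());
            System.out.println(site.getKey() + " -> " + targets);
        }
    }
}
```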
{"title":"Efficient construction of approximate call graphs for JavaScript IDE services","authors":"Asger Feldthaus, Max Schäfer, Manu Sridharan, Julian T Dolby, F. Tip","doi":"10.1109/ICSE.2013.6606621","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606621","url":null,"abstract":"The rapid rise of JavaScript as one of the most popular programming languages of the present day has led to a demand for sophisticated IDE support similar to what is available for Java or C#. However, advanced tooling is hampered by the dynamic nature of the language, which makes any form of static analysis very difficult. We single out efficient call graph construction as a key problem to be solved in order to improve development tools for JavaScript. To address this problem, we present a scalable field-based flow analysis for constructing call graphs. Our evaluation on large real-world programs shows that the analysis, while in principle unsound, produces highly accurate call graphs in practice. Previous analyses do not scale to these programs, but our analysis handles them in a matter of seconds, thus proving its suitability for use in an interactive setting.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127163853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supporting maintenance tasks on transformational code generation environments
Pub Date: 2013-05-18 · DOI: 10.1109/ICSE.2013.6606719
Victor Guana
At the core of model-driven software development, model-transformation compositions enable the automatic generation of executable artifacts from models. Although the advantages of transformational software development have been explored by numerous academics and industry practitioners, adoption of the paradigm continues to be slow and limited to specific domains. The main obstacle to adoption is that maintenance tasks, such as analyzing and managing model-transformation compositions and reflecting code changes back into model transformations, are still largely unsupported by tools. My dissertation aims to enhance the field's understanding of the maintenance issues in transformational software development, and to support the tasks involved in synchronizing evolving system features with their generation environments. This paper discusses the three main aspects of the envisioned thesis: (a) complexity analysis of model-transformation compositions, (b) system feature localization and tracking in model-transformation compositions, and (c) refactoring of transformation compositions to improve their qualities.
{"title":"Supporting maintenance tasks on transformational code generation environments","authors":"Victor Guana","doi":"10.1109/ICSE.2013.6606719","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606719","url":null,"abstract":"At the core of model-driven software development, model-transformation compositions enable automatic generation of executable artifacts from models. Although the advantages of transformational software development have been explored by numerous academics and industry practitioners, adoption of the paradigm continues to be slow, and limited to specific domains. The main challenge to adoption is the fact that maintenance tasks, such as analysis and management of model-transformation compositions and reflecting code changes to model transformations, are still largely unsupported by tools. My dissertation aims at enhancing the field's understanding around the maintenance issues in transformational software development, and at supporting the tasks involved in the synchronization of evolving system features with their generation environments. This paper discusses the three main aspects of the envisioned thesis: (a) complexity analysis of model-transformation compositions, (b) system feature localization and tracking in model-transformation compositions, and (c) refactoring of transformation compositions to improve their qualities.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123806601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supporting swift reaction: Automatically uncovering performance problems by systematic experiments
Pub Date: 2013-05-18 · DOI: 10.1109/ICSE.2013.6606601
Alexander Wert, J. Happe, Lucia Happe
Performance problems pose a significant risk to software vendors. If left undetected, they can lead to lost customers, increased operational costs, and a damaged reputation. Despite all efforts, software engineers cannot fully prevent performance problems from being introduced into an application. Detecting and resolving such problems as early as possible with minimal effort is still an open challenge in software performance engineering. In this paper, we present a novel approach for Performance Problem Diagnostics (PPD) that systematically searches for well-known performance problems (also called performance antipatterns) within an application. PPD automatically isolates the problem's root cause, hence facilitating problem solving. We applied PPD to a well-established transactional web e-Commerce benchmark (TPC-W) in two deployment scenarios. PPD automatically identified four performance problems in the benchmark implementation and its deployment environment. By fixing the problems, we increased the maximum throughput of the benchmark from 1800 requests per second to more than 3500.
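A rough sketch of experiment-based antipattern detection follows (hypothetical names and a stubbed system under test; this is not the PPD implementation): measure average response time at systematically increased operation counts and flag the well-known "ramp" antipattern when response time keeps growing with the number of executed operations.

```java
import java.util.function.IntToDoubleFunction;

// Hypothetical sketch of systematic-experiment detection: run the system
// under test at increasing operation counts and flag "the ramp" antipattern
// (response time degrades as more operations have been executed).
public class RampDetector {

    // Runs one experiment at the given operation count and returns the
    // average response time in milliseconds (measurement stubbed here).
    static double experiment(int operations, IntToDoubleFunction systemUnderTest) {
        return systemUnderTest.applyAsDouble(operations);
    }

    public static void main(String[] args) {
        // Stub system whose response time degrades with executed operations,
        // e.g. due to an ever-growing internal cache.
        IntToDoubleFunction leakySystem = ops -> 20.0 + 0.05 * ops;

        int[] levels = {100, 1000, 10000};
        double[] responseMs = new double[levels.length];
        for (int i = 0; i < levels.length; i++) {
            responseMs[i] = experiment(levels[i], leakySystem);
            System.out.printf("ops=%d avg=%.1f ms%n", levels[i], responseMs[i]);
        }

        // Decision rule: monotone, substantial growth across levels suggests
        // the ramp; follow-up experiments would then vary one component at a
        // time to isolate the root cause.
        boolean ramp = responseMs[2] > 1.5 * responseMs[0]
                && responseMs[1] > responseMs[0];
        System.out.println(ramp ? "WARNING: ramp antipattern suspected"
                                : "no ramp detected");
    }
}
```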
{"title":"Supporting swift reaction: Automatically uncovering performance problems by systematic experiments","authors":"Alexander Wert, J. Happe, Lucia Happe","doi":"10.1109/ICSE.2013.6606601","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606601","url":null,"abstract":"Performance problems pose a significant risk to software vendors. If left undetected, they can lead to lost customers, increased operational costs, and damaged reputation. Despite all efforts, software engineers cannot fully prevent performance problems being introduced into an application. Detecting and resolving such problems as early as possible with minimal effort is still an open challenge in software performance engineering. In this paper, we present a novel approach for Performance Problem Diagnostics (PPD) that systematically searches for well-known performance problems (also called performance antipatterns) within an application. PPD automatically isolates the problem's root cause, hence facilitating problem solving. We applied PPD to a well established transactional web e-Commerce benchmark (TPC-W) in two deployment scenarios. PPD automatically identified four performance problems in the benchmark implementation and its deployment environment. By fixing the problems, we increased the maximum throughput of the benchmark from 1800 requests per second to more than 3500.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126603940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}