Spreadsheets are widely used in industry; it is estimated that end-user programmers outnumber regular programmers by a factor of five. However, spreadsheets are error-prone: several reports exist of companies that have lost large sums of money due to spreadsheet errors. In previous work, spreadsheet smells have been shown to be the cause of some of these errors. To address this, we have developed a tool that applies refactorings to spreadsheet formulas, building on our previous work on spreadsheet refactoring, which showed that spreadsheet formula smells are very common, that refactorings for them are widely applicable, and that refactoring with tool support is both quicker and less error-prone. Our new tool, BumbleBee, can execute refactorings originating from both of these earlier papers by means of an extensible syntax, and can furthermore apply refactorings to entire groups of formulas, thus improving upon the existing tool RefBook. Finally, BumbleBee can also execute transformations other than refactorings.
{"title":"BumbleBee: a refactoring environment for spreadsheet formulas","authors":"F. Hermans, Danny Dig","doi":"10.1145/2635868.2661673","DOIUrl":"https://doi.org/10.1145/2635868.2661673","url":null,"abstract":"Spreadsheets are widely used in industry. It is estimated that end-user programmers outnumber regular programmers by a factor of 5. However, spreadsheets are error-prone: several reports exist of companies that have lost big sums of money due to spreadsheet errors. In previous work, spreadsheet smells have proven to be the cause of some of these errors. To that end, we have developed a tool that can apply refactorings to spreadsheet formulas, implementing our previous work on spreadsheet refactoring, which showed that spreadsheet formula smells are very common and that refactorings for them are widely applicable and that refactoring them with a tool is both quicker and less error-prone. Our new tool Bumblebee is able to execute refactorings originating from both these papers, by means of an extensible syntax, and can furthermore apply refactorings on entire groups of formulas, thus improving upon the existing tool RefBook. Finally, BumbleBee can also execute transformations other than refactorings.","PeriodicalId":250543,"journal":{"name":"Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122737370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Amiangshu Bosu, Jeffrey C. Carver, M. Hafiz, Patrick Hilley, Derek Janni
To focus the efforts of security experts, the goals of this empirical study are to analyze which security vulnerabilities can be discovered by code review, identify characteristics of vulnerable code changes, and identify characteristics of developers likely to introduce vulnerabilities. Using a three-stage manual and automated process, we analyzed 267,046 code review requests from 10 open source projects and identified 413 Vulnerable Code Changes (VCCs). Some key results include: (1) code review can identify common types of vulnerabilities; (2) while more experienced contributors authored the majority of the VCCs, the less experienced contributors' changes were 1.8 to 24 times more likely to be vulnerable; (3) the likelihood of a vulnerability increases with the number of lines changed; and (4) modified files are more likely to contain vulnerabilities than new files. Knowing which code changes are more prone to contain vulnerabilities may allow a security expert to concentrate on a smaller subset of submitted code changes. Moreover, we recommend that projects should: (a) create or adapt secure coding guidelines, (b) create a dedicated security review team, (c) ensure detailed comments during review to help knowledge dissemination, and (d) encourage developers to make small, incremental changes rather than large changes.
{"title":"Identifying the characteristics of vulnerable code changes: an empirical study","authors":"Amiangshu Bosu, Jeffrey C. Carver, M. Hafiz, Patrick Hilley, Derek Janni","doi":"10.1145/2635868.2635880","DOIUrl":"https://doi.org/10.1145/2635868.2635880","url":null,"abstract":"To focus the efforts of security experts, the goals of this empirical study are to analyze which security vulnerabilities can be discovered by code review, identify characteristics of vulnerable code changes, and identify characteristics of developers likely to introduce vulnerabilities. Using a three-stage manual and automated process, we analyzed 267,046 code review requests from 10 open source projects and identified 413 Vulnerable Code Changes (VCC). Some key results include: (1) code review can identify common types of vulnerabilities; (2) while more experienced contributors authored the majority of the VCCs, the less experienced contributors' changes were 1.8 to 24 times more likely to be vulnerable; (3) the likelihood of a vulnerability increases with the number of lines changed, and (4) modified files are more likely to contain vulnerabilities than new files. Knowing which code changes are more prone to contain vulnerabilities may allow a security expert to concentrate on a smaller subset of submitted code changes. Moreover, we recommend that projects should: (a) create or adapt secure coding guidelines, (b) create a dedicated security review team, (c) ensure detailed comments during review to help knowledge dissemination, and (d) encourage developers to make small, incremental changes rather than large changes.","PeriodicalId":250543,"journal":{"name":"Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122999988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software maintenance differs in many respects from how other engineering disciplines carry out maintenance on their artifacts. This dissimilarity arises because it is easy to obtain a copy of the original artifact to be used in maintenance, and because the flat, textual form of software gives access to its components with nothing more than a text editor. Other engineering disciplines instead work on different versions of the artifact (obtained by disassembling it), in which modifications, after prior comprehension, are easier to introduce; the artifact is then reassembled. In software engineering this approach can be simulated by combining program transformation techniques, search-based software engineering technology, and design attributes. This ease of modification (absent in the other engineering disciplines), together with the intangible nature of software, can lead to the belief that a software maintenance model similar to those of the other engineering disciplines is unnecessary or unfeasible. This paper argues the opposite, and as a consequence an entirely new and more robust software maintenance model emerges.
{"title":"Software maintenance like maintenance in other engineering disciplines","authors":"G. Villavicencio","doi":"10.1145/2635868.2666613","DOIUrl":"https://doi.org/10.1145/2635868.2666613","url":null,"abstract":"abstract Software maintenance exhibits many differences regarding how other engineering disciplines carry out maintenance on their artifacts. Such dissimilarity is caused due to the fact that it is easy to get a copy from the original artifact to be used in maintenance, and also because the flat dimension of the software text facilitates access to the components by simply using a text editor. Other engineering disciplines resort to different artifact versions (obtained by dissassembling) where the introduction of modifications (previous comprehension) is easier. After which the artifact is reassembled. In software engineering this approach can be simulated by combining program transformation techniques, search-based software engineering technology and design attributes. %%This easiness (absent in the other engineering sciences) as well as the intangible software nature can lead to the belief %%that a software maintenance model similar to those of the other engineering sciences is unnecessary or unfeasible. %%This paper states the opposite, and as a consequence, an entirely new and more robust software maintenance model emerges. abstract","PeriodicalId":250543,"journal":{"name":"Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127546479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Writing efficient synchronization for multithreaded programs is notoriously hard. The resulting code often contains subtle concurrency bugs. Even worse, many bug fixes introduce new bugs. A classic example, seen widely in practice, is a deadlock that results from fixing an atomicity violation. These complexities have motivated the development of automated fixing techniques. Current techniques generate fixes that are typically conservative, giving up on available parallelism. Moreover, some of the techniques cannot guarantee the correctness of a fix and may introduce deadlocks similarly to manual fixes, whereas techniques that ensure correctness do so at the expense of even greater performance loss. We present Grail, a novel fixing algorithm that departs from previous techniques by simultaneously providing both correctness and optimality guarantees. Grail synthesizes bug-free yet optimal lock-based synchronization. To achieve this, Grail builds an analysis model of the buggy code that is both contextual, distinguishing different aliasing contexts to ensure efficiency, and global, accounting for the entire synchronization behavior of the involved threads to ensure correctness. Evaluation of Grail on 12 bugs from popular codebases confirms its practical advantages, especially compared with existing techniques: Grail patches are, in general, at least 40% more efficient than the patches produced by other techniques, and incur only 2% overhead.
{"title":"Grail: context-aware fixing of concurrency bugs","authors":"Peng Liu, Omer Tripp, Charles Zhang","doi":"10.1145/2635868.2635881","DOIUrl":"https://doi.org/10.1145/2635868.2635881","url":null,"abstract":"Writing efficient synchronization for multithreaded programs is notoriously hard. The resulting code often contains subtle concurrency bugs. Even worse, many bug fixes introduce new bugs. A classic example, seen widely in practice, is deadlocks resulting from fixing of an atomicity violation. These complexities have motivated the development of automated fixing techniques. Current techniques generate fixes that are typically conservative, giving up on available parallelism. Moreover, some of the techniques cannot guarantee the correctness of a fix, and may introduce deadlocks similarly to manual fix, whereas techniques that ensure correctness do so at the expense of even greater performance loss. We present Grail, a novel fixing algorithm that departs from previous techniques by simultaneously providing both correctness and optimality guarantees. Grail synthesizes bug-free yet optimal lock-based synchronization. To achieve this, Grail builds an analysis model of the buggy code that is both contextual, distinguishing different aliasing contexts to ensure efficiency, and global, accounting for the entire synchronization behavior of the involved threads to ensure correctness. Evaluation of Grail on 12 bugs from popular codebases confirms its practical advantages, especially compared with existing techniques: Grail patches are, in general, >=40% more efficient than the patches produced by other techniques, and incur only 2% overhead.","PeriodicalId":250543,"journal":{"name":"Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128454828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
So you have developed a new software productivity tool, written an FSE or an ICSE paper about it, and are justifiably proud of your work. If you work for a company, your (curmudgeonly) manager now wants to see its “impact” on the business. This is the part where you have to convince someone else to use your shiny new tool in their day-to-day work, or ship it as a product. But you soon realize that getting traction with developers or product managers is significantly harder than the research itself. Sound familiar? In the past several years, I have been involved in taking a variety of software productivity tools to various constituencies within a company: internal users, product teams, and service delivery teams. In this talk, I will share my experiences in interacting with these constituencies; sometimes successful experiences, but at other times not so successful ones. I will focus broadly on tools in two areas: bug finding and test automation. I will make some observations on when tech transfer works and when it stumbles.
{"title":"Are you getting traction? tales from the tech transfer trenches (invited talk)","authors":"S. Chandra","doi":"10.1145/2635868.2684430","DOIUrl":"https://doi.org/10.1145/2635868.2684430","url":null,"abstract":"So you have developed a new software productivity tool, written an FSE or an ICSE paper about it, and are justifiably proud of your work. If you work for a company, your (curmudgeonly) manager now wants to see its “impact” on the business. This is the part where you have to convince someone else to use your shiny new tool in their day-to-day work, or ship it as a product. But you soon realize that getting traction with developers or product managers is significantly harder than the research itself. Sounds familiar? In the past several years, I have been involved in taking a variety of software productivity tools to various constituencies within a company: internal users, product teams, and service delivery teams. In this talk, I will share my experiences in interacting with these constituencies; sometimes successful experiences, but at other times not so successful ones. I will focus broadly on tools in two areas: bug finding and test automation. I will make some observations on when tech transfer works and when it stumbles.","PeriodicalId":250543,"journal":{"name":"Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126326811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R. Joiner, T. Reps, S. Jha, Mohan Dhawan, V. Ganapathy
Policy weaving is a program-transformation technique that rewrites a program so that it is guaranteed to be safe with respect to a stateful security policy. It utilizes (i) static analysis to identify points in the program at which policy violations might occur, and (ii) runtime checks inserted at such points to monitor policy state and prevent violations from occurring. The promise of policy weaving stems from the possibility of blending the best aspects of static and dynamic analysis components. Therefore, a successful instantiation of policy weaving requires a careful balance and coordination between the two. In this paper, we examine the strategy of using a combination of transactional introspection and statement indirection to implement runtime enforcement in a policy-weaving system. Transactional introspection allows the state resulting from the execution of a statement to be examined and, if the policy would be violated, suppressed. Statement indirection serves as a light-weight runtime analysis that can recognize and instrument dynamically generated code that is not available to the static analysis. These techniques can be implemented via static rewriting so that all possible program executions are protected against policy violations. We describe our implementation of transactional introspection and statement indirection for policy weaving, and report experimental results that show the viability of the approach in the context of real-world JavaScript programs executing in a browser.
{"title":"Efficient runtime-enforcement techniques for policy weaving","authors":"R. Joiner, T. Reps, S. Jha, Mohan Dhawan, V. Ganapathy","doi":"10.1145/2635868.2635907","DOIUrl":"https://doi.org/10.1145/2635868.2635907","url":null,"abstract":"Policy weaving is a program-transformation technique that rewrites a program so that it is guaranteed to be safe with respect to a stateful security policy. It utilizes (i) static analysis to identify points in the program at which policy violations might occur, and (ii) runtime checks inserted at such points to monitor policy state and prevent violations from occurring. The promise of policy weaving stems from the possibility of blending the best aspects of static and dynamic analysis components. Therefore, a successful instantiation of policy weaving requires a careful balance and coordination between the two. In this paper, we examine the strategy of using a combination of transactional introspection and statement indirection to implement runtime enforcement in a policy-weaving system. Transactional introspection allows the state resulting from the execution of a statement to be examined and, if the policy would be violated, suppressed. Statement indirection serves as a light-weight runtime analysis that can recognize and instrument dynamically generated code that is not available to the static analysis. These techniques can be implemented via static rewriting so that all possible program executions are protected against policy violations. We describe our implementation of transactional introspection and statement indirection for policy weaving, and report experimental results that show the viability of the approach in the context of real-world JavaScript programs executing in a browser.","PeriodicalId":250543,"journal":{"name":"Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering","volume":"241-244 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121583045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we present recent developments in reverse engineering variability for block-based data-flow models.
{"title":"Managing lots of models: the FaMine approach","authors":"David Wille","doi":"10.1145/2635868.2661681","DOIUrl":"https://doi.org/10.1145/2635868.2661681","url":null,"abstract":"In this paper we present recent developments in reverse engineering variability for block-based data-flow models.","PeriodicalId":250543,"journal":{"name":"Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127093063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ConceptCloud is an interactive browser for SVN and Git repositories. Its main novelty is the combination of an intuitive tag cloud interface with an underlying concept lattice that provides a formal structure for navigation. This combination allows users to explore repositories serendipitously, without predefined search goals and along different navigation paths. ConceptCloud can derive different lattice types for a repository and supports concurrent navigation in multiple linked tag clouds that can each be individually customized, which allows multi-faceted repository explorations.
{"title":"ConceptCloud: a tagcloud browser for software archives","authors":"Gillian J. Greene, B. Fischer","doi":"10.1145/2635868.2661676","DOIUrl":"https://doi.org/10.1145/2635868.2661676","url":null,"abstract":"ConceptCloud is an interactive browser for SVN and Git repositories. Its main novelty is the combination of an intuitive tag cloud interface with an underlying concept lattice that provides a formal structure for navigation. This combination allows users to explore repositories serendipitously, without predefined search goals and along different navigation paths. ConceptCloud can derive different lattice types for a repository and supports concurrent navigation in multiple linked tag clouds that can each be individually customized, which allows multi-faceted repository explorations.","PeriodicalId":250543,"journal":{"name":"Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering","volume":"81 7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128148331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Filieri, C. Pasareanu, W. Visser, J. Geldenhuys
Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space, and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis, and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
{"title":"Statistical symbolic execution with informed sampling","authors":"A. Filieri, C. Pasareanu, W. Visser, J. Geldenhuys","doi":"10.1145/2635868.2635899","DOIUrl":"https://doi.org/10.1145/2635868.2635899","url":null,"abstract":"Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that the informed sampling obtains more precise results and converges faster than a purely statistical analysis and may also be more efficient than an exact symbolic analysis. When the latter does not terminate symbolic execution with informed sampling can give meaningful results under the same time and memory limits.","PeriodicalId":250543,"journal":{"name":"Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114450401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An evidence-based software engineer is one who is able to: 1) Formulate a question, related to a decision or judgment, so that it can be answered by the use of evidence; 2) Collect, critically evaluate, and summarise relevant evidence from research, practice, and local studies; 3) Apply the evidence, integrated with knowledge about the local context, to guide decisions and judgments. The keynote reflects on what it means in practice to be evidence-based in software engineering contexts, where the number of different contexts is high and the research-based evidence sparse, and why there is a need for more evidence-based practices. We summarise our experience from ten years of Evidence-Based Software Engineering in the context of university courses, training of software engineers, and systematic literature reviews of software engineering research. While there are challenges in training people in evidence-based practice, our experience suggests that it is feasible and that the training can make an important difference to the quality of software engineering judgments and decisions. Based on our experience we suggest changes in how evidence-based software engineering should be presented and taught, and how we should ease the transfer of research results into evidence-based practices.
{"title":"Ten years with evidence-based software engineering. What is it? Has it had any impact? What's next?","authors":"M. Jørgensen","doi":"10.1145/2635868.2684428","DOIUrl":"https://doi.org/10.1145/2635868.2684428","url":null,"abstract":"An evidence-based software engineer is one who is able to: 1) Formulate a question, related to a decision or judgment, so that it can be answered by the use of evidence, 2) Collect, critically evaluate and summarise relevant evidence from research, practice and local studies, 3) Apply the evidence, integrated with knowledge about the local context, to guide decisions and judgments. The keynote reflects on what it in practise means to be evidence-based in software engineering contexts, where the number of different contexts is high and the research-based evidence sparse, and why there is a need for more evidence-based practises. We summarise our experience from ten years of Evidence-Based Software Engineering in the context of university courses, training of software engineers and systematic literature reviews of software engineering research. While there are challenges in training people in evidence-based practise, our experience suggest that it is feasible and that the training can make an important difference in terms of quality of software engineering judgment and decisions. Based on our experience we suggest changes in how evidence-based software engineering should be presented and taught, and how we should ease the transfer of research results into evidence-based practises.","PeriodicalId":250543,"journal":{"name":"Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117212346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}