2nd International workshop on realizing artificial intelligence synergies in software engineering (RAISE 2013)
Pub Date: 2013-05-18. DOI: 10.1109/ICSE.2013.6606778
R. Harrison, S. Greenspan, T. Menzies, M. Mernik, P. Henriques, Daniela Carneiro da Cruz, Daniel Rodríguez
The RAISE'13 workshop brought together researchers from the AI and software engineering disciplines to build on the interdisciplinary synergies which exist and to stimulate research across these disciplines. The first part of the workshop was devoted to current results and consisted of presentations and discussion of the state of the art. This was followed by a second part which looked over the horizon to seek future directions, inspired by a number of selected vision statements concerning the AI-and-SE crossover. The goal of the RAISE workshop was to strengthen the AI-and-SE community and also develop a roadmap of strategic research directions for AI and software engineering.
{"title":"2nd International workshop on realizing artificial intelligence synergies in software engineering (RAISE 2013)","authors":"R. Harrison, S. Greenspan, T. Menzies, M. Mernik, P. Henriques, Daniela Carneiro da Cruz, Daniel Rodríguez","doi":"10.1109/ICSE.2013.6606778","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606778","url":null,"abstract":"The RAISE'13 workshop brought together researchers from the AI and software engineering disciplines to build on the interdisciplinary synergies which exist and to stimulate research across these disciplines. The first part of the workshop was devoted to current results and consisted of presentations and discussion of the state of the art. This was followed by a second part which looked over the horizon to seek future directions, inspired by a number of selected vision statements concerning the AI-and-SE crossover. The goal of the RAISE workshop was to strengthen the AI-and-SE community and also develop a roadmap of strategic research directions for AI and software engineering.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128676226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A large scale Linux-Kernel based benchmark for feature location research
Pub Date: 2013-05-18. DOI: 10.1109/ICSE.2013.6606705
Zhenchang Xing, Yinxing Xue, S. Jarzabek
Many software maintenance tasks require locating code units that implement a certain feature (termed feature location). Feature location has been an active research area for more than two decades. However, there is a lack of publicly available, large-scale benchmarks for evaluating and comparing feature location approaches. In this paper, we present a Linux-Kernel-based benchmark for feature location research. This benchmark is large-scale and extensible. By providing rich feature and program information and accurate ground-truth links between features and code units, it supports the evaluation of a wide range of feature location approaches. It allows researchers to gain deeper insights into existing approaches and how they can be improved. It also enables communication and collaboration among different researchers. (video: http://www.youtube.com/watch?v=_HihwRNeK3I).
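As a rough illustration of how such ground-truth feature-to-code links could be used to score a feature location approach, the sketch below computes per-feature precision and recall; the feature names, file paths, and result sets are invented for the example and do not reflect the benchmark's actual format.

```python
def precision_recall(retrieved, relevant):
    """Standard precision/recall of retrieved code units against ground truth."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Ground truth: feature -> code units that implement it (illustrative only).
ground_truth = {
    "CONFIG_SMP": {"kernel/smp.c", "arch/x86/kernel/smpboot.c"},
    "CONFIG_SWAP": {"mm/swapfile.c", "mm/swap.c"},
}

# Output of some feature location approach under evaluation (also invented).
located = {
    "CONFIG_SMP": {"kernel/smp.c", "kernel/fork.c"},
    "CONFIG_SWAP": {"mm/swapfile.c", "mm/swap.c", "mm/vmscan.c"},
}

for feature, truth in ground_truth.items():
    p, r = precision_recall(located.get(feature, set()), truth)
    print(f"{feature}: precision={p:.2f} recall={r:.2f}")
```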
{"title":"A large scale Linux-Kernel based benchmark for feature location research","authors":"Zhenchang Xing, Yinxing Xue, S. Jarzabek","doi":"10.1109/ICSE.2013.6606705","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606705","url":null,"abstract":"Many software maintenance tasks require locating code units that implement a certain feature (termed as feature location). Feature location has been an active research area for more than two decades. However, there is lack of publicly available, large scale benchmarks for e valuating and comparing feature location approaches. In this paper, we present a LinuxKernel based benchmark for feature location research. This benchmark is large scale and extensible. By providing rich feature and program information and accurate ground-truth links between features and code units, it supports the e valuation of a wide range of feature location approaches. It allows researchers to gain deeper insights into existing approaches and how they can be improved. It also enables communication and collaboration among different researchers. (video: http://www.youtube.com/watch?v=3D_HihwRNeK3I).","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117025412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Requirements modelling by synthesis of deontic input-output automata
Pub Date: 2013-05-18. DOI: 10.1109/ICSE.2013.6606605
Emmanuel Letier, W. Heaven
Requirements modelling helps software engineers understand a system's required behaviour and explore alternative system designs. It also generates a formal software specification that can be used for testing, verification, and debugging. However, elaborating such models requires expertise and significant human effort. The paper aims to reduce this effort by automating an essential activity of requirements modelling: deriving a machine specification that satisfies a set of goals in a domain. It introduces deontic input-output automata - an extension of input-output automata with permissions and obligations - and an automated synthesis technique over this formalism to support such derivation. This technique helps modellers identify early when a goal is not realizable in a domain and can guide the exploration of alternative models to make goals realizable. Synthesis techniques for input-output or interface automata are not adequate for requirements modelling.
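The paper's formal definitions are not reproduced here, but the loose sketch below suggests the flavor of an input-output automaton extended with permissions and obligations, together with a naive check for an obligated output that is never permitted from the same state. The data structure, the deontic labels, and the check itself are illustrative assumptions, not the authors' formalism or synthesis technique.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Transition:
    source: str
    action: str
    target: str
    kind: str     # "input" (environment-controlled) or "output" (machine-controlled)
    deontic: str  # "permitted" or "obligated"

@dataclass
class DeonticIOAutomaton:
    states: set = field(default_factory=set)
    transitions: list = field(default_factory=list)

    def unpermitted_obligations(self):
        """Flag obligated outputs that are never also permitted from the same
        state: one simple symptom that a goal may not be realizable there."""
        permitted = {(t.source, t.action) for t in self.transitions
                     if t.kind == "output" and t.deontic == "permitted"}
        return [t for t in self.transitions
                if t.kind == "output" and t.deontic == "obligated"
                and (t.source, t.action) not in permitted]

a = DeonticIOAutomaton(states={"s0", "s1"})
a.transitions.append(Transition("s0", "grantAccess", "s1", "output", "obligated"))
print(a.unpermitted_obligations())  # the obligation has no matching permission
```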
{"title":"Requirements modelling by synthesis of deontic input-output automata","authors":"Emmanuel Letier, W. Heaven","doi":"10.1109/ICSE.2013.6606605","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606605","url":null,"abstract":"Requirements modelling helps software engineers understand a system's required behaviour and explore alternative system designs. It also generates a formal software specification that can be used for testing, verification, and debugging. However, elaborating such models requires expertise and significant human effort. The paper aims at reducing this effort by automating an essential activity of requirements modelling which consists in deriving a machine specification satisfying a set of goals in a domain. It introduces deontic input-output automata - an extension of input-output automata with permissions and obligations - and an automated synthesis technique over this formalism to support such derivation. This technique helps modellers identifying early when a goal is not realizable in a domain and can guide the exploration of alternative models to make goals realizable. Synthesis techniques for input-output or interface automata are not adequate for requirements modelling.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115203802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning revised models for planning in adaptive systems
Pub Date: 2013-05-18. DOI: 10.1109/ICSE.2013.6606552
Daniel Sykes, Domenico Corapi, J. Magee, J. Kramer, A. Russo, Katsumi Inoue
Environment domain models are a key part of the information used by adaptive systems to determine their behaviour. These models can be incomplete or inaccurate. In addition, since adaptive systems generally operate in environments which are subject to change, these models are often also out of date. To update and correct these models, the system should observe how the environment responds to its actions, and compare these responses to those predicted by the model. In this paper, we use a probabilistic rule learning approach, NoMPRoL, to update models using feedback from the running system in the form of execution traces. NoMPRoL is a technique for nonmonotonic probabilistic rule learning based on a transformation of an inductive logic programming task into an equivalent abductive one. In essence, it exploits consistent observations by finding general rules which explain observations in terms of the conditions under which they occur. The updated models are then used to generate new behaviour with a greater chance of success in the actual environment encountered.
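NoMPRoL itself works by transforming an inductive logic programming task into an equivalent abductive one; the toy sketch below only illustrates the simpler surrounding idea of revising a probabilistic environment model from execution traces, by re-estimating how often an action achieves its expected effect under a given condition. The conditions, actions, and traces are invented.

```python
from collections import Counter

# Each trace entry: (condition observed, action taken, whether the expected effect occurred).
traces = [
    ("door_unlocked", "push_door", True),
    ("door_unlocked", "push_door", True),
    ("door_locked",   "push_door", False),
    ("door_locked",   "push_door", False),
    ("door_locked",   "push_door", True),   # noisy observation
]

attempts, successes = Counter(), Counter()
for condition, action, succeeded in traces:
    attempts[(condition, action)] += 1
    successes[(condition, action)] += int(succeeded)

# Revised model: P(effect | condition, action), usable by a planner.
model = {key: successes[key] / attempts[key] for key in attempts}
for (condition, action), p in sorted(model.items()):
    print(f"P(effect | {condition}, {action}) = {p:.2f}")
```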
{"title":"Learning revised models for planning in adaptive systems","authors":"Daniel Sykes, Domenico Corapi, J. Magee, J. Kramer, A. Russo, Katsumi Inoue","doi":"10.1109/ICSE.2013.6606552","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606552","url":null,"abstract":"Environment domain models are a key part of the information used by adaptive systems to determine their behaviour. These models can be incomplete or inaccurate. In addition, since adaptive systems generally operate in environments which are subject to change, these models are often also out of date. To update and correct these models, the system should observe how the environment responds to its actions, and compare these responses to those predicted by the model. In this paper, we use a probabilistic rule learning approach, NoMPRoL, to update models using feedback from the running system in the form of execution traces. NoMPRoL is a technique for nonmonotonic probabilistic rule learning based on a transformation of an inductive logic programming task into an equivalent abductive one. In essence, it exploits consistent observations by finding general rules which explain observations in terms of the conditions under which they occur. The updated models are then used to generate new behaviour with a greater chance of success in the actual environment encountered.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"33 11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125711161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Decision theoretic requirements prioritization: A two-step approach for sliding towards value realization
Pub Date: 2013-05-18. DOI: 10.1109/ICSE.2013.6606746
Nupul Kukreja
Budget and schedule constraints limit the number of requirements that can be worked on for a software system, making it necessary to select the most valuable requirements for implementation. However, selecting from a large number of requirements is a decision problem that requires negotiating with multiple stakeholders and satisficing their value propositions. In this paper I present a two-step value-based requirements prioritization approach, based on TOPSIS (a decision analysis framework), that tightly integrates decision theory with the process of requirements prioritization. In this two-step approach the software system is initially decomposed into high-level Minimal Marketable Features (MMFs), which the business stakeholders prioritize against business goals. Each MMF is further decomposed into low-level requirements/features that are primarily prioritized by the technical stakeholders. The priorities of the low-level requirements are influenced by the MMFs they belong to. This approach has been integrated into Winbook, a social-networking-influenced collaborative requirements management framework, and deployed for use by 10 real-client project teams in the Software Engineering project course at the University of Southern California in Fall 2012. This model allowed the clients and project teams to effectively gauge the importance of each MMF and low-level requirement, perform various sensitivity analyses, and make value-informed decisions when selecting requirements for implementation.
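For readers unfamiliar with TOPSIS, the sketch below shows the core calculation the approach builds on: alternatives (here, hypothetical MMFs) are ranked by their closeness to an ideal solution across weighted criteria. The criteria, weights, and scores are made up, and the paper's actual two-step model and Winbook integration are richer than this.

```python
import math

alternatives = ["MMF: reporting", "MMF: user accounts", "MMF: data import"]
criteria = ["business value", "risk reduction", "cost"]
weights = [0.5, 0.3, 0.2]
benefit = [True, True, False]   # cost is a "lower is better" criterion
scores = [
    [8.0, 5.0, 3.0],
    [6.0, 7.0, 5.0],
    [9.0, 4.0, 8.0],
]

# 1. Vector-normalize each criterion column and apply the weights.
norms = [math.sqrt(sum(row[j] ** 2 for row in scores)) for j in range(len(criteria))]
weighted = [[weights[j] * row[j] / norms[j] for j in range(len(criteria))] for row in scores]

# 2. Ideal and anti-ideal solutions per criterion.
ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*weighted))]
anti  = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*weighted))]

# 3. Closeness coefficient: distance to anti-ideal / (distance to ideal + distance to anti-ideal).
def dist(row, ref):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))

closeness = [dist(row, anti) / (dist(row, ideal) + dist(row, anti)) for row in weighted]
for name, c in sorted(zip(alternatives, closeness), key=lambda x: -x[1]):
    print(f"{name}: {c:.3f}")
```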
{"title":"Decision theoretic requirements prioritization A two-step approach for sliding towards value realization","authors":"Nupul Kukreja","doi":"10.1109/ICSE.2013.6606746","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606746","url":null,"abstract":"Budget and schedule constraints limit the number of requirements that can be worked on for a software system and is thus necessary to select the most valuable requirements for implementation. However, selecting from a large number of requirements is a decision problem that requires negotiating with multiple stakeholders and satisficing their value propositions. In this paper I present a two-step value-based requirements prioritization approach based on TOPSIS, a decision analysis framework that tightly integrates decision theory with the process of requirements prioritization. In this two-step approach the software system is initially decomposed into high-level Minimal Marketable Features (MMFs) which the business stakeholders prioritize against business goals. Each individual MMF is further decomposed into low-level requirements/features that are primarily prioritized by the technical stakeholders. The priorities of the low-level requirements are influenced by the MMFs they belong to. This approach has been integrated into Winbook, a social-networking influenced collaborative requirements management framework and deployed for use by 10 real-client project teams for the Software Engineering project course at the University of Southern California in Fall 2012. This model allowed the clients and project teams to effectively gauge the importance of each MMF and low-level requirement and perform various sensitivity analyses and take value-informed decisions when selecting requirements for implementation.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128180357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toddler: Detecting performance problems via similar memory-access patterns
Pub Date: 2013-05-18. DOI: 10.1109/ICSE.2013.6606602
Adrian Nistor, Linhai Song, D. Marinov, Shan Lu
Performance bugs are programming errors that create significant performance degradation. While developers often use automated oracles for detecting functional bugs, detecting performance bugs usually requires time-consuming, manual analysis of execution profiles. The human effort needed for performance analysis limits the number of performance tests analyzed and enables performance bugs to easily escape to production. Unfortunately, while profilers can successfully localize slow-executing code, they cannot be effectively used as automated oracles. This paper presents Toddler, a novel automated oracle for performance bugs, which enables testing for performance bugs to use the well-established and automated process of testing for functional bugs. Toddler reports code loops whose computation has repetitive and partially similar memory-access patterns across loop iterations. Such repetitive work is likely unnecessary and can be done faster. We implement Toddler for Java and evaluate it on 9 popular Java codebases. Our experiments with 11 previously known, real-world performance bugs show that Toddler finds these bugs with higher accuracy than the standard Java profiler. Using Toddler, we also found 42 new bugs in six Java projects: Ant, Google Core Libraries, JUnit, Apache Collections, JDK, and JFreeChart. Based on our bug reports, developers have so far fixed 10 bugs and confirmed 6 more as real bugs.
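Toddler instruments Java programs and analyzes their memory reads; the toy check below is only meant to convey the stated intuition that a loop is suspicious when consecutive iterations perform largely the same reads. The trace format and the similarity threshold are invented for the example and are not Toddler's actual analysis.

```python
def similarity(reads_a, reads_b):
    """Fraction of aligned positions where two iterations read the same value."""
    if not reads_a or not reads_b:
        return 0.0
    common = sum(a == b for a, b in zip(reads_a, reads_b))
    return common / max(len(reads_a), len(reads_b))

def looks_like_performance_bug(per_iteration_reads, threshold=0.8):
    """Flag the loop if consecutive iterations mostly repeat the same reads."""
    pairs = zip(per_iteration_reads, per_iteration_reads[1:])
    sims = [similarity(a, b) for a, b in pairs]
    return bool(sims) and sum(sims) / len(sims) >= threshold

# e.g. a nested scan that re-reads the same list contents on every outer iteration
trace = [[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5, 6]]
print(looks_like_performance_bug(trace))  # True
```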
{"title":"Toddler: Detecting performance problems via similar memory-access patterns","authors":"Adrian Nistor, Linhai Song, D. Marinov, Shan Lu","doi":"10.1109/ICSE.2013.6606602","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606602","url":null,"abstract":"Performance bugs are programming errors that create significant performance degradation. While developers often use automated oracles for detecting functional bugs, detecting performance bugs usually requires time-consuming, manual analysis of execution profiles. The human effort for performance analysis limits the number of performance tests analyzed and enables performance bugs to easily escape to production. Unfortunately, while profilers can successfully localize slow executing code, profilers cannot be effectively used as automated oracles. This paper presents Toddler, a novel automated oracle for performance bugs, which enables testing for performance bugs to use the well established and automated process of testing for functional bugs. Toddler reports code loops whose computation has repetitive and partially similar memory-access patterns across loop iterations. Such repetitive work is likely unnecessary and can be done faster. We implement Toddler for Java and evaluate it on 9 popular Java codebases. Our experiments with 11 previously known, real-world performance bugs show that Toddler finds these bugs with a higher accuracy than the standard Java profiler. Using Toddler, we also found 42 new bugs in six Java projects: Ant, Google Core Libraries, JUnit, Apache Collections, JDK, and JFreeChart. Based on our bug reports, developers so far fixed 10 bugs and confirmed 6 more as real bugs.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126386162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cassandra: Proactive conflict minimization through optimized task scheduling
Pub Date: 2013-05-18. DOI: 10.1109/ICSE.2013.6606619
B. Kasi, A. Sarma
Software conflicts arising because of conflicting changes are a regular occurrence and delay projects. The main precept of workspace awareness tools has been to identify potential conflicts early, while changes are still small and easier to resolve. However, in this approach conflicts still occur and require developer time and effort to resolve. We present a novel conflict minimization technique that proactively identifies potential conflicts, encodes them as constraints, and solves the constraint space to recommend a set of conflict-minimal development paths for the team. Here we present a study of four open source projects to characterize the distribution of conflicts and their resolution efforts. We then explain our conflict minimization technique and the design and implementation of this technique in our prototype, Cassandra. We show that Cassandra would have successfully avoided a majority of conflicts in the four open source test subjects. We demonstrate the efficiency of our approach by applying the technique to a simulated set of scenarios with higher than normal incidence of conflicts.
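The idea of encoding potential conflicts as constraints and searching for conflict-minimal development paths can be illustrated, very loosely, by the brute-force toy below, which orders two developers' tasks so that tasks touching the same files are not scheduled in the same time slot. The tasks, files, and scheduling model are invented and are not Cassandra's actual constraint encoding.

```python
from itertools import permutations

dev_tasks = {
    "alice": {"T1": {"Parser.java"}, "T2": {"Lexer.java"}},
    "bob":   {"T3": {"Parser.java"}, "T4": {"Printer.java"}},
}

def conflicts(schedule):
    """Count slots in which two developers touch a common file."""
    count = 0
    for slot in zip(*schedule.values()):
        files = [dev_tasks[dev][task] for dev, task in zip(schedule, slot)]
        for i in range(len(files)):
            for j in range(i + 1, len(files)):
                count += bool(files[i] & files[j])
    return count

# Brute-force search over task orderings for a conflict-minimal schedule.
best = None
for alice_order in permutations(dev_tasks["alice"]):
    for bob_order in permutations(dev_tasks["bob"]):
        schedule = {"alice": alice_order, "bob": bob_order}
        if best is None or conflicts(schedule) < conflicts(best):
            best = schedule
print(best, "conflicts:", conflicts(best))
```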
{"title":"Cassandra: Proactive conflict minimization through optimized task scheduling","authors":"B. Kasi, A. Sarma","doi":"10.1109/ICSE.2013.6606619","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606619","url":null,"abstract":"Software conflicts arising because of conflicting changes are a regular occurrence and delay projects. The main precept of workspace awareness tools has been to identify potential conflicts early, while changes are still small and easier to resolve. However, in this approach conflicts still occur and require developer time and effort to resolve. We present a novel conflict minimization technique that proactively identifies potential conflicts, encodes them as constraints, and solves the constraint space to recommend a set of conflict-minimal development paths for the team. Here we present a study of four open source projects to characterize the distribution of conflicts and their resolution efforts. We then explain our conflict minimization technique and the design and implementation of this technique in our prototype, Cassandra. We show that Cassandra would have successfully avoided a majority of conflicts in the four open source test subjects. We demonstrate the efficiency of our approach by applying the technique to a simulated set of scenarios with higher than normal incidence of conflicts.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126517752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating systematic exploration, analysis, and maintenance in software development
Pub Date: 2013-05-18. DOI: 10.5555/2486788.2487013
Kivanç Muslu
Modern integrated development environments (IDEs) support one live codebase at a given moment, which imposes limitations on software development. For example, with only one codebase, the developer must pause development while running tests or a static analysis, as any edit could invalidate the ongoing computation. If the IDE supported a copy of the developer's codebase, such analyses could run on that copy, in parallel with the development process. In this paper, we propose techniques and tools that integrate support for multiple live codebases into the software development process. Our hypothesis is that IDE support for multiple live codebases can provide a richer development process and aid developers.
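A minimal sketch of the underlying idea, not of the proposed IDE integration: snapshot the working copy and run a long analysis (here, a test suite) on the snapshot in the background, so that edits to the live codebase cannot invalidate the run. The workspace path and test command are placeholders.

```python
import shutil, subprocess, tempfile, threading

def analyze_snapshot(workspace="./my-project", command=("python", "-m", "pytest")):
    # Freeze the current state of the (placeholder) workspace into a temporary copy.
    snapshot = tempfile.mkdtemp(prefix="ide-snapshot-")
    shutil.copytree(workspace, snapshot, dirs_exist_ok=True)
    # Run the analysis against the frozen copy, not the live codebase.
    result = subprocess.run(command, cwd=snapshot, capture_output=True, text=True)
    print("analysis on snapshot finished with exit code", result.returncode)

analysis = threading.Thread(target=analyze_snapshot)
analysis.start()
# ...the developer keeps editing the live workspace while the snapshot is analyzed...
analysis.join()
```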
{"title":"Integrating Systematic exploration, analysis, and maintenance in software development","authors":"Kivanç Muslu","doi":"10.5555/2486788.2487013","DOIUrl":"https://doi.org/10.5555/2486788.2487013","url":null,"abstract":"Modern integrated development environments (IDEs) support one live codebase at a given moment, which imposes limitations to software development. For example, with only one codebase, the developer must pause development while running tests, or a static analysis, as any edit could invalidate the ongoing computation. Were the IDEs supported a copy of developer's codebase, the analyses could have run on this copy, in parallel with the development process. In this paper, we propose techniques and tools that integrate multiple live codebases support to the software development process. Our hypothesis is that IDE support for multiple live codebases can provide a richer development process and aid developers.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123753519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Query quality prediction and reformulation for source code search: The Refoqus tool
Pub Date: 2013-05-18. DOI: 10.1109/ICSE.2013.6606704
S. Haiduc, G. D. Rosa, G. Bavota, R. Oliveto, A. D. Lucia, Andrian Marcus
Developers search source code frequently during their daily tasks, to find pieces of code to reuse, to find where to implement changes, etc. Code search based on text retrieval (TR) techniques has been widely used in the software engineering community during the past decade. The accuracy of TR-based search results depends largely on the quality of the query used. We introduce Refoqus, an Eclipse plugin that automatically assesses the quality of a text retrieval query and, when needed, proposes reformulations for it in order to improve the results of TR-based code search. A video of Refoqus can be found online at http://www.youtube.com/watch?v=UQlWGiauyk4.
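Refoqus's actual quality measures and reformulation strategies are not reproduced here; the sketch below only conveys the flavor of query quality prediction for TR-based code search, scoring a query by the average inverse document frequency of its terms over a toy corpus and dropping overly common terms as a naive reformulation. The corpus and thresholds are invented.

```python
import math
from collections import Counter

corpus = [
    "parse xml configuration file",
    "write configuration file to disk",
    "parse command line arguments",
    "render chart from data file",
]
docs = [set(doc.split()) for doc in corpus]
df = Counter(term for doc in docs for term in doc)
idf = {term: math.log(len(docs) / df[term]) for term in df}

def query_quality(query):
    """Average IDF of the query terms: higher means a more discriminative query."""
    terms = [t for t in query.split() if t in idf]
    return sum(idf[t] for t in terms) / len(terms) if terms else 0.0

def reformulate(query, keep=2):
    """Naive reformulation: keep only the most discriminative terms."""
    return " ".join(sorted(query.split(), key=lambda t: -idf.get(t, 0.0))[:keep])

query = "parse configuration file"
print(f"quality={query_quality(query):.2f} reformulated='{reformulate(query)}'")
```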
{"title":"Query quality prediction and reformulation for source code search: The Refoqus tool","authors":"S. Haiduc, G. D. Rosa, G. Bavota, R. Oliveto, A. D. Lucia, Andrian Marcus","doi":"10.1109/ICSE.2013.6606704","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606704","url":null,"abstract":"Developers search source code frequently during their daily tasks, to find pieces of code to reuse, to find where to implement changes, etc. Code search based on text retrieval (TR) techniques has been widely used in the software engineering community during the past decade. The accuracy of the TR-based search results depends largely on the quality of the query used. We introduce Refoqus, an Eclipse plugin which is able to automatically detect the quality of a text retrieval query and to propose reformulations for it, when needed, in order to improve the results of TR-based code search. A video of Refoqus is found online at http://www.youtube.com/watch?v=UQlWGiauyk4.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122453406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An evaluation of interactive test-driven labs with WebIDE in CS0
Pub Date: 2013-05-18. DOI: 10.1109/ICSE.2013.6606659
David S. Janzen, John Clements, Michael C Hilton
WebIDE is a framework that enables instructors to develop and deliver online lab content with interactive feedback. The ability to create lock-step labs enables the instructor to guide students through learning experiences, demonstrating mastery as they proceed. Feedback is provided through automated evaluators that vary from simple regular expression evaluation to syntactic parsers to applications that compile and run programs and unit tests. This paper describes WebIDE and its use in a CS0 course that taught introductory Java and Android programming using a test-driven learning approach. We report results from a controlled experiment that compared the use of dynamic WebIDE labs with more traditional static programming labs. Despite weaker performance on pre-study assessments, students who used WebIDE performed two to twelve percent better on all assessments than the students who used traditional labs. In addition, WebIDE students were consistently more positive about their experience in CS0.
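The simplest kind of automated evaluator mentioned above, a regular-expression check over a student's submission, might look roughly like the sketch below; the pattern, class name, and feedback text are invented, and WebIDE's real evaluators range up to compiling and running programs and unit tests.

```python
import re

def evaluate_submission(source: str) -> str:
    """Return feedback for a (hypothetical) lab step requiring a specific Java class."""
    if not re.search(r"\bclass\s+HelloAndroid\b", source):
        return "Hint: your class must be named HelloAndroid."
    if not re.search(r"public\s+static\s+void\s+main\s*\(", source):
        return "Hint: add a public static void main(String[] args) method."
    return "Looks good, proceed to the next step."

student_code = "public class HelloAndroid { public static void main(String[] args) {} }"
print(evaluate_submission(student_code))
```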
{"title":"An evaluation of interactive test-driven labs with WebIDE in CS0","authors":"David S. Janzen, John Clements, Michael C Hilton","doi":"10.1109/ICSE.2013.6606659","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606659","url":null,"abstract":"WebIDE is a framework that enables instructors to develop and deliver online lab content with interactive feedback. The ability to create lock-step labs enables the instructor to guide students through learning experiences, demonstrating mastery as they proceed. Feedback is provided through automated evaluators that vary from simple regular expression evaluation to syntactic parsers to applications that compile and run programs and unit tests. This paper describes WebIDE and its use in a CS0 course that taught introductory Java and Android programming using a test-driven learning approach. We report results from a controlled experiment that compared the use of dynamic WebIDE labs with more traditional static programming labs. Despite weaker performance on pre-study assessments, students who used WebIDE performed two to twelve percent better on all assessments than the students who used traditional labs. In addition, WebIDE students were consistently more positive about their experience in CS0.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126283117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}