Automatic detection of performance deviations in the load testing of Large Scale Systems
Pub Date: 2013-05-18 | DOI: 10.1109/ICSE.2013.6606651
H. Malik, H. Hemmati, A. Hassan
Load testing is one of the means for evaluating the performance of Large Scale Systems (LSS). At the end of a load test, performance analysts must analyze thousands of performance counters from hundreds of machines under test. These performance counters are measures of run-time system properties such as CPU utilization, disk I/O, memory consumption, and network traffic. Analysts observe counters to determine whether the system is meeting its Service Level Agreements (SLAs). In this paper, we present and evaluate one supervised and three unsupervised approaches to help performance analysts 1) more effectively compare load tests in order to detect performance deviations which may lead to SLA violations, and 2) obtain a smaller, manageable set of important performance counters to assist in root-cause analysis of the detected deviations. Our case study is based on load test data obtained from both a large-scale industrial system and an open-source benchmark application. The case study shows that our wrapper-based supervised approach, which uses a search-based technique to find the best subset of performance counters and a logistic regression model for deviation prediction, can provide up to an 89% reduction in the set of performance counters while detecting performance deviations with few false positives (i.e., 95% average precision). The study also shows that the supervised approach is more stable and effective than the unsupervised approaches, but it has more overhead due to its semi-automated training phase.
{"title":"Automatic detection of performance deviations in the load testing of Large Scale Systems","authors":"H. Malik, H. Hemmati, A. Hassan","doi":"10.1109/ICSE.2013.6606651","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606651","url":null,"abstract":"Load testing is one of the means for evaluating the performance of Large Scale Systems (LSS). At the end of a load test, performance analysts must analyze thousands of performance counters from hundreds of machines under test. These performance counters are measures of run-time system properties such as CPU utilization, Disk I/O, memory consumption, and network traffic. Analysts observe counters to find out if the system is meeting its Service Level Agreements (SLAs). In this paper, we present and evaluate one supervised and three unsupervised approaches to help performance analysts to 1) more effectively compare load tests in order to detect performance deviations which may lead to SLA violations, and 2) to provide them with a smaller and manageable set of important performance counters to assist in root-cause analysis of the detected deviations. Our case study is based on load test data obtained from both a large scale industrial system and an open source benchmark application. The case study shows, that our wrapper-based supervised approach, which uses a search-based technique to find the best subset of performance counters and a logistic regression model for deviation prediction, can provide up to 89% reduction in the set of performance counters while detecting performance deviations with few false positives (i.e., 95% average precision). The study also shows that the supervised approach is more stable and effective than the unsupervised approaches but it has more overhead due to its semi-automated training phase.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128735785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliability analysis in Symbolic PathFinder
Pub Date: 2013-05-18 | DOI: 10.1109/ICSE.2013.6606608
A. Filieri, C. Pasareanu, W. Visser
Software reliability analysis tackles the problem of predicting the failure probability of software. Most current approaches base reliability analysis on architectural abstractions that are useful at early stages of design but not directly applicable to source code. In this paper we propose a general methodology that exploits symbolic execution of source code to extract failure and success paths, which are then used for probabilistic reliability assessment against relevant usage scenarios. Under the assumption of finite and countable input domains, we provide an efficient implementation based on Symbolic PathFinder that supports the analysis of sequential and parallel programs, even with structured data types, at the desired level of confidence. The tool has been validated on NASA prototypes and other test cases, showing a promising scope of applicability.
{"title":"Reliability analysis in Symbolic PathFinder","authors":"A. Filieri, C. Pasareanu, W. Visser","doi":"10.1109/ICSE.2013.6606608","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606608","url":null,"abstract":"Software reliability analysis tackles the problem of predicting the failure probability of software. Most of the current approaches base reliability analysis on architectural abstractions useful at early stages of design, but not directly applicable to source code. In this paper we propose a general methodology that exploit symbolic execution of source code for extracting failure and success paths to be used for probabilistic reliability assessment against relevant usage scenarios. Under the assumption of finite and countable input domains, we provide an efficient implementation based on Symbolic PathFinder that supports the analysis of sequential and parallel programs, even with structured data types, at the desired level of confidence. The tool has been validated on both NASA prototypes and other test cases showing a promising applicability scope.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133907782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LASE: An example-based program transformation tool for locating and applying systematic edits
Pub Date: 2013-05-18 | DOI: 10.5555/2486788.2486995
John Jacobellis, Na Meng, Miryung Kim
Adding features and fixing bugs in software often require systematic edits: similar, but not identical, changes to many code locations. Finding all edit locations and editing them correctly is tedious and error-prone. In this paper, we demonstrate an Eclipse plug-in called LASE that (1) creates context-aware edit scripts from two or more examples, and uses these scripts to (2) automatically identify edit locations and (3) transform the code. In LASE, users can view the syntactic edit operations and corresponding context for each input example. They can also choose a different subset of the examples to adjust the abstraction level of the inferred edits. When LASE locates target methods matching the inferred edit context and suggests customized edits, users can review and correct LASE's edit suggestions. These features can reduce developers' burden in repetitively applying similar edits to different methods. The tool's video demonstration is available at https://www.youtube.com/watch?v=npDqMVP2e9Q.
{"title":"LASE: An example-based program transformation tool for locating and applying systematic edits","authors":"John Jacobellis, Na Meng, Miryung Kim","doi":"10.5555/2486788.2486995","DOIUrl":"https://doi.org/10.5555/2486788.2486995","url":null,"abstract":"Adding features and fixing bugs in software often require systematic edits which are similar, but not identical, changes to many code locations. Finding all edit locations and editing them correctly is tedious and error-prone. In this paper, we demonstrate an Eclipse plug-in called Lase that (1) creates context-aware edit scripts from two or more examples, and uses these scripts to (2) automatically identify edit locations and (3) transform the code. In Lase, users can view syntactic edit operations and corresponding context for each input example. They can also choose a different subset of the examples to adjust the abstraction level of inferred edits. When Lase locates target methods matching the inferred edit context and suggests customized edits, users can review and correct LASE's edit suggestion. These features can reduce developers' burden in repetitively applying similar edits to different methods. The tool's video demonstration is available at https://www.youtube.com/ watch?v=npDqMVP2e9Q.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130536545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predicting bug-fixing time: An empirical study of commercial software projects
Pub Date: 2013-05-18 | DOI: 10.5555/2486788.2486931
Hongyu Zhang, Liang Gong, Steven Versteeg
For a large and evolving software system, the project team may receive many bug reports over a long period of time. It is important to achieve a quantitative understanding of bug-fixing time. The ability to predict bug-fixing time can help a project team better estimate software maintenance effort and better manage software projects. In this paper, we perform an empirical study of bug-fixing time for three CA Technologies projects. We propose a Markov-based method for predicting the number of bugs that will be fixed in the future. For a given number of defects, we propose a method for estimating the total amount of time required to fix them, based on the empirical distribution of bug-fixing time derived from historical data. For a given bug report, we can also construct a classification model to predict a slow or quick fix (e.g., below or above a time threshold). We evaluate our methods using real maintenance data from the three CA Technologies projects. The results show that the proposed methods are effective.
{"title":"Predicting bug-fixing time: An empirical study of commercial software projects","authors":"Hongyu Zhang, Liang Gong, Steven Versteeg","doi":"10.5555/2486788.2486931","DOIUrl":"https://doi.org/10.5555/2486788.2486931","url":null,"abstract":"For a large and evolving software system, the project team could receive many bug reports over a long period of time. It is important to achieve a quantitative understanding of bug-fixing time. The ability to predict bug-fixing time can help a project team better estimate software maintenance efforts and better manage software projects. In this paper, we perform an empirical study of bug-fixing time for three CA Technologies projects. We propose a Markov-based method for predicting the number of bugs that will be fixed in future. For a given number of defects, we propose a method for estimating the total amount of time required to fix them based on the empirical distribution of bug-fixing time derived from historical data. For a given bug report, we can also construct a classification model to predict slow or quick fix (e.g., below or above a time threshold). We evaluate our methods using real maintenance data from three CA Technologies projects. The results show that the proposed methods are effective.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130578634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assisting developers of Big Data Analytics Applications when deploying on Hadoop clouds
Pub Date: 2013-05-18 | DOI: 10.1109/ICSE.2013.6606586
Weiyi Shang, Z. Jiang, H. Hemmati, Bram Adams, A. Hassan, Patrick Martin
Big data analytics is the process of examining large amounts of data (big data) in an effort to uncover hidden patterns or unknown correlations. Big Data Analytics Applications (BDA Apps) are a new type of software application that analyzes big data using massively parallel processing frameworks (e.g., Hadoop). Developers of such applications typically develop them using a small sample of data in a pseudo-cloud environment. Afterwards, they deploy the applications in a large-scale cloud environment with considerably more processing power and larger input data (reminiscent of the mainframe days). Working with BDA App developers in industry over the past three years, we noticed that the runtime analysis and debugging of such applications in the deployment phase cannot be easily addressed by traditional monitoring and debugging approaches. In this paper, as a first step in assisting developers of BDA Apps for cloud deployments, we propose a lightweight approach for uncovering differences between pseudo and large-scale cloud deployments. Our approach makes use of the readily available yet rarely used execution logs from these platforms. It abstracts the execution logs, recovers the execution sequences, and compares the sequences between the pseudo and cloud deployments. Through a case study on three representative Hadoop-based BDA Apps, we show that our approach can rapidly direct the attention of BDA App developers to the major differences between the two deployments. Knowledge of such differences is essential in verifying BDA Apps when analyzing big data in the cloud. Using injected deployment faults, we show that our approach not only significantly reduces the deployment verification effort, but also produces very few false positives when identifying deployment failures.
{"title":"Assisting developers of Big Data Analytics Applications when deploying on Hadoop clouds","authors":"Weiyi Shang, Z. Jiang, H. Hemmati, Bram Adams, A. Hassan, Patrick Martin","doi":"10.1109/ICSE.2013.6606586","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606586","url":null,"abstract":"Big data analytics is the process of examining large amounts of data (big data) in an effort to uncover hidden patterns or unknown correlations. Big Data Analytics Applications (BDA Apps) are a new type of software applications, which analyze big data using massive parallel processing frameworks (e.g., Hadoop). Developers of such applications typically develop them using a small sample of data in a pseudo-cloud environment. Afterwards, they deploy the applications in a large-scale cloud environment with considerably more processing power and larger input data (reminiscent of the mainframe days). Working with BDA App developers in industry over the past three years, we noticed that the runtime analysis and debugging of such applications in the deployment phase cannot be easily addressed by traditional monitoring and debugging approaches. In this paper, as a first step in assisting developers of BDA Apps for cloud deployments, we propose a lightweight approach for uncovering differences between pseudo and large-scale cloud deployments. Our approach makes use of the readily-available yet rarely used execution logs from these platforms. Our approach abstracts the execution logs, recovers the execution sequences, and compares the sequences between the pseudo and cloud deployments. Through a case study on three representative Hadoop-based BDA Apps, we show that our approach can rapidly direct the attention of BDA App developers to the major differences between the two deployments. Knowledge of such differences is essential in verifying BDA Apps when analyzing big data in the cloud. Using injected deployment faults, we show that our approach not only significantly reduces the deployment verification effort, but also provides very few false positives when identifying deployment failures.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133210137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward a software product line for affective-driven self-adaptive systems
Pub Date: 2013-05-18 | DOI: 10.1109/ICSE.2013.6606722
Javier Gonzalez-Sanchez
One expected characteristic of modern systems is self-adaptation: the capability of monitoring and reacting to changes in the environment. A particular case is affective-driven self-adaptation, which is about being aware of the user's affects (emotions) and driving self-adaptation in reaction to changes in those affects. Most previous work on self-adaptive systems deals with performance, resources, and error recovery as the variables that trigger a system reaction. Moreover, most effort on affect recognition has been put towards offline analysis of affect, and to date only a few applications exist that are able to infer a user's affect in real time and trigger self-adaptation mechanisms. In response to this deficit, this work proposes a software product line approach to jump-start the development of affect-driven self-adaptive systems by offering the definition of a domain-specific architecture, a set of components (organized as a framework), and guidelines to tailor those components. Case studies with systems for learning and gaming will confirm the capability of the software product line to provide the desired functionalities and qualities.
{"title":"Toward a software product line for affective-driven self-adaptive systems","authors":"Javier Gonzalez-Sanchez","doi":"10.1109/ICSE.2013.6606722","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606722","url":null,"abstract":"One expected characteristic in modern systems is self-adaptation, the capability of monitoring and reacting to changes into the environment. A particular case of self-adaptation is affective-driven self-adaptation. Affective-driven self-adaptation is about having consciousness of user's affects (emotions) and drive self-adaptation reacting to changes in those affects. Most of the previous work around self-adaptive systems deals with performance, resources, and error recovery as variables that trigger a system reaction. Moreover, most effort around affect recognition has been put towards offline analysis of affect, and to date only few applications exist that are able to infer user's affect in real-time and trigger self-adaptation mechanisms. In response to this deficit, this work proposes a software product line approach to jump-start the development of affect-driven self-adaptive systems by offering the definition of a domain-specific architecture, a set of components (organized as a framework), and guidelines to tailor those components. Study cases with systems for learning and gaming will confirm the capability of the software product line to provide desired functionalities and qualities.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122011015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
2nd International workshop on user evaluations for software engineering researchers (USER 2013)
Pub Date: 2013-05-18 | DOI: 10.1109/ICSE.2013.6606786
Andrew Begel, Caitlin Sadowski
We have met many software engineering researchers who would like to evaluate a tool or system they developed with real users, but do not know how to begin. In this second iteration of the USER workshop, attendees will collaboratively design, develop, and pilot plans for conducting user evaluations of their own tools and/or software engineering research projects. Attendees will gain practical experience with various user evaluation methods through scaffolded group exercises, panel discussions, and mentoring by a panel of user-focused software engineering researchers. Together, we will establish a community of like-minded researchers and developers to help one another improve our research and practice through user evaluation.
{"title":"2nd International workshop on user evaluations for software engineering researchers (USER 2013)","authors":"Andrew Begel, Caitlin Sadowski","doi":"10.1109/ICSE.2013.6606786","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606786","url":null,"abstract":"We have met many software engineering researchers who would like to evaluate a tool or system they developed with real users, but do not know how to begin. In this second iteration of the USER workshop, attendees will collaboratively design, develop, and pilot plans for conducting user evaluations of their own tools and/or software engineering research projects. Attendees will gain practical experience with various user evaluation methods through scaffolded group exercises, panel discussions, and mentoring by a panel of user-focused software engineering researchers. Together, we will establish a community of like-minded researchers and developers to help one another improve our research and practice through user evaluation.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122248475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computational alignment of goals and scenarios for complex systems
Pub Date: 2013-05-18 | DOI: 10.1109/ICSE.2013.6606690
Dalal Alrajeh, A. Russo, J. Lockerbie, N. Maiden, Alistair Mavin, Mark Novak
The purpose of requirements validation is to determine whether a large requirements set will lead to the achievement of system-related goals under different conditions, a task that needs automation if it is to be performed quickly and accurately. One reason for the current lack of software tools to undertake such validation is the absence of the computational mechanisms needed to associate scenario, system specification, and goal analysis tools. Therefore, in this paper, we report on first research experiments in developing these new capabilities, and demonstrate them with a non-trivial example associated with a Rolls Royce aircraft engine software component.
{"title":"Computational alignment of goals and scenarios for complex systems","authors":"Dalal Alrajeh, A. Russo, J. Lockerbie, N. Maiden, Alistair Mavin, Mark Novak","doi":"10.1109/ICSE.2013.6606690","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606690","url":null,"abstract":"The purpose of requirements validation is to determine whether a large requirements set will lead to the achievement of system-related goals under different conditions - a task that needs automation if it is to be performed quickly and accurately. One reason for the current lack of software tools to undertake such validation is the absence of the computational mechanisms needed to associate scenario, system specification and goal analysis tools. Therefore, in this paper, we report first research experiments in developing these new capabilities, and demonstrate them with a non-trivial example associated with a Rolls Royce aircraft engine software component.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"34 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124982333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Teaching and learning programming and software engineering via interactive gaming
Pub Date: 2013-05-18 | DOI: 10.1109/ICSE.2013.6606662
N. Tillmann, J. D. Halleux, Tao Xie, Sumit Gulwani, J. Bishop
Massive Open Online Courses (MOOCs) have recently gained high popularity among various universities and even in global societies. A critical factor for their success in teaching and learning effectiveness is assignment grading. Traditional ways of grading assignments are not scalable and do not give timely or interactive feedback to students. To address these issues, we present an interactive-gaming-based teaching and learning platform called Pex4Fun. Pex4Fun is a browser-based teaching and learning environment targeting teachers and students of introductory to advanced programming or software engineering courses. At the core of the platform is an automated grading engine based on symbolic execution. In Pex4Fun, teachers can create virtual classrooms, customize existing courses, and publish new learning material, including learning games. Pex4Fun was released to the public in June 2010, and since then the number of attempts made by users to solve games has exceeded one million. Our work on Pex4Fun illustrates that a sophisticated software engineering technique, automated test generation, can be successfully used to underpin automatic grading in an online programming system that scales to hundreds of thousands of users.
{"title":"Teaching and learning programming and software engineering via interactive gaming","authors":"N. Tillmann, J. D. Halleux, Tao Xie, Sumit Gulwani, J. Bishop","doi":"10.1109/ICSE.2013.6606662","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606662","url":null,"abstract":"Massive Open Online Courses (MOOCs) have recently gained high popularity among various universities and even in global societies. A critical factor for their success in teaching and learning effectiveness is assignment grading. Traditional ways of assignment grading are not scalable and do not give timely or interactive feedback to students. To address these issues, we present an interactive-gaming-based teaching and learning platform called Pex4Fun. Pex4Fun is a browser-based teaching and learning environment targeting teachers and students for introductory to advanced programming or software engineering courses. At the core of the platform is an automated grading engine based on symbolic execution. In Pex4Fun, teachers can create virtual classrooms, customize existing courses, and publish new learning material including learning games. Pex4Fun was released to the public in June 2010 and since then the number of attempts made by users to solve games has reached over one million. Our work on Pex4Fun illustrates that a sophisticated software engineering technique-automated test generation-can be successfully used to underpin automatic grading in an online programming system that can scale to hundreds of thousands of users.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130056463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NavClus: A graphical recommender for assisting code exploration
Pub Date: 2013-05-18 | DOI: 10.1109/ICSE.2013.6606706
Seonah Lee, Sungwon Kang, Matthew Staats
Recently, several graphical tools have been proposed to help developers avoid becoming disoriented when working with large software projects. These tools visualize the locations that developers have visited, allowing them to quickly recall where they have already been. However, developers also spend a significant amount of time exploring which source locations to visit, a task that is not currently supported by existing tools. In this work, we propose NavClus, a graphical code recommender that helps developers find relevant, unexplored source locations to visit. NavClus operates by mining a developer's daily interaction traces, comparing the developer's current working context with previously seen contexts, and then predicting relevant source locations to visit. These locations are displayed graphically, along with the already explored locations, in a class diagram. As a result, with NavClus developers can quickly find, reach, and focus on source locations relevant to their working contexts. A video demonstration is available at http://www.youtube.com/watch?v=rbrc5ERyWjQ.
{"title":"NavClus: A graphical recommender for assisting code exploration","authors":"Seonah Lee, Sungwon Kang, Matthew Staats","doi":"10.1109/ICSE.2013.6606706","DOIUrl":"https://doi.org/10.1109/ICSE.2013.6606706","url":null,"abstract":"Recently, several graphical tools have been proposed to help developers avoid becoming disoriented when working with large software projects. These tools visualize the locations that developers have visited, allowing them to quickly recall where they have already visited. However, developers also spend a significant amount of time exploring source locations to visit, which is a task that is not currently supported by existing tools. In this work, we propose a graphical code recommender NavClus, which helps developers find relevant, unexplored source locations to visit. NavClus operates by mining a developer's daily interaction traces, comparing the developer's current working context with previously seen contexts, and then predicting relevant source locations to visit. These locations are displayed graphically along with the already explored locations in a class diagram. As a result, with NavClus developers can quickly find, reach, and focus on source locations relevant to their working contexts. http://www.youtube.com/watch?v=rbrc5ERyWjQ.","PeriodicalId":322423,"journal":{"name":"2013 35th International Conference on Software Engineering (ICSE)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128860444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}