An IDE-based context-aware meta search engine
Pub Date: 2018-07-05 | DOI: 10.1109/WCRE.2013.6671324
M. M. Rahman, S. Yeasmin, C. Roy
Traditional web search forces developers to leave their working environment and look for solutions in a web browser, and it often does not consider the context of their programming problems. The context switching between the web browser and the working environment is time-consuming and distracting, and keyword-based traditional search often does not help much in problem solving. In this paper, we propose an Eclipse IDE-based web search solution that collects data from three web search APIs (Google, Yahoo, and Bing) and a programming Q&A site (StackOverflow). It then provides search results within the IDE, taking into account not only the content of the selected error but also the problem context, popularity, and search engine recommendation of the result links. Experiments with 25 runtime errors and exceptions show that the proposed approach outperforms keyword-based search approaches with a recommendation accuracy of 96%. We also validate the results with a user study involving five prospective participants, in which we obtain a result agreement of 64.28%. While the preliminary results are promising, the approach needs to be further validated with more errors and exceptions, followed by a user study with more participants, to establish itself as a complete IDE-based web search solution.
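The abstract does not give the ranking formula; as a purely illustrative sketch, the fragment below shows one way a meta search could combine the selected error's context with per-engine ranks and link popularity into a single score. The function names, weights, and data layout are assumptions for illustration, not the paper's actual method.

```python
from collections import defaultdict

def merge_results(engine_results, context_terms,
                  w_rank=0.5, w_context=0.3, w_popularity=0.2):
    """Combine result lists from several search engines into one ranked list.

    engine_results: {engine_name: [(url, title, popularity), ...]} in rank order
    context_terms:  lower-cased tokens taken from the selected error and its context
    Weights are illustrative; the paper's actual weighting is not given here.
    """
    scores = defaultdict(float)
    for engine, results in engine_results.items():
        for rank, (url, title, popularity) in enumerate(results, start=1):
            # Search engine recommendation: earlier rank -> higher score.
            scores[url] += w_rank * (1.0 / rank)
            # Context match: overlap between the result title and the error context.
            overlap = len(set(title.lower().split()) & set(context_terms))
            scores[url] += w_context * overlap
            # Popularity of the link (e.g. votes on a StackOverflow thread).
            scores[url] += w_popularity * popularity
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical usage with made-up results for a NullPointerException query.
results = {
    "google": [("https://stackoverflow.com/q/1", "NullPointerException in Java", 0.9)],
    "bing":   [("https://stackoverflow.com/q/1", "NullPointerException in Java", 0.9),
               ("https://example.com/blog", "Common Java errors", 0.2)],
}
print(merge_results(results, ["nullpointerexception", "java"]))
```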
{"title":"An IDE-based context-aware meta search engine","authors":"M. M. Rahman, S. Yeasmin, C. Roy","doi":"10.1109/WCRE.2013.6671324","DOIUrl":"https://doi.org/10.1109/WCRE.2013.6671324","url":null,"abstract":"Traditional web search forces the developers to leave their working environments and look for solutions in the web browsers. It often does not consider the context of their programming problems. The context-switching between the web browser and the working environment is time-consuming and distracting, and the keyword-based traditional search often does not help much in problem solving. In this paper, we propose an Eclipse IDE-based web search solution that collects the data from three web search APIs-Google, Yahoo, Bing and a programming Q & A site-StackOverflow. It then provides search results within IDE taking not only the content of the selected error into account but also the problem context, popularity and search engine recommendation of the result links. Experiments with 25 runtime errors and exceptions show that the proposed approach outperforms the keyword-based search approaches with a recommendation accuracy of 96%. We also validate the results with a user study involving five prospective participants where we get a result agreement of 64.28%. While the preliminary results are promising, the approach needs to be further validated with more errors and exceptions followed by a user study with more participants to establish itself as a complete IDE-based web search solution.","PeriodicalId":275092,"journal":{"name":"2013 20th Working Conference on Reverse Engineering (WCRE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134182951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding project dissemination on a social coding site
Pub Date: 2013-11-21 | DOI: 10.1109/WCRE.2013.6671288
Jing Jiang, Li Zhang, Lei Li
Popular social coding sites like GitHub and BitBucket are changing software development. Users follow interesting developers, listen to their activities, and discover new projects. Social relationships between users are utilized to disseminate projects, attract contributors, and increase popularity. A deep understanding of project dissemination on social coding sites can provide important insights into the characteristics of project diffusion and into how popularity can be improved. In this paper, we seek a deeper understanding of project dissemination on GitHub. We collect 2,665 projects and 272,874 events. Moreover, we crawl 747,107 developers and 2,234,845 social links to construct social graphs. We analyze the topological characteristics and reciprocity of these social graphs. We then study the speed and range of project dissemination and the role of social links. Our main observations are: (1) social relationships are not reciprocal; (2) popularity increases gradually over a long period; (3) projects spread to users far away from their creators; and (4) social links play a notable role in project dissemination. These results can be leveraged to increase popularity. Specifically, we suggest that project owners should (1) encourage experienced developers to choose promising new developers, follow them in return, and provide guidance; (2) promote projects over a long period; (3) advertise projects to a wide range of developers; and (4) fully utilize social relationships to advertise projects and attract contributors.
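As an illustration of the reciprocity measurement mentioned in the abstract, the sketch below counts what fraction of directed follow links are mutual in a follower graph; the data and its representation are hypothetical, not the paper's dataset or code.

```python
def reciprocity(edges):
    """Fraction of directed follow links (a, b) for which (b, a) also exists."""
    edge_set = set(edges)
    mutual = sum(1 for (a, b) in edge_set if (b, a) in edge_set)
    return mutual / len(edge_set) if edge_set else 0.0

# Hypothetical follower links: (follower, followed).
links = [("alice", "bob"), ("bob", "alice"), ("carol", "bob"), ("dave", "bob")]
print(reciprocity(links))  # 0.5 -> only the alice/bob pair is reciprocal
```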
{"title":"Understanding project dissemination on a social coding site","authors":"Jing Jiang, Li Zhang, Lei Li","doi":"10.1109/WCRE.2013.6671288","DOIUrl":"https://doi.org/10.1109/WCRE.2013.6671288","url":null,"abstract":"Popular social coding sites like GitHub and BitBucket are changing software development. Users follow some interesting developers, listen to their activities and find new projects. Social relationships between users are utilized to disseminate projects, attract contributors and increase the popularity. A deep understanding of project dissemination on social coding sites can provide important insights into questions of project diffusion characteristics and into the improvement of the popularity. In this paper, we seek a deeper understanding of project dissemination in GitHub. We collect 2,665 projects and 272,874 events. Moreover, we crawl 747,107 developers and 2,234,845 social links to construct social graphs. We analyze topological characteristics and reciprocity of social graphs. We then study the speed and the range of project dissemination, and the role of social links. Our main observations are: (1) Social relationships are not reciprocal. (2) The popularity increases gradually for a long time. (3) Projects spread to users far away from their creators. (4) Social links play a notable role of project dissemination. These results can be leveraged to increase the popularity. Specifically, we suggest that project owners should (1) encourage experienced developers to choose some promising new developers, follow them in return and provide guidance. (2) promote projects for a long time. (3) advertise projects to a wide range of developers. (4) fully utilize social relationships to advertise projects and attract contributors.","PeriodicalId":275092,"journal":{"name":"2013 20th Working Conference on Reverse Engineering (WCRE)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124946157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reconstructing program memory state from multi-gigabyte instruction traces to support interactive analysis
Pub Date: 2013-11-21 | DOI: 10.1109/WCRE.2013.6671279
B. Cleary, P. Gorman, Eric Verbeek, M. Storey, M. Salois, F. Painchaud
Exploitability analysis is the process of attempting to determine whether a vulnerability in a program is exploitable. Fuzzing is a popular method of finding such vulnerabilities, in which a program is subjected to millions of generated program inputs until it crashes. Each program crash indicates a potential vulnerability that needs to be prioritized according to its potential for exploitation. The highest-priority vulnerabilities need to be investigated by a security analyst by re-executing the program with the input that caused the crash, recording a trace of all executed assembly instructions, and then performing analysis on the resulting trace. Recreating the entire memory state of the program at the time of the crash, or at any other point in the trace, is very important for helping the analyst build an understanding of the conditions that led to the crash. Unfortunately, tracing even a small program can create multimillion-line trace files, from which reconstructing memory state is a computationally intensive process and virtually impossible to do manually. In this paper we present an analysis of the problem of memory state reconstruction from very large execution traces. We report on a novel approach for reconstructing the entire memory state of a program from an execution trace that allows near-realtime queries on the state of memory at any point in a program's execution trace. Finally, we benchmark our approach, showing storage and performance results in line with our theoretical calculations, and demonstrate memory state query response times of less than 200 ms for trace files of up to 60 million lines.
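The paper's own reconstruction algorithm is not spelled out in the abstract; the following sketch only illustrates the general idea of answering "what value did address X hold at trace line N" queries by indexing memory writes per address and binary-searching them. The in-memory layout and API are assumptions; a real multi-gigabyte trace would need checkpointing and on-disk storage.

```python
import bisect

class MemoryStateIndex:
    """Toy index over a trace of memory writes allowing point-in-time queries.

    Keeps, per address, the sorted trace positions of its writes, so a query
    is a binary search for the last write at or before the requested line.
    """
    def __init__(self):
        self.writes = {}  # address -> ([trace_line, ...], [value, ...])

    def record_write(self, trace_line, address, value):
        lines, values = self.writes.setdefault(address, ([], []))
        lines.append(trace_line)
        values.append(value)

    def value_at(self, address, trace_line):
        if address not in self.writes:
            return None
        lines, values = self.writes[address]
        i = bisect.bisect_right(lines, trace_line) - 1
        return values[i] if i >= 0 else None

# Hypothetical trace fragment: (line, address, value written).
idx = MemoryStateIndex()
for line, addr, val in [(10, 0x1000, 0xAA), (25, 0x1000, 0xBB), (40, 0x2000, 0x01)]:
    idx.record_write(line, addr, val)
print(hex(idx.value_at(0x1000, 30)))  # 0xbb - last write to 0x1000 before line 30
```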
{"title":"Reconstructing program memory state from multi-gigabyte instruction traces to support interactive analysis","authors":"B. Cleary, P. Gorman, Eric Verbeek, M. Storey, M. Salois, F. Painchaud","doi":"10.1109/WCRE.2013.6671279","DOIUrl":"https://doi.org/10.1109/WCRE.2013.6671279","url":null,"abstract":"Exploitability analysis is the process of attempting to determine if a vulnerability in a program is exploitable. Fuzzing is a popular method of finding such vulnerabilities, in which a program is subjected to millions of generated program inputs until it crashes. Each program crash indicates a potential vulnerability that needs to be prioritized according to its potential for exploitation. The highest priority vulnerabilities need to be investigated by a security analyst by re-executing the program with the input that caused the crash while recording a trace of all executed assembly instructions and then performing analysis on the resulting trace. Recreating the entire memory state of the program at the time of the crash, or at any other point in the trace, is very important for helping the analyst build an understanding of the conditions that led to the crash. Unfortunately, tracing even a small program can create multimillion line trace files from which reconstructing memory state is a computationally intensive process and virtually impossible to do manually. In this paper we present an analysis of the problem of memory state reconstruction from very large execution traces. We report on a novel approach for reconstructing the entire memory state of a program from an execution trace that allows near realtime queries on the state of memory at any point in a program's execution trace. Finally we benchmark our approach showing storage and performance results in line with our theoretical calculations and demonstrate memory state query response times of less than 200ms for trace files up to 60 million lines.","PeriodicalId":275092,"journal":{"name":"2013 20th Working Conference on Reverse Engineering (WCRE)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122220483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MemPick: A tool for data structure detection
Pub Date: 2013-11-21 | DOI: 10.1109/WCRE.2013.6671327
I. Haller, Asia Slowinska, H. Bos
Most current techniques for data structure reverse engineering are limited to low-level programming constructs, such as individual variables or structs. In practice, pointer networks connect some of these constructs to form higher-level entities like lists and trees. The lack of information about the pointer network limits our ability to perform forensics and reverse engineering efficiently. To fill this gap, we propose MemPick, a tool that detects and classifies high-level data structures used in stripped C/C++ binaries. By analyzing the evolution of the heap during program execution, it identifies and classifies the most commonly used data structures, such as singly- or doubly-linked lists, many types of trees (e.g., AVL, red-black, and B-trees), and graphs. We evaluated MemPick on a wide variety of popular libraries and real-world applications with great success.
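MemPick's actual classification rules are not described in this abstract; as a rough illustration of shape-based classification over a heap pointer graph, the sketch below labels a set of nodes as a doubly-linked list when the next/prev pointers of consecutive nodes are mutually consistent. The field names and heap representation are assumptions, not the tool's internals.

```python
def looks_like_doubly_linked_list(nodes):
    """nodes: {addr: {"next": addr or None, "prev": addr or None}}.

    Returns True if following 'next' from the single head visits every node
    exactly once and each 'next' link is mirrored by the successor's 'prev'.
    """
    heads = [a for a, n in nodes.items() if n["prev"] is None]
    if len(heads) != 1:
        return False
    seen, cur = set(), heads[0]
    while cur is not None:
        if cur in seen or cur not in nodes:
            return False           # cycle or dangling pointer
        seen.add(cur)
        nxt = nodes[cur]["next"]
        if nxt is not None and nodes.get(nxt, {}).get("prev") != cur:
            return False           # next/prev links disagree
        cur = nxt
    return seen == set(nodes)

# Hypothetical heap snapshot of three chained nodes.
heap = {
    0x10: {"prev": None, "next": 0x20},
    0x20: {"prev": 0x10, "next": 0x30},
    0x30: {"prev": 0x20, "next": None},
}
print(looks_like_doubly_linked_list(heap))  # True
```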
{"title":"MemPick: A tool for data structure detection","authors":"I. Haller, Asia Slowinska, H. Bos","doi":"10.1109/WCRE.2013.6671327","DOIUrl":"https://doi.org/10.1109/WCRE.2013.6671327","url":null,"abstract":"Most current techniques for data structure reverse engineering are limited to low-level programming constructs, such as individual variables or structs. In practice, pointer networks connect some of these constructs, to form higher level entities like lists and trees. The lack of information about the pointer network limits our ability to efficiently perform forensics and reverse engineering. To fill this gap, we propose MemPick, a tool that detects and classifies high-level data structures used in stripped C/C++ binaries. By analyzing the evolution of the heap during program execution, it identifies and classifies the most commonly used data structures, such as singly-or doubly-linked lists, many types of trees (e.g., AVL, red-black trees, B-trees), and graphs. We evaluated MemPick on a wide variety of popular libraries and real world applications with great success.","PeriodicalId":275092,"journal":{"name":"2013 20th Working Conference on Reverse Engineering (WCRE)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116807011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gelato: GEneric language tools for model-driven analysis of legacy software systems
Pub Date: 2013-11-21 | DOI: 10.1109/WCRE.2013.6671328
Amir Saeidi, Jurriaan Hage, R. Khadka, S. Jansen
We present Gelato, an integrated set of language-independent (generic) tools for analyzing legacy software systems. Like any analysis tool set, Gelato consists of parsers, tree walkers, transformers, visualizers, and pretty printers for different programming languages. Gelato is divided into a set of components comprising language-specific bundles and a generic core. By providing a generic core, Gelato enables building tools for analyzing legacy systems independently of the languages they are implemented in. To achieve this, Gelato includes a generic, extensible imperative language called Kernel, which provides a separation between syntactic and semantic analysis. We have adopted model-driven techniques to develop the Gelato tool set, which is integrated into the Eclipse environment.
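To make the idea of a language-independent core concrete, the sketch below shows a generic tree walker over a language-neutral AST, with language-specific front ends expected to lower source code into that shared form. The node layout and class names are invented for illustration; they are not Gelato's actual Kernel language.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Language-neutral AST node produced by a language-specific parser bundle."""
    kind: str                      # e.g. "procedure", "call", "assignment"
    name: str = ""
    children: list = field(default_factory=list)

def walk(node, visitor):
    """Generic tree walker: applies a visitor callback to every node."""
    visitor(node)
    for child in node.children:
        walk(child, visitor)

# Hypothetical lowered program: one procedure containing two calls.
tree = Node("procedure", "MAIN", [Node("call", "READ_FILE"), Node("call", "REPORT")])
calls = []
walk(tree, lambda n: calls.append(n.name) if n.kind == "call" else None)
print(calls)  # ['READ_FILE', 'REPORT'] - a tiny language-independent analysis
```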
{"title":"Gelato: GEneric language tools for model-driven analysis of legacy software systems","authors":"Amir Saeidi, Jurriaan Hage, R. Khadka, S. Jansen","doi":"10.1109/WCRE.2013.6671328","DOIUrl":"https://doi.org/10.1109/WCRE.2013.6671328","url":null,"abstract":"We present an integrated set of language-independent (generic) tools for analyzing legacy software systems: Gelato. Like any analysis tool, Gelato consists of a set of parsers, tree walkers, transformers, visualizers and pretty printers for different programming languages. Gelato is divided into a set of components, comprising of a set of language-specific bundles and a generic core. By providing a generic core, Gelato enables building tools for analyzing legacy systems independent of the languages they are implemented in. To achieve this, Gelato consists of a generic extensible imperative language called Kernel which provides a separation between syntactic and semantic analysis. We have adopted model-driven techniques to develop the Gelato tool set which is integrated into the Eclipse environment.","PeriodicalId":275092,"journal":{"name":"2013 20th Working Conference on Reverse Engineering (WCRE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129308667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of cloned code on software maintainability: A replicated developer study
Pub Date: 2013-11-21 | DOI: 10.1109/WCRE.2013.6671286
Deb Chatterji, Jeffrey C. Carver, Nicholas A. Kraft, Jan Harder
Code clones are a common occurrence in most software systems. Their presence is believed to have an effect on the maintenance process. Although these effects have been studied previously, there is not yet a conclusive result. This paper describes an extended replication of a controlled experiment (i.e., a strict replication with an additional task) that analyzes the effects of cloned bugs (i.e., bugs in cloned code) on the program comprehension of programmers. In the strict replication portion, the study participants attempted to isolate and fix two types of bugs, cloned and non-cloned, in one of two small systems. In the extension of the original study, we provided the participants with a clone report describing the location of all cloned code in the other system and asked them to again isolate and fix cloned and non-cloned bugs. The results of the original study showed that cloned bugs were not significantly more difficult to maintain than non-cloned bugs. Conversely, the results of the replication showed that it was significantly more difficult to correctly fix a cloned bug than a non-cloned bug. However, there was no significant difference in the amount of time required to fix a cloned bug versus a non-cloned bug. Finally, the results of the study extension showed that programmers performed significantly better when given clone information than without it.
{"title":"Effects of cloned code on software maintainability: A replicated developer study","authors":"Deb Chatterji, Jeffrey C. Carver, Nicholas A. Kraft, Jan Harder","doi":"10.1109/WCRE.2013.6671286","DOIUrl":"https://doi.org/10.1109/WCRE.2013.6671286","url":null,"abstract":"Code clones are a common occurrence in most software systems. Their presence is believed to have an effect on the maintenance process. Although these effects have been previously studied, there is not yet a conclusive result. This paper describes an extended replication of a controlled experiment (i.e. a strict replication with an additional task) that analyzes the effects of cloned bugs (i.e. bugs in cloned code) on the program comprehension of programmers. In the strict replication portion, the study participants attempted to isolate and fix two types of bugs, cloned and non-cloned, in one of two small systems. In the extension of the original study, we provided the participants with a clone report describing the location of all cloned code in the other system and asked them to again isolate and fix cloned and non-cloned bugs. The results of the original study showed that cloned bugs were not significantly more difficult to maintain than non-cloned bugs. Conversely, the results of the replication showed that it was significantly more difficult to correctly fix a cloned bug than a non-cloned bug. But, there was no significant difference in the amount of time required to fix a cloned bug vs. a non-cloned bug. Finally, the results of the study extension showed that programmers performed significantly better when given clone information than without clone information.","PeriodicalId":275092,"journal":{"name":"2013 20th Working Conference on Reverse Engineering (WCRE)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121770780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Circe: A grammar-based oracle for testing Cross-site scripting in web applications
Pub Date: 2013-11-21 | DOI: 10.1109/WCRE.2013.6671301
Andrea Avancini, M. Ceccato
Security is a crucial concern, especially for applications, like web-based programs, that are constantly exposed to potentially malicious environments. Security testing aims at verifying the presence of security-related defects. Security tests consist of two major parts: input values to run the application, and the decision whether the actual output matches the expected output; the latter is known as the “oracle”. In this paper, we present a process to build a security oracle for testing Cross-site scripting vulnerabilities in web applications. In the learning phase, we analyze web pages generated under safe conditions to learn a model of their syntactic structure. Then, in the testing phase, the model is used to classify new test cases either as “safe tests” or as “successful attacks”. This approach has been implemented in a tool, called Circe, and empirically assessed in classifying security test cases for two real-world open source web applications.
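The exact structural model Circe learns is not given in the abstract; the sketch below only illustrates the learning/testing split with a deliberately simple model: record the HTML tag sequences of pages generated under safe inputs, then flag a test page whose tag sequence was never observed (for example because an injected script element changes the structure). The parsing strategy and the model are assumptions, not the tool's implementation.

```python
from html.parser import HTMLParser

class TagSequence(HTMLParser):
    """Collects the sequence of opening tags in a page as a structural signature."""
    def __init__(self):
        super().__init__()
        self.tags = []
    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

def signature(page):
    parser = TagSequence()
    parser.feed(page)
    return tuple(parser.tags)

# Learning phase: signatures of pages generated in safe conditions.
safe_pages = ["<html><body><p>hello</p></body></html>",
              "<html><body><p>bye</p></body></html>"]
model = {signature(p) for p in safe_pages}

# Testing phase: classify a new page as a safe test or a successful attack.
test_page = "<html><body><p><script>steal()</script></p></body></html>"
verdict = "safe test" if signature(test_page) in model else "successful attack"
print(verdict)  # successful attack - the injected <script> changes the structure
```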
{"title":"Circe: A grammar-based oracle for testing Cross-site scripting in web applications","authors":"Andrea Avancini, M. Ceccato","doi":"10.1109/WCRE.2013.6671301","DOIUrl":"https://doi.org/10.1109/WCRE.2013.6671301","url":null,"abstract":"Security is a crucial concern, especially for those applications, like web-based programs, that are constantly exposed to potentially malicious environments. Security testing aims at verifying the presence of security related defects. Security tests consist of two major parts, input values to run the application and the decision if the actual output matches the expected output, the latter is known as the “oracle”. In this paper, we present a process to build a security oracle for testing Cross-site scripting vulnerabilities in web applications. In the learning phase, we analyze web pages generated in safe conditions to learn a model of their syntactic structure. Then, in the testing phase, the model is used to classify new test cases either as “safe tests” or as “successful attacks”. This approach has been implemented in a tool, called Circe, and empirically assessed in classifying security test cases for two real world open source web applications.","PeriodicalId":275092,"journal":{"name":"2013 20th Working Conference on Reverse Engineering (WCRE)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115774746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CCCD: Concolic code clone detection
Pub Date: 2013-11-21 | DOI: 10.1109/WCRE.2013.6671332
Daniel E. Krutz, Emad Shihab
Code clones are multiple code fragments that produce similar results when provided the same input. Prior research has shown that clones can be harmful since they elevate maintenance costs, increase the number of bugs caused by inconsistent changes to cloned code, and may decrease programmer comprehensibility due to the increased size of the code base. To assist in the detection of code clones, we propose a new tool known as Concolic Code Clone Discovery (CCCD). CCCD is the first known clone detection tool that uses concolic analysis as its primary component, and it is one of only three known techniques able to reliably detect the most complicated kind of clones, type-4 clones.
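CCCD itself relies on concolic analysis, which is not reproduced here; as a much simpler stand-in that matches the abstract's definition of a clone (fragments producing similar results for the same input), the sketch below compares two functions' input/output behaviour on a set of probe inputs. This probe-based comparison is an illustrative assumption, not CCCD's algorithm.

```python
def behaviourally_similar(f, g, probe_inputs):
    """Treat two code fragments as candidate (type-4) clones if they produce
    the same output for every probe input, even when their syntax differs."""
    for x in probe_inputs:
        try:
            if f(x) != g(x):
                return False
        except Exception:
            return False
    return True

# Two syntactically different implementations of the same behaviour.
def sum_iterative(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n):
    return n * (n + 1) // 2

print(behaviourally_similar(sum_iterative, sum_closed_form, range(0, 50)))  # True
```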
{"title":"CCCD: Concolic code clone detection","authors":"Daniel E. Krutz, Emad Shihab","doi":"10.1109/WCRE.2013.6671332","DOIUrl":"https://doi.org/10.1109/WCRE.2013.6671332","url":null,"abstract":"Code clones are multiple code fragments that produce similar results when provided the same input. Prior research has shown that clones can be harmful since they elevate maintenance costs, increase the number of bugs caused by inconsistent changes to cloned code and may decrease programmer compre-hensibility due to the increased size of the code base. To assist in the detection of code clones, we propose a new tool known as Concolic Code Clone Discovery (CCCD). CCCD is the first known clone detection tool that uses concolic analysis as its primary component and is one of only three known techniques which are able to reliably detect the most complicated kind of clones, type-4 clones.","PeriodicalId":275092,"journal":{"name":"2013 20th Working Conference on Reverse Engineering (WCRE)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129423952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the effectiveness of accuracy of automated feature location technique
Pub Date: 2013-11-21 | DOI: 10.1109/WCRE.2013.6671313
T. Ishio, Shinpei Hayashi, H. Kazato, T. Oshima
Automated feature location techniques have been proposed to extract program elements that are likely to be relevant to a given feature. A more accurate result is expected to enable developers to perform feature location more effectively. However, several experiments assessing traceability recovery have shown that analysts cannot utilize an accurate traceability matrix for their tasks. Because feature location deals with a certain type of traceability link, it is an important question whether the same phenomena appear in feature location. To answer that question, we conducted a controlled experiment: we asked 20 subjects to locate features using lists of methods whose accuracy was controlled artificially. The result differs from the traceability recovery experiments: subjects given an accurate list were able to locate a feature more accurately. However, subjects could not locate the complete implementation of features in 83% of the tasks. The results show that the accuracy of automated feature location techniques is beneficial, but it might be insufficient for perfect feature location.
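The abstract does not say how the artificially controlled lists were constructed; the sketch below shows one plausible way to assemble a method list at a target precision by mixing relevant and irrelevant methods, purely to make the experimental setup concrete. The method names and proportions are hypothetical.

```python
import random

def build_list(relevant, irrelevant, target_precision, size, seed=0):
    """Assemble a method list of the given size whose precision
    (fraction of relevant methods) approximates target_precision."""
    rng = random.Random(seed)
    n_relevant = min(len(relevant), round(size * target_precision))
    picked = rng.sample(relevant, n_relevant)
    picked += rng.sample(irrelevant, size - n_relevant)
    rng.shuffle(picked)
    return picked

relevant = ["Order.submit", "Order.validate", "Cart.total"]
irrelevant = ["Logger.log", "Config.load", "Cache.evict", "Db.connect"]
lst = build_list(relevant, irrelevant, target_precision=0.5, size=4)
print(lst, sum(m in relevant for m in lst) / len(lst))  # the list and its precision
```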
{"title":"On the effectiveness of accuracy of automated feature location technique","authors":"T. Ishio, Shinpei Hayashi, H. Kazato, T. Oshima","doi":"10.1109/WCRE.2013.6671313","DOIUrl":"https://doi.org/10.1109/WCRE.2013.6671313","url":null,"abstract":"Automated feature location techniques have been proposed to extract program elements that are likely to be relevant to a given feature. A more accurate result is expected to enable developers to perform more accurate feature location. However, several experiments assessing traceability recovery have shown that analysts cannot utilize an accurate traceability matrix for their tasks. Because feature location deals with a certain type of traceability links, it is an important question whether the same phenomena are visible in feature location or not. To answer that question, we have conducted a controlled experiment. We have asked 20 subjects to locate features using lists of methods of which the accuracy is controlled artificially. The result differs from the traceability recovery experiments. Subjects given an accurate list would be able to locate a feature more accurately. However, subjects could not locate the complete implementation of features in 83% of tasks. Results show that the accuracy of automated feature location techniques is effective, but it might be insufficient for perfect feature location.","PeriodicalId":275092,"journal":{"name":"2013 20th Working Conference on Reverse Engineering (WCRE)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130596442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analyzing PL/1 legacy ecosystems: An experience report
Pub Date: 2013-11-21 | DOI: 10.1109/WCRE.2013.6671320
Erika Aeschlimann, M. Lungu, Oscar Nierstrasz, Carl F. Worms
This paper presents a case study of analyzing a legacy PL/1 ecosystem that has grown for 40 years to support the business needs of a large banking company. To support the stakeholders in analyzing it, we developed St1-PL/1, a tool that parses the code for association data and computes structural metrics, which it then visualizes using top-down interactive exploration. Before building the tool and after demonstrating it to stakeholders, we conducted several interviews to learn about requirements for legacy ecosystem analysis. We briefly introduce the tool and then present the results of analyzing the case study. We show that although the vision for the future is an ecosystem architecture in which systems are as decoupled as possible, the current state of the ecosystem is still far from this. We also present some of the lessons learned during our discussions with stakeholders, which include their interest in automatically assessing the quality of the legacy code.
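The metrics St1-PL/1 computes are not enumerated in the abstract; the sketch below computes one simple structural metric, afferent/efferent coupling per system, from call associations between programs, just to illustrate the kind of association data and decoupling assessment involved. The data model and metric choice are assumptions, not the tool's actual output.

```python
from collections import Counter

def coupling(calls):
    """calls: (caller_system, callee_system) pairs extracted from parsed PL/1 code.
    Returns per-system efferent (outgoing) and afferent (incoming) coupling."""
    efferent, afferent = Counter(), Counter()
    for caller, callee in calls:
        if caller != callee:            # only cross-system associations count
            efferent[caller] += 1
            afferent[callee] += 1
    return efferent, afferent

# Hypothetical associations between three banking subsystems.
calls = [("PAYMENTS", "ACCOUNTS"), ("PAYMENTS", "ACCOUNTS"),
         ("REPORTING", "ACCOUNTS"), ("ACCOUNTS", "ACCOUNTS")]
eff, aff = coupling(calls)
print(dict(eff), dict(aff))  # {'PAYMENTS': 2, 'REPORTING': 1} {'ACCOUNTS': 3}
```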
{"title":"Analyzing PL/1 legacy ecosystems: An experience report","authors":"Erika Aeschlimann, M. Lungu, Oscar Nierstrasz, Carl F. Worms","doi":"10.1109/WCRE.2013.6671320","DOIUrl":"https://doi.org/10.1109/WCRE.2013.6671320","url":null,"abstract":"This paper presents a case study of analyzing a legacy PL/1 ecosystem that has grown for 40 years to support the business needs of a large banking company. In order to support the stakeholders in analyzing it we developed St1-PL/1 - a tool that parses the code for association data and computes structural metrics which it then visualizes using top-down interactive exploration. Before building the tool and after demonstrating it to stakeholders we conducted several interviews to learn about legacy ecosystem analysis requirements. We briefly introduce the tool and then present results of analysing the case study. We show that although the vision for the future is to have an ecosystem architecture in which systems are as decoupled as possible the current state of the ecosystem is still removed from this. We also present some of the lessons learned during our experience discussions with stakeholders which include their interests in automatically assessing the quality of the legacy code.","PeriodicalId":275092,"journal":{"name":"2013 20th Working Conference on Reverse Engineering (WCRE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130775250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}