Title: Extracting business rules from COBOL: A model-based framework
Authors: Valerio Cosentino, Jordi Cabot, P. Albert, Philippe Bauquel, Jacques Perronnet
Pub Date: 2013-11-21. DOI: 10.1109/WCRE.2013.6671316
Venue: 2013 20th Working Conference on Reverse Engineering (WCRE)
Abstract: Organizations rely on the logic embedded in their Information Systems for their daily operations. This logic implements the business rules in place in the organization, which must be continuously adapted in response to market changes. Unfortunately, this evolution also implies understanding and evolving the underlying software components that enforce those rules. This is challenging because, first, the code implementing the rules is scattered throughout the whole system and, second, the documentation is most often poor and out of date. This is especially true for older systems that have been maintained and evolved for years (even decades). In such systems, it is not even clear which business rules are enforced, nor whether the rules are still consistent with current organizational policies. The goal of this paper is therefore to facilitate the comprehension of legacy systems (in particular COBOL-based ones) by providing a model-driven reverse engineering framework able to extract and visualize the business logic embedded in them.
Title: Mining the relationship between anti-patterns dependencies and fault-proneness
Authors: Fehmi Jaafar, Yann-Gaël Guéhéneuc, Sylvie Hamel, Foutse Khomh
Pub Date: 2013-11-21. DOI: 10.1109/WCRE.2013.6671310
Abstract: Anti-patterns describe poor solutions to design and implementation problems that are claimed to make object-oriented systems hard to maintain. Anti-patterns indicate design weaknesses that may slow down development or increase the risk of faults or failures in the future. Classes in anti-patterns have dependencies, such as static relationships, that may propagate potential problems to other classes. To the best of our knowledge, the relationship between anti-pattern dependencies (with non-anti-pattern classes) and faults has yet to be studied in detail. This paper presents the results of an empirical study aimed at analysing anti-pattern dependencies in three open-source software systems, namely ArgoUML, JFreeChart, and XercesJ. We show that, in almost all releases of the three systems, classes having dependencies with anti-patterns are more fault-prone than others. We also report other observations about these dependencies, such as their impact on fault prediction. Software organizations could use this knowledge about anti-pattern dependencies to better focus their testing and review activities on the most risky classes, e.g., classes with fault-prone dependencies with anti-patterns.
Title: Workshop on open and original problems in software language engineering
Authors: A. H. Bagge, Vadim Zaytsev
Pub Date: 2013-11-21. DOI: 10.1109/WCRE.2013.6671334
Abstract: The OOPSLE workshop is a discussion-oriented and collaborative forum for formulating and addressing open, unsolved, and unsolvable problems in software language engineering (SLE), a research domain concerned with systematic, disciplined, and measurable approaches to the development, evolution, and maintenance of artificial languages used in software development. OOPSLE aims to serve as a think tank for selecting candidates for the open problem list, as well as other kinds of unconventional questions and definitions that do not necessarily have clear answers or solutions, thus facilitating the exposure of dark data. We also plan to formulate promising language-related challenges to organise in the future. http://oopsle.github.io
Title: Who allocated my memory? Detecting custom memory allocators in C binaries
Authors: X. Chen, Asia Slowinska, H. Bos
Pub Date: 2013-11-21. DOI: 10.1109/WCRE.2013.6671277
Abstract: Many reversing techniques for data structures rely on knowledge of the memory allocation routines. Typically, they interpose on the system's malloc and free functions and track each chunk of memory thus allocated as a data structure. However, many performance-critical applications implement their own custom memory allocators; examples include web servers, database management systems, and compilers like gcc and clang. As a result, current binary analysis techniques for tracking data structures fail on such binaries. We present MemBrush, a new tool that detects memory allocation and deallocation functions in stripped binaries with high accuracy. We evaluated the technique on a large number of real-world applications that use custom memory allocators. As we show, we can furnish existing reversing tools with detailed information about the memory management API and, as a result, analyze the actual application-specific data structures designed by the programmer. Our system uses dynamic analysis and detects memory allocation and deallocation routines by searching for functions that comply with a set of generic characteristics of allocators and deallocators.
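The generic allocator characteristics mentioned in the abstract can be illustrated with a toy example: a function whose calls keep returning pairwise-distinct addresses behaves like an allocator, while one that keeps returning the same pointer does not. The sketch below applies only this single heuristic to a hypothetical dynamic trace; the function names and addresses are invented, and MemBrush's real detector uses a much richer set of characteristics.

```python
from collections import defaultdict

def allocator_candidates(trace, min_calls=3):
    """Flag functions whose calls return pairwise-distinct addresses:
    one generic characteristic of an allocator. A toy stand-in for
    MemBrush's dynamic heuristics, not its real detector."""
    returns = defaultdict(list)
    for func, retval in trace:
        returns[func].append(retval)
    return sorted(
        f for f, vals in returns.items()
        if len(vals) >= min_calls and len(set(vals)) == len(vals)
    )

# Hypothetical trace of (function, returned address) pairs:
trace = [
    ("my_alloc", 0x1000), ("my_alloc", 0x1040), ("my_alloc", 0x1080),
    ("get_config", 0x2000), ("get_config", 0x2000), ("get_config", 0x2000),
]
print(allocator_candidates(trace))  # ['my_alloc']
```

A real detector would also have to check, for example, that the returned regions do not overlap and that a matching deallocation routine exists.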
Title: Reusing reused code
Authors: Tomoya Ishihara, Keisuke Hotta, Yoshiki Higo, S. Kusumoto
Pub Date: 2013-11-21. DOI: 10.1109/WCRE.2013.6671322
Abstract: Although source code search systems are well known to be helpful for reusing source code, they often suggest larger code fragments than users actually need. This is because they suggest code based on the structural units of programming languages, such as files or classes. In this paper, we propose a new code search technique that considers past reuse: code is suggested at the unit of past reuse. The proposed technique detects reused code by using a fine-grained code clone detection technique. We conducted an experiment to compare the proposed technique with an existing technique. The results show that the proposed technique supports code reuse more effectively than the existing one.
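As a rough illustration of suggesting code at the unit of past reuse rather than at file or class granularity, the sketch below ranks hypothetical reuse units by token overlap with a query, preferring smaller fragments. The unit names and token bags are invented, and this is not the paper's clone-based detection technique.

```python
def suggest(query_tokens, reuse_units):
    """Rank past-reuse units by token overlap with the query,
    preferring smaller fragments so suggestions are no larger
    than needed. A toy sketch of searching at the granularity
    of previously reused fragments."""
    scored = []
    for name, tokens in reuse_units.items():
        overlap = len(set(query_tokens) & set(tokens))
        if overlap:
            scored.append((name, overlap, len(tokens)))
    scored.sort(key=lambda t: (-t[1], t[2]))  # more overlap, then smaller
    return [name for name, _, _ in scored]

# Hypothetical reuse units (token bags of fragments seen reused before):
units = {
    "read_lines":     ["open", "read", "lines", "close"],
    "io_utils_class": ["open", "read", "lines", "close", "write", "seek"],
    "string_utils":   ["split", "join"],
}
print(suggest(["open", "read", "lines"], units))
# ['read_lines', 'io_utils_class']
```

The tie-break on fragment size is what keeps the suggestion from growing to the whole class when a small, previously reused slice of it already matches.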
Title: Distilling useful clones by contextual differencing
Authors: Zhenchang Xing, Yinxing Xue, S. Jarzabek
Pub Date: 2013-11-21. DOI: 10.1109/WCRE.2013.6671285
Abstract: Clone detectors find similar code fragments and report large numbers of them for large systems. Textually similar clones may perform different computations, depending on the program context in which they occur. Understanding these contextual differences is essential to distill clones that are useful for a specific maintenance task, such as refactoring. Manual analysis of contextual differences is time-consuming and error-prone. To mitigate this problem, we present an automated approach that helps developers find and analyze the contextual differences of clones. Our approach represents the context of clones as program dependence graphs and applies a graph differencing technique to identify the required contextual differences. We implemented a tool called CloneDifferentiator that identifies contextual differences of clones and allows developers to formulate queries to distill candidate clones that are useful for a given refactoring task. Two empirical studies show that CloneDifferentiator can reduce the effort of post-detection analysis of clones for refactoring.
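If each clone's context is simplified to a set of program-dependence edges, a contextual difference reduces to a set difference. A minimal sketch under that simplification follows; the edge triples are invented, and CloneDifferentiator's graph differencing is more sophisticated than plain set operations.

```python
def context_diff(pdg_a, pdg_b):
    """Difference two program-dependence-graph contexts, each given
    as a set of (source, kind, target) edges. A simplified sketch of
    contextual differencing, not the tool's graph-matching algorithm."""
    return {
        "only_in_a": sorted(pdg_a - pdg_b),
        "only_in_b": sorted(pdg_b - pdg_a),
    }

# Hypothetical contexts of two textually similar clones:
a = {("total", "data", "discount"), ("discount", "control", "log")}
b = {("total", "data", "discount"), ("total", "data", "tax")}
d = context_diff(a, b)
print(d["only_in_a"])  # [('discount', 'control', 'log')]
print(d["only_in_b"])  # [('total', 'data', 'tax')]
```

A query such as "clones whose contexts differ only in control dependencies" can then be answered by filtering these difference sets.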
Title: MemPick: High-level data structure detection in C/C++ binaries
Authors: I. Haller, Asia Slowinska, H. Bos
Pub Date: 2013-11-21. DOI: 10.1109/WCRE.2013.6671278
Abstract: Many existing techniques for reversing data structures in C/C++ binaries are limited to low-level programming constructs, such as individual variables or structs. Unfortunately, without detailed information about a program's pointer structures, forensics and reverse engineering are exceedingly hard. To fill this gap, we propose MemPick, a tool that detects and classifies high-level data structures used in stripped binaries. By analyzing how links between memory objects evolve throughout the program execution, it distinguishes between many commonly used data structures, such as singly- and doubly-linked lists, many types of trees (e.g., AVL trees, red-black trees, B-trees), and graphs. We evaluated the technique on 10 real-world applications and 16 popular libraries. The results show that MemPick can identify the data structures with high accuracy.
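The idea of classifying structures from the links between memory objects can be pictured on a single static snapshot: in-degrees and out-degrees of the pointer graph already separate lists from trees from general graphs. The toy classifier below works in that spirit; it is not MemPick's actual algorithm, which observes how the links evolve over the whole execution.

```python
from collections import defaultdict

def classify_shape(edges):
    """Very rough shape classifier for one snapshot of heap links,
    given as (source, target) pointer pairs between memory objects."""
    out = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for src, dst in edges:
        out[src].append(dst)
        indeg[dst] += 1
        nodes.update((src, dst))
    roots = [n for n in nodes if indeg[n] == 0]
    max_out = max((len(v) for v in out.values()), default=0)
    # A single entry point and no sharing suggests a list or a tree;
    # anything else is treated as a general graph here.
    if len(roots) == 1 and all(indeg[n] <= 1 for n in nodes):
        if max_out <= 1:
            return "singly-linked list"
        if max_out <= 2:
            return "binary tree"
    return "graph"

print(classify_shape([(1, 2), (2, 3), (3, 4)]))  # singly-linked list
print(classify_shape([(1, 2), (1, 3), (2, 4)]))  # binary tree
```

Distinguishing, say, an AVL tree from a red-black tree requires the temporal dimension (rotation patterns during rebalancing), which a single snapshot cannot provide.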
Title: Recommending Move Method refactorings using dependency sets
Authors: Vitor Sales, Ricardo Terra, L. F. Miranda, M. T. Valente
Pub Date: 2013-11-21. DOI: 10.1109/WCRE.2013.6671298
Abstract: Methods implemented in the wrong classes are a common bad smell in object-oriented systems, especially in systems maintained and evolved for years. To tackle this design flaw, we propose a novel approach that recommends Move Method refactorings based on the set of static dependencies established by a method. More specifically, our approach compares the similarity of the dependencies established by a source method with the dependencies established by the methods in possible target classes. We evaluated our approach using systems from a compiled version of the Qualitas Corpus. Our approach achieves an average precision of 60.63% and an average recall of 81.07%. These results are, respectively, 129% and 49% better than those achieved by JDeodorant, a well-known Move Method recommendation system.
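The comparison of dependency sets described in the abstract can be sketched with Jaccard similarity. The paper does not necessarily use this exact measure, and the class, method, and type names below are invented.

```python
def jaccard(a, b):
    """Jaccard similarity of two dependency sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def rank_move_targets(method_deps, classes):
    """Rank candidate target classes by the average similarity between
    their methods' dependency sets and the source method's set."""
    scores = {}
    for cls, methods in classes.items():
        sims = [jaccard(method_deps, deps) for deps in methods.values()]
        scores[cls] = sum(sims) / len(sims) if sims else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Invented example: types referenced by the method under analysis
# and by the methods of two candidate target classes.
method_deps = {"Order", "Invoice", "TaxRule"}
classes = {
    "Billing":  {"issue": {"Invoice", "TaxRule", "Customer"},
                 "total": {"Invoice", "Order"}},
    "Shipping": {"dispatch": {"Parcel", "Address"}},
}
print(rank_move_targets(method_deps, classes)[0][0])  # Billing
```

A recommendation would then be emitted only when the top-ranked class scores higher than the method's current class.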
Title: LigRE: Reverse-engineering of control and data flow models for black-box XSS detection
Authors: F. Duchene, Sanjay Rawat, J. Richier, Roland Groz
Pub Date: 2013-11-21. DOI: 10.1109/WCRE.2013.6671300
Abstract: Fuzz testing consists of automatically generating and sending malicious inputs to an application in the hope of triggering a vulnerability. To be efficient, fuzzing should answer questions such as: Where should a malicious value be sent? Where should its effects be observed? How can the system be driven into such states? Answering these questions requires a sufficiently precise understanding of the application. Reverse engineering is one way to gain this knowledge, especially in a black-box setting. In fact, given the complexity of modern web applications, automated black-box scanners alternate between reverse engineering and fuzzing web applications to detect vulnerabilities. We present an approach, named LigRE, which improves the reverse engineering to guide the fuzzing. We adapt a method to automatically learn a control flow model of web applications, and we annotate this model with inferred data flows. Afterwards, we generate slices of the model to guide the scope of a fuzzer. Empirical experiments show that LigRE increases the detection capabilities for Cross-Site Scripting (XSS), a particular case of web command injection vulnerabilities.
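A model slice that guides a fuzzer's scope can be pictured as the set of navigation paths leading from a page where a value is injected to a page where it is reflected. Below is a minimal sketch over a hypothetical control flow model; the page names are invented, and LigRE's learned models and data-flow annotations are far richer than this adjacency map.

```python
from collections import deque

def slice_paths(nav, inject, reflect):
    """Enumerate acyclic navigation paths that drive the application
    from a page where a value can be injected to a page where it is
    reflected: the kind of model slice that can focus a fuzzer."""
    paths, queue = [], deque([[inject]])
    while queue:
        path = queue.popleft()
        if path[-1] == reflect:
            paths.append(path)
            continue
        for nxt in nav.get(path[-1], []):
            if nxt not in path:  # keep paths acyclic
                queue.append(path + [nxt])
    return paths

# Hypothetical control flow model of a small web application:
nav = {"login": ["search", "profile"],
       "search": ["results"],
       "profile": ["results"]}
print(slice_paths(nav, "login", "results"))
# [['login', 'search', 'results'], ['login', 'profile', 'results']]
```

The fuzzer would then replay one such path, injecting payloads at the chosen page and observing the reflection page, instead of fuzzing every request blindly.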
Title: Do developers care about code smells? An exploratory survey
Authors: A. Yamashita, L. Moonen
Pub Date: 2013-11-21. DOI: 10.1109/WCRE.2013.6671299
Abstract: Code smells are a well-known metaphor for symptoms of code decay or other code quality issues that can lead to a variety of maintenance problems. Even though code smell detection and removal have been well researched over the last decade, it remains open to debate whether code smells should be considered meaningful conceptualizations of code quality issues from the developer's perspective. To some extent, this question also applies to the results provided by current code smell detection tools. Are code smells really important to developers? If they are not, is this due to a lack of relevance of the underlying concepts, a lack of awareness about code smells on the developers' side, or a lack of appropriate tools for code smell analysis and removal? To align and direct research efforts toward the actual needs and problems of professional developers, we need to better understand developers' knowledge about, and interest in, code smells, together with their perceived criticality. This paper reports the results of an exploratory survey of 85 professional software developers.