Recovering traceability links between unit tests and classes under test: An improved method
A. Qusef, R. Oliveto, A. D. Lucia
Pub Date: 2010-09-12 | DOI: 10.1109/ICSM.2010.5609581
Unit tests are valuable as a source of up-to-date documentation, as developers continuously change them to reflect changes in the production code and keep an effective regression suite. Maintaining traceability links between unit tests and the classes under test can help developers comprehend parts of a system. In particular, unit tests show how parts of a system are executed and, as such, how they are supposed to be used. Moreover, the dependencies between unit tests and classes can be exploited to maintain consistency during refactoring. Generally, such dependencies are not explicitly maintained and have to be recovered during software development. Some guidelines and naming conventions have been defined to describe the testing environment so that the tests related to a programming task can be easily identified. However, these guidelines are very often not followed, making the identification of links between unit tests and classes a time-consuming task. Thus, automatic approaches to recover such links are needed. In this paper, a traceability recovery approach based on Data Flow Analysis (DFA) is presented. In particular, the approach retrieves as tested classes all the classes that affect the result of the last assert statement in each method of the unit test class. The accuracy of the proposed method has been empirically evaluated on two systems, one open source and one industrial. As a benchmark, we compare the accuracy of the DFA-based approach with that of previously used traceability recovery approaches, namely Naming Convention (NC) and Last Call Before Assert (LCBA), which seem to provide the most accurate results. The results show that the proposed approach is the most accurate, demonstrating the effectiveness of DFA. However, the case study also highlights the limitations of the experimented traceability recovery approaches, showing that detecting the class under test cannot be fully automated and some issues are still under study.
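To make the comparison among the criteria concrete, consider the following contrived, self-contained Java sketch (all class names are hypothetical, not taken from the paper's case studies):

class Item { final double price; Item(double p) { price = p; } }
class DiscountPolicy { final double rate; DiscountPolicy(double r) { rate = r; } }
class ShoppingCart {
    private double sum = 0;
    void add(Item i) { sum += i.price; }
    double total(DiscountPolicy p) { return sum * (1 - p.rate); }
}
class Logger { static void log(String s) { System.out.println(s); } }

public class ShoppingCartTest {
    public static void main(String[] args) {
        ShoppingCart cart = new ShoppingCart();   // NC links this test to ShoppingCart by name
        cart.add(new Item(10.0));
        double total = cart.total(new DiscountPolicy(0.1));
        Logger.log("total = " + total);           // the last call before the assert
        assert Math.abs(total - 9.0) < 1e-6;
        // LCBA reports Logger, the receiver of the last call before the assert: a false link.
        // The DFA-based criterion instead follows the definitions that reach `total`
        // and reports ShoppingCart, Item, and DiscountPolicy, the classes that
        // actually affect the asserted value.
    }
}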
{"title":"Recovering traceability links between unit tests and classes under test: An improved method","authors":"A. Qusef, R. Oliveto, A. D. Lucia","doi":"10.1109/ICSM.2010.5609581","DOIUrl":"https://doi.org/10.1109/ICSM.2010.5609581","url":null,"abstract":"Unit tests are valuable as a source of up-to-date documentation as developers continuously changes them to reflect changes in the production code to keep an effective regression suite. Maintaining traceability links between unit tests and classes under test can help developers to comprehend parts of a system. In particular, unit tests show how parts of a system are executed and as such how they are supposed to be used. Moreover, the dependencies between unit tests and classes can be exploited to maintain the consistency during refactoring. Generally, such dependences are not explicitly maintained and they have to be recovered during software development. Some guidelines and naming conventions have been defined to describe the testing environment in order to easily identify related tests for a programming task. However, very often these guidelines are not followed making the identification of links between unit tests and classes a time-consuming task. Thus, automatic approaches to recover such links are needed. In this paper a traceability recovery approach based on Data Flow Analysis (DFA) is presented. In particular, the approach retrieves as tested classes all the classes that affect the result of the last assert statement in each method of the unit test class. The accuracy of the proposed method has been empirically evaluated on two systems, an open source system and an industrial system. As a benchmark, we compare the accuracy of the DFA-based approach with the accuracy of the previously used traceability recovery approaches, namely Naming Convention (NC) and Last Call Before Assert (LCBA) that seem to provide the most accurate results. The results show that the proposed approach is the most accurate method demonstrating the effectiveness of DFA. However, the case study also highlights the limitations of the experimented traceability recovery approaches, showing that detecting the class under test cannot be fully automated and some issues are still under study.","PeriodicalId":101801,"journal":{"name":"2010 IEEE International Conference on Software Maintenance","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131640000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Program analysis and transformation for data-intensive system evolution
Anthony Cleve
Pub Date: 2010-09-12 | DOI: 10.1109/ICSM.2010.5609724
Data-intensive software systems generally consist of a database and a collection of application programs in strong interaction with it. They constitute critical assets in most enterprises, since they support business activities in all production and management domains. Data-intensive systems form most of the so-called legacy systems: they are typically one or more decades old, very large, heterogeneous, and highly complex. Many of them significantly resist modification and change due to the lack of documentation, the use of aging technologies, and inflexible architectures. Therefore, the evolution of data-intensive systems clearly calls for automated support. This thesis explores the use of automated program analysis and transformation techniques in support of the evolution of the database component of the system. The program analysis techniques aim to ease the database evolution process by helping developers understand the data structures that are to be changed, despite the lack of precise and up-to-date documentation. The objective of the program transformation techniques is to support the adaptation of the application programs to the new database. This adaptation process is studied in the context of two realistic database evolution scenarios, namely database schema refactoring and database platform migration.
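As a small illustration of the co-evolution problem the thesis addresses (my own sketch, not the thesis' transformation machinery), consider a schema refactoring that renames a column; every embedded query mentioning the old name must be adapted accordingly. A real tool would rewrite the program source, not strings at run time:

public class QueryAdaptation {
    // schema refactoring applied to the database (exact syntax varies by DBMS):
    //   ALTER TABLE CUSTOMERS RENAME COLUMN CUST_NAME TO CUSTOMER_NAME
    static String adapt(String sql) {
        return sql.replaceAll("\\bCUST_NAME\\b", "CUSTOMER_NAME");
    }

    public static void main(String[] args) {
        String legacy = "SELECT CUST_NAME FROM CUSTOMERS WHERE CUST_NAME LIKE ?";
        System.out.println(adapt(legacy));
        // -> SELECT CUSTOMER_NAME FROM CUSTOMERS WHERE CUSTOMER_NAME LIKE ?
    }
}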
{"title":"Program analysis and transformation for data-intensive system evolution","authors":"Anthony Cleve","doi":"10.1109/ICSM.2010.5609724","DOIUrl":"https://doi.org/10.1109/ICSM.2010.5609724","url":null,"abstract":"Data-intensive software systems are generally made of a database and a collection of application programs in strong interaction with the former. They constitute critical assets in most enterprises, since they support business activities in all production and management domains. Data-intensive systems form most of the so-called legacy systems: they typically are one or more decades old, they are very large, heterogeneous and highly complex. Many of them significantly resist modifications and change due to the lack of documentation, to the use of aging technologies and to inflexible architectures. Therefore, the evolution of data-intensive systems clearly calls for automated support. This thesis explores the use of automated program analysis and transformation techniques in support to the evolution of the database component of the system. The program analysis techniques aim to ease the database evolution process, by helping the developers to understand the data structures that are to be changed, despite the lack of precise and up-to-date documentation. The objective of the program transformation techniques is to support the adaptation of the application programs to the new database. This adaptation process is studied in the context of two realistic database evolution scenarios, namely database database schema refactoring and database platform migration.","PeriodicalId":101801,"journal":{"name":"2010 IEEE International Conference on Software Maintenance","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132336269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SQUANER: A framework for monitoring the quality of software systems
Nicolas Haderer, Foutse Khomh, G. Antoniol
Pub Date: 2010-09-12 | DOI: 10.1109/ICSM.2010.5609684
Despite the large number of quality models and publicly available quality assessment tools like PMD, Checkstyle, or FindBugs, very few studies have investigated the use of quality models by developers in their daily activities. One reason for this lack of studies is the absence of integrated environments for monitoring the evolution of software quality. We propose SQUANER (Software QUality ANalyzER), a framework for monitoring the evolution of the quality of object-oriented systems. SQUANER connects directly to the SVN repository of a system, extracts the source code, and performs quality evaluation and fault prediction every time a commit is made by a developer. After quality analysis, feedback is provided to developers with instructions on how to improve their code.
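The commit-triggered workflow can be pictured with a minimal polling sketch (an assumed workflow, not SQUANER's actual implementation; note that `svn info --show-item` requires Subversion 1.9 or later):

import java.io.*;

public class QualityMonitor {
    static String run(String... cmd) throws Exception {
        Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            for (String line; (line = r.readLine()) != null; ) out.append(line).append('\n');
        }
        p.waitFor();
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String last = "";
        while (true) {
            String rev = run("svn", "info", "--show-item", "revision", "wc/");
            if (!rev.equals(last)) {     // a new commit has arrived
                last = rev;
                run("svn", "update", "wc/");
                // placeholder: the real analysis (quality models, fault
                // prediction) would run here and send feedback to the committer
                System.out.println("rev " + rev.trim() + ": re-running quality checks");
            }
            Thread.sleep(60_000);
        }
    }
}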
{"title":"SQUANER: A framework for monitoring the quality of software systems","authors":"Nicolas Haderer, Foutse Khomh, G. Antoniol","doi":"10.1109/ICSM.2010.5609684","DOIUrl":"https://doi.org/10.1109/ICSM.2010.5609684","url":null,"abstract":"Despite the large number of quality models and publicly available quality assessment tools like PMD, Checkstyle, or FindBugs, very few studies have investigated the use of quality models by developers in their daily activities. One reason for this lack of studies is the absence of integrated environments for monitoring the evolution of software quality. We propose SQUANER (Software QUality ANalyzER), a framework for monitoring the evolution of the quality of object-oriented systems. SQUANER connects directly to the SVN of a system, extracts the source code, and perform quality evaluations and faults predictions every time a commit is made by a developer. After quality analysis, a feedback is provided to developers with instructions on how to improve their code.","PeriodicalId":101801,"journal":{"name":"2010 IEEE International Conference on Software Maintenance","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121315542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reverse engineering object-oriented distributed systems
Dan C. Cosma
Pub Date: 2010-09-12 | DOI: 10.1109/ICSM.2010.5609716
A significant part of modern software systems is designed and implemented as object-oriented distributed applications, addressing the needs of a globally connected society. While such systems can be analyzed by focusing only on their object-oriented nature, their understanding and quality assessment require very specific, technology-dependent analysis approaches. This doctoral dissertation describes a methodology for understanding object-oriented distributed systems through a reverse engineering process driven by the assessment of their technological and domain-specific particularities. The approach provides both system-wide and class-level characterizations, capturing the architectural traits of the systems and assessing the impact of distribution-aware features throughout the application. The methodology defines a mostly automated analysis process, fully supported by a tool infrastructure, that provides means for a detailed understanding of distribution-related traits and includes basic support for the potentially consequent system restructuring.
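One example of the kind of technology-dependent check such a methodology relies on (my illustration, not the dissertation's tooling) is classifying a class as distribution-aware when it participates in Java RMI:

import java.rmi.Remote;
import java.rmi.server.UnicastRemoteObject;

public class DistributionCheck {
    static boolean isRmiAware(Class<?> c) {
        return Remote.class.isAssignableFrom(c)               // remote interface or implementation
            || UnicastRemoteObject.class.isAssignableFrom(c); // exported server object
    }

    public static void main(String[] args) {
        System.out.println(isRmiAware(java.rmi.registry.Registry.class)); // true
        System.out.println(isRmiAware(String.class));                     // false
    }
}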
{"title":"Reverse engineering object-oriented distributed systems","authors":"Dan C. Cosma","doi":"10.1109/ICSM.2010.5609716","DOIUrl":"https://doi.org/10.1109/ICSM.2010.5609716","url":null,"abstract":"A significant part of the modern software systems are designed and implemented as object-oriented distributed applications, addressing the needs of a globally-connected society. While they can be analyzed focusing only on their object-oriented nature, their understanding and quality assessment require very specific, technology-dependent analysis approaches. This doctoral dissertation describes a methodology for understanding object-oriented distributed systems using a process of reverse engineering driven by the assessment of their technological and domain-specific particularities. The approach provides both system-wide and class-level characterizations, capturing the architectural traits of the systems, and assessing the impact of the distribution-aware features throughout the application. The methodology describes a mostly-automated analysis process fully supported by a tools infrastructure, providing means for detailed understanding of the distribution-related traits and including basic support for the potentially consequent system restructuring.","PeriodicalId":101801,"journal":{"name":"2010 IEEE International Conference on Software Maintenance","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121119021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software asset management
M. Jakubicka
Pub Date: 2010-09-12 | DOI: 10.1109/ICSM.2010.5609662
Software asset management represents an important part of software maintenance. The paper describes the main issues arising from several perspectives, such as legislation, management, and finance. It analyses the design of a software asset management system developed for university purposes and addresses the most significant issues in this environment.
{"title":"Software asset management","authors":"M. Jakubicka","doi":"10.1109/ICSM.2010.5609662","DOIUrl":"https://doi.org/10.1109/ICSM.2010.5609662","url":null,"abstract":"Software asset management represents an important part of software maintenance. The paper describes the main issues arising from several aspects such as legislation, management, and finance. It analyses the design of a software asset management system developed for University purposes and addresses the most significant issues in this environment.","PeriodicalId":101801,"journal":{"name":"2010 IEEE International Conference on Software Maintenance","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121955394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adapting COTS products
D. Wile, R. Balzer, N. Goldman, Marcelo Tallis, Alexander Egyed, T. Hollebeek
Pub Date: 2010-09-12 | DOI: 10.1109/ICSM.2010.5609658
COTS products can play various architectural roles in software systems: as interfaces to problem-specific functionality, as components that provide such functionality themselves, and as intermediary connectors and components in more complex systems. In these roles, COTS products impose their own, unique constraints on organization and functionality. Over the last ten years, we have gained considerable experience with adopting, adapting, and living with the limitations of COTS products. Our goal was to adapt the COTS product to make it fit the application rather than adapting the application's needs to fit the COTS product - thus, in essence, adapting the COTS product without access to its source code or documentation (a unique form of maintenance). We report on a large set of experiences involving eight COTS products and a wide range of COTS-based software systems, most of which were developed with and for industrial partners or government agencies. This experience report attempts both to give a feeling for how applications can be augmented with such COTS interfaces and to tease out the specific architectural issues that anyone adapting COTS products is certain to face.
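The general wrapping idea can be sketched as follows (hypothetical names throughout; the report describes far richer techniques): the application programs against an interface it owns, and a thin adapter absorbs the COTS product's calling conventions without touching its source code.

interface SpellChecker {                        // the interface the application owns
    boolean isCorrect(String word);
}

final class CotsSpellEngine {                   // stand-in for a closed-source COTS API
    int check(String w) { return w.matches("[a-z]+") ? 0 : 2; }  // 0 = OK, else error code
}

class CotsSpellCheckerAdapter implements SpellChecker {
    private final CotsSpellEngine engine = new CotsSpellEngine();
    public boolean isCorrect(String word) {
        // adapt the COTS calling convention (numeric status codes,
        // lowercase-only input) to the application's contract
        return engine.check(word.toLowerCase()) == 0;
    }
}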
{"title":"Adapting COTS products","authors":"D. Wile, R. Balzer, N. Goldman, Marcelo Tallis, Alexander Egyed, T. Hollebeek","doi":"10.1109/ICSM.2010.5609658","DOIUrl":"https://doi.org/10.1109/ICSM.2010.5609658","url":null,"abstract":"COTS products can play various architectural roles in software systems: as interfaces to problem-specific functionality, as components that provide such functionality itself, and as intermediary connectors and components in more complex systems. In doing so, COTS products impose their own, unique constraints on organization and functionality. Over the last ten years, we have gained considerable experience with adopting, adapting, and living with the limitations of COTS products. Our goal was to adapt the COTS product to make it fit the application rather than adapting the application needs to make them fit the COTS product - thus, in essence, adapting the COTS product without access to its source code or documentation (a unique form of maintenance). We report on a large set of experiences involving eight COTS products and a wide range of COTS-Based Software Systems - most of which were done with and for industrial partners or government agencies. This experience report attempts to both give a feeling for how applications can be augmented with such COTS interfaces and also tries to tease out the specific architectural issues that anyone adapting COTS products is certain to face.","PeriodicalId":101801,"journal":{"name":"2010 IEEE International Conference on Software Maintenance","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131121871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pairwise test set calculation using k-partite graphs
Elke Salecker, S. Glesner
Pub Date: 2010-09-12 | DOI: 10.1109/ICSM.2010.5609653
Many software faults are triggered by unusual combinations of input values and can be detected using pairwise test sets that cover each pair of input values. The generation of pairwise test sets of minimal size is an NP-complete problem, which implies that many algorithms are either expensive or based on a random process. In this paper we present a deterministic algorithm that exploits our observation that the pairwise testing problem can be modeled as a k-partite graph problem. We calculate the test set using well-investigated graph algorithms that take advantage of the properties of k-partite graphs. We present evaluation results that demonstrate the applicability of our algorithm and discuss possible improvements of our approach.
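The modeling can be made concrete with a toy example (a plain greedy baseline for illustration, not the authors' k-partite graph algorithm): each parameter is one partition of a complete k-partite graph, each value a vertex, and every cross-partition value pair an edge the test set must cover; a test picks one vertex per partition and covers all edges among the picked vertices.

import java.util.*;

public class PairwiseDemo {
    static final String[][] PARAMS = {
        {"linux", "windows"},            // OS
        {"firefox", "chrome", "ie"},     // browser
        {"ipv4", "ipv6"}                 // protocol
    };

    static Set<String> pairsOf(String[] t) {
        Set<String> pairs = new HashSet<>();
        for (int i = 0; i < t.length; i++)
            for (int j = i + 1; j < t.length; j++)
                pairs.add(i + "=" + t[i] + "|" + j + "=" + t[j]);
        return pairs;
    }

    public static void main(String[] args) {
        Set<String> uncovered = new HashSet<>();   // the 16 edges of the k-partite graph
        for (String a : PARAMS[0]) for (String b : PARAMS[1]) for (String c : PARAMS[2])
            uncovered.addAll(pairsOf(new String[] {a, b, c}));

        List<String[]> tests = new ArrayList<>();
        while (!uncovered.isEmpty()) {             // greedily pick the test covering most new pairs
            String[] best = null; int bestGain = -1;
            for (String a : PARAMS[0]) for (String b : PARAMS[1]) for (String c : PARAMS[2]) {
                String[] t = {a, b, c};
                Set<String> gain = pairsOf(t); gain.retainAll(uncovered);
                if (gain.size() > bestGain) { bestGain = gain.size(); best = t; }
            }
            uncovered.removeAll(pairsOf(best));
            tests.add(best);
        }
        tests.forEach(t -> System.out.println(Arrays.toString(t)));
        // a handful of tests instead of the 12 exhaustive combinations
    }
}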
{"title":"Pairwise test set calculation using k-partite graphs","authors":"Elke Salecker, S. Glesner","doi":"10.1109/ICSM.2010.5609653","DOIUrl":"https://doi.org/10.1109/ICSM.2010.5609653","url":null,"abstract":"Many software faults are triggered by unusual combinations of input values and can be detected using pairwise test sets that cover each pair of input values. The generation of pairwise test sets with a minimal size is an NP-complete problem which implies that many algorithms are either expensive or based on a random process. In this paper we present a deterministic algorithm that exploits our observation that the pairwise testing problem can be modeled as a k-partite graph problem. We calculate the test set using well investigated graph algorithms that take advantage of properties of k-partite graphs. We present evaluation results that prove the applicability of our algorithm and discuss possible improvement of our approach.","PeriodicalId":101801,"journal":{"name":"2010 IEEE International Conference on Software Maintenance","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128122540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Guided test generation for coverage criteria
Rahul Pandita, Tao Xie, N. Tillmann, J. D. Halleux
Pub Date: 2010-09-12 | DOI: 10.1109/ICSM.2010.5609565
Test coverage criteria including boundary-value and logical coverage, such as Modified Condition/Decision Coverage (MC/DC), have been increasingly used in safety-critical or mission-critical domains, complementing the more popular structural coverage criteria such as block or branch coverage. However, existing automated test-generation approaches often target block or branch coverage for test generation and selection, and therefore do not support testing against boundary-value or logical coverage. To address this issue, we propose a general approach that uses instrumentation to guide existing test-generation approaches to generate test inputs that achieve boundary-value and logical coverage for the program under test. Our preliminary evaluation shows that our approach effectively helps an approach based on Dynamic Symbolic Execution (DSE) to improve the boundary-value and logical coverage of generated test inputs. The evaluation results show a 30.5% maximum (23% average) increase in boundary-value coverage and a 26% maximum (21.5% average) increase in logical coverage of the subject programs when our approach is used, compared to when it is not. In addition, our approach improves the fault-detection capability of the generated test inputs by 12.5% maximum (11% average) compared to test inputs generated without it.
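The instrumentation idea can be sketched by hand on a single predicate (a simplified illustration; the paper defines systematic transformations): side-effect-free probe branches turn boundary values into explicit branch targets, so a branch-coverage-driven generator such as a DSE engine is steered toward them.

public class BoundaryProbes {
    static String original(int x) {
        if (x < 10) return "A"; else return "B";
    }

    static String instrumented(int x) {
        if (x == 9)  { /* probe: just inside the boundary  */ }
        if (x == 10) { /* probe: just outside the boundary */ }
        if (x < 10) return "A"; else return "B";
    }

    public static void main(String[] args) {
        // branch coverage of instrumented() now demands x == 9 and x == 10,
        // while the observable behavior is unchanged:
        System.out.println(original(9).equals(instrumented(9)));   // true
        System.out.println(original(10).equals(instrumented(10))); // true
    }
}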
{"title":"Guided test generation for coverage criteria","authors":"Rahul Pandita, Tao Xie, N. Tillmann, J. D. Halleux","doi":"10.1109/ICSM.2010.5609565","DOIUrl":"https://doi.org/10.1109/ICSM.2010.5609565","url":null,"abstract":"Test coverage criteria including boundary-value and logical coverage such as Modified Condition/Decision Coverage (MC/DC) have been increasingly used in safety-critical or mission-critical domains, complementing those more popularly used structural coverage criteria such as block or branch coverage. However, existing automated test-generation approaches often target at block or branch coverage for test generation and selection, and therefore do not support testing against boundary-value coverage or logical coverage. To address this issue, we propose a general approach that uses instrumentation to guide existing test-generation approaches to generate test inputs that achieve boundary-value and logical coverage for the program under test. Our preliminary evaluation shows that our approach effectively helps an approach based on Dynamic Symbolic Execution (DSE) to improve boundary-value and logical coverage of generated test inputs. The evaluation results show 30.5% maximum (23% average) increase in boundary-value coverage and 26% maximum (21.5% average) increase in logical coverage of the subject programs under test using our approach over without using our approach. In addition, our approach improves the fault-detection capability of generated test inputs by 12.5% maximum (11% average) compared to the test inputs generated without using our approach.","PeriodicalId":101801,"journal":{"name":"2010 IEEE International Conference on Software Maintenance","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133051207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Eclipse plug-in for the detection of design pattern instances through static and dynamic analysis
A. D. Lucia, V. Deufemia, C. Gravino, M. Risi
Pub Date: 2010-09-12 | DOI: 10.1109/ICSM.2010.5609707
The extraction of design pattern information from software systems can provide software engineers with considerable insight into the software structure and its internal characteristics. In this demonstration we present ePAD, an Eclipse plug-in for recovering design pattern instances from object-oriented source code. The tool recovers design pattern instances through a structural analysis performed on a data model extracted from the source code, and a behavioral analysis performed by instrumenting and monitoring the software system. ePAD is fully configurable, allowing software engineers to customize the design pattern recovery rules and the layout used for the visualization of the recovered instances.
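A single structural rule can be sketched with plain reflection (my simplification; ePAD works on a model extracted from source code and lets engineers customize the rules):

import java.lang.reflect.*;

public class SingletonRule {
    static boolean looksLikeSingleton(Class<?> c) {
        boolean allCtorsPrivate = true;
        for (Constructor<?> k : c.getDeclaredConstructors())
            if (!Modifier.isPrivate(k.getModifiers())) allCtorsPrivate = false;
        boolean staticSelfField = false;            // static field of the class' own type
        for (Field f : c.getDeclaredFields())
            if (Modifier.isStatic(f.getModifiers()) && f.getType() == c) staticSelfField = true;
        boolean staticAccessor = false;             // static method returning that type
        for (Method m : c.getDeclaredMethods())
            if (Modifier.isStatic(m.getModifiers()) && m.getReturnType() == c) staticAccessor = true;
        return allCtorsPrivate && staticSelfField && staticAccessor;
    }

    public static void main(String[] args) {
        System.out.println(looksLikeSingleton(Runtime.class)); // true on typical JDKs
        System.out.println(looksLikeSingleton(String.class));  // false
    }
}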
{"title":"An Eclipse plug-in for the detection of design pattern instances through static and dynamic analysis","authors":"A. D. Lucia, V. Deufemia, C. Gravino, M. Risi","doi":"10.1109/ICSM.2010.5609707","DOIUrl":"https://doi.org/10.1109/ICSM.2010.5609707","url":null,"abstract":"The extraction of design pattern information from software systems can provide conspicuous insight to software engineers on the software structure and its internal characteristics. In this demonstration we present ePAD, an Eclipse plug-in for recovering design pattern instances from object-oriented source code. The tool is able to recover design pattern instances through a structural analysis performed on a data model extracted from source code, and a behavioral analysis performed through the instrumentation and the monitoring of the software system. ePAD is fully configurable since it allows software engineers to customize the design pattern recovery rules and the layout used for the visualization of the recovered instances.","PeriodicalId":101801,"journal":{"name":"2010 IEEE International Conference on Software Maintenance","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131234822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An algorithmic debugger for Java
David Insa, Josep Silva
Pub Date: 2010-09-12 | DOI: 10.1109/ICSM.2010.5609661
This work presents DDJ, an algorithmic debugger for Java. The main advantage of DDJ over previous algorithmic debuggers is its scalability. DDJ has a new architecture based on the use of cache memories that allows it to scale in both time and memory. In addition, it includes new techniques that allow the debugger to start the debugging session even before the execution tree has been completely produced. We present the new architecture and describe the main features of this debugger together with a usage scenario.
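The classic algorithmic-debugging loop on which such tools build can be sketched in a few lines (DDJ's actual contribution, the cache-based incremental construction of the execution tree, is not reproduced here; the node contents and the oracle are illustrative):

import java.util.*;

class CallNode {
    final String call;
    final List<CallNode> children = new ArrayList<>();
    CallNode(String call) { this.call = call; }
}

public class AlgorithmicDebugger {
    // answers "is the result of this call correct?": interactive in a real
    // debugger, hardcoded here to flag the node computing fib(3) wrongly
    static boolean oracle(CallNode n) { return !n.call.equals("fib(3) = 1"); }

    // a node whose result is wrong while all its children are right
    // contains the bug in its own code
    static CallNode findBuggy(CallNode wrong) {
        for (CallNode c : wrong.children)
            if (!oracle(c)) return findBuggy(c);
        return wrong;
    }

    public static void main(String[] args) {
        CallNode root = new CallNode("fib(4) = 2");   // wrong: should be 3
        CallNode a = new CallNode("fib(3) = 1");      // wrong: should be 2
        CallNode b = new CallNode("fib(2) = 1");      // right
        root.children.addAll(List.of(a, b));
        a.children.add(new CallNode("fib(2) = 1"));
        a.children.add(new CallNode("fib(1) = 1"));
        System.out.println("buggy node: " + findBuggy(root).call); // fib(3) = 1
    }
}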
{"title":"An algorithmic debugger for Java","authors":"David Insa, Josep Silva","doi":"10.1109/ICSM.2010.5609661","DOIUrl":"https://doi.org/10.1109/ICSM.2010.5609661","url":null,"abstract":"This work presents DDJ, an algorithmic debugger for Java. The main advantage of DDJ with respect to previous algorithmic debuggers is its scalability. DDJ has a new architecture based on the use of cache memories that allows it to scale both in time and memory. In addition, it includes new techniques that allow the debugger to start the debugging session even before the execution tree has been produced. We present the new architecture, and describe the main features of this debugger together with a usage scenario.","PeriodicalId":101801,"journal":{"name":"2010 IEEE International Conference on Software Maintenance","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132060169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}