Simulation strategy after model checking: experience in industrial SOC design
Pub Date: 2000-11-08 | DOI: 10.1109/HLDVT.2000.889563
Hoon Choi, Byeong-Whee Yun, Yun-Tae Lee
Many works have reported the success of model checking in finding bugs that are not detected by simulation. In this paper, by contrast, we show bugs that can escape model checking, and present a simulation strategy and speed-up techniques to detect them. The main focus of this paper is to clearly show the importance and role of simulation as a complement to model checking.
{"title":"Simulation strategy after model checking: experience in industrial SOC design","authors":"Hoon Choi, Byeong-Whee Yun, Yun-Tae Lee","doi":"10.1109/HLDVT.2000.889563","DOIUrl":"https://doi.org/10.1109/HLDVT.2000.889563","url":null,"abstract":"There have been many works reporting the success of model checking in finding the bugs that are not detected by the simulation. On the contrary, in this paper, we show the bugs that can escape from the model checking, and present the simulation strategy and speed up techniques to detect those bugs. The main focus of this paper is to show clearly the importance and the role of a simulation as a complement to the model checking.","PeriodicalId":113229,"journal":{"name":"Proceedings IEEE International High-Level Design Validation and Test Workshop (Cat. No.PR00786)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128321664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Hardware/software co-debugging for reconfigurable computing
Pub Date: 2000-11-08 | DOI: 10.1109/HLDVT.2000.889560
K. Tomko, A. Tiwari
Application development environments for reconfigurable computing are the topic of many research and development projects, yet few comprehensive debugging tools have been provided. In this paper we describe a debugging environment for use with FPGA-accelerated applications which supports co-validation and co-testing of the software and hardware portions of the application. Our co-debugging environment supports in-situ debugging, utilizing the readback capabilities of FPGA chips for fast recreation and isolation of a fault. We show that this environment has the potential to reduce application debug times from hours to just a few minutes.
{"title":"Hardware/software co-debugging for reconfigurable computing","authors":"K. Tomko, A. Tiwari","doi":"10.1109/HLDVT.2000.889560","DOIUrl":"https://doi.org/10.1109/HLDVT.2000.889560","url":null,"abstract":"Application development environments for reconfigurable computing are the topic of many research and development projects yet few comprehensive debugging tools have been provided. In this paper we describe a debugging environmental use with FPGA accelerated applications which supports co-validation and co-testing of the software and hardware portions of the application. Our Co-debugging environment supports in-situ debugging utilizing the readback capabilities of FPGA chips for fast recreation and isolation of a fault. We show that this environment has the potential to reduce application debug times from hours to just a few minutes.","PeriodicalId":113229,"journal":{"name":"Proceedings IEEE International High-Level Design Validation and Test Workshop (Cat. No.PR00786)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133306166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Behavioral-level test vector generation for system-on-chip designs
Pub Date: 2000-11-08 | DOI: 10.1109/HLDVT.2000.889554
M. Lajolo, M. Rebaudengo, M. Reorda, M. Violante, L. Lavagno
Co-design tools represent an effective solution for reducing costs and shortening time-to-market when system-on-chip design is considered. In a top-down design flow, designers would greatly benefit from the availability of tools able to automatically generate test sequences that can be reused during the following design steps, from the system-level specification to the gate-level description. This would significantly increase the chance of identifying testability problems early in the design flow, thus reducing costs and increasing final product quality. The paper proposes an approach for integrating the ability to generate test sequences into an existing co-design tool. Preliminary experimental results are reported, assessing the feasibility of the proposed approach.
{"title":"Behavioral-level test vector generation for system-on-chip designs","authors":"M. Lajolo, M. Rebaudengo, M. Reorda, M. Violante, L. Lavagno","doi":"10.1109/HLDVT.2000.889554","DOIUrl":"https://doi.org/10.1109/HLDVT.2000.889554","url":null,"abstract":"Co-design tools represent an effective solution for reducing costs and shortening time-to-market, when system-on-chip design is considered. In a top-down design flow, designers would greatly benefit from the availability of tools able to automatically generate test sequences, which can be reused during the following design steps, from the system-level specification to the gate-level description. This would significantly increase the chance of identifying testability problems early in the design flow, thus reducing the costs and increasing the final product quality. The paper proposes an approach for integrating the ability to generate test sequences into an existing co-design tool. Preliminary experimental results are reported, assessing the feasibility of the proposed approach.","PeriodicalId":113229,"journal":{"name":"Proceedings IEEE International High-Level Design Validation and Test Workshop (Cat. No.PR00786)","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121531554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Formal operator testability methods for behavioral-level DFT using value ranges
Pub Date: 2000-11-08 | DOI: 10.1109/HLDVT.2000.889569
Sandhya Seshadri, M. Hsiao
The focus of this research is the testability analysis of the operators in a behavioral description prior to synthesis. The controllabilities of the inputs to an operator and the observabilities of its outputs are computed from the value ranges of the variables that serve as those inputs and outputs. The proposed technique uses a formal data-flow analysis instead of profiling or simulation to accurately pinpoint the hard-to-test operations in the design. Variable selection for testability enhancement of hard-to-test operations is based on the computed testability measures for all the involved operations in the behavioral description. Appropriate testability enhancements are then inserted for the hard-to-test operators to achieve significantly higher test coverage, while keeping the design's area-performance overhead to a minimum.
{"title":"Formal operator testability methods for behavioral-level DFT using value ranges","authors":"Sandhya Seshadri, M. Hsiao","doi":"10.1109/HLDVT.2000.889569","DOIUrl":"https://doi.org/10.1109/HLDVT.2000.889569","url":null,"abstract":"The focus of this research is on the testability analysis of the operators in the behavioral description prior to synthesis. The controllabilities of the inputs to an operator and the observabilities of the outputs of the operation are computed from the value ranges of the variables that serve as the inputs and outputs. The proposed technique uses a formal data flow analysis instead of profiling or simulation, to accurately pin-point the hard-to-test operations in the design. Variable selection for testability enhancement of hard-to-test operations is accomplished based on the computed testability measures for all the involved operations in the behavioral description. The insertion of appropriate testability enhancements is then performed for the hard-to-test operators to achieve significantly higher test coverages, while keeping the design area-performance overhead to a minimum.","PeriodicalId":113229,"journal":{"name":"Proceedings IEEE International High-Level Design Validation and Test Workshop (Cat. No.PR00786)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131196773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Code simulation concept for S/390 processors using an emulation system
Pub Date: 2000-11-08 | DOI: 10.1109/HLDVT.2000.889568
S. Koerner
An innovative simulation concept has been developed for the IBM S/390 system of the year 2000 in the area of microcode verification. The goal is to achieve a long-term improvement in the quality of the delivered microcode, detecting and solving the vast majority of code problems in simulation before the system is first powered on. The number of such problems has a major impact on the time needed during system integration to bring the system up from power-on to general availability. Within IBM, this is the first time that such a code simulation concept has been developed and implemented. One element of that concept is the use of a large emulation system for hardware/software co-verification.
{"title":"Code simulation concept for S/390 processors using an emulation system","authors":"S. Koerner","doi":"10.1109/HLDVT.2000.889568","DOIUrl":"https://doi.org/10.1109/HLDVT.2000.889568","url":null,"abstract":"An innovative simulation concept has been developed for the IBM S/390 system of the year 2000 in the area of microcode verification. The goal is to achieve a long-term improvement in the quality of the delivered microcode, detecting and solving the vast majority of code problems in simulation before the system is first powered on. The number of such problems has a major impact on the time needed during system integration to bring the system up from power on to general availability. Within IBM, this is the first time that much a code simulation concept has been developed and implemented. Our element of that concept is the usage of a large emulation system for hardware/software co-verification.","PeriodicalId":113229,"journal":{"name":"Proceedings IEEE International High-Level Design Validation and Test Workshop (Cat. No.PR00786)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126471952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

An approach to functional testing of VLIW architectures
Pub Date: 2000-11-08 | DOI: 10.1109/HLDVT.2000.889555
M. Beardo, F. Bruschi, Fabrizio Ferrandi, D. Sciuto
VLIW core processors are becoming more and more interesting for high-end embedded applications, in particular in the area of multimedia. Only a few approaches have been proposed for at-speed testing of microprocessors; moreover, the unique architectural peculiarities of VLIW processors have not yet been exploited. In this paper we propose a method aimed at the generation of functional tests made of valid instructions, and therefore applicable at speed, exploiting the features of pure VLIW architectures such as explicit instruction parallelism and the visibility of the functional units. Starting from an HDL description of the functional unit under test, the approach drives, by means of what we call projection over the instructions, an ATPG tool that generates test patterns made of valid instructions. Visibility of operation results is then achieved by exploiting the explicit instruction-level parallelism. Experiments on a VHDL model of a VLIW processor show that the generated patterns are effective in testing the processor at the gate level.
{"title":"An approach to functional testing of VLIW architectures","authors":"M. Beardo, F. Bruschi, Fabrizio Ferrandi, D. Sciuto","doi":"10.1109/HLDVT.2000.889555","DOIUrl":"https://doi.org/10.1109/HLDVT.2000.889555","url":null,"abstract":"VLIW core processors are becoming more and more interesting for high-end embedded applications, in particular in the area of multimedia. Only few approaches have been proposed to test at-speed microprocessors. Moreover, the unique architectural peculiarities of VLIW processors have not yet been exploited. In this paper we propose a method aimed at the generation of functional tests made of valid instructions, and then applicable at speed, exploiting the features of pure VLIW architectures like the explicit instruction parallelism and the functional units visibility. The approach, starting from an HDL description of the functional unit under test, drives, by means of what we called projection over the instructions, an ATPG tool generating test patterns made of valid instructions. Visibility of operations results is then achieved through the exploitation of the explicit instruction level parallelism. Experiments on a VHDL model of VLIW show that the generated patterns are effective to test the processor at gate-level.","PeriodicalId":113229,"journal":{"name":"Proceedings IEEE International High-Level Design Validation and Test Workshop (Cat. No.PR00786)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133703583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Use of constraint solving in order to generate test vectors for behavioral validation
Pub Date: 2000-11-08 | DOI: 10.1109/HLDVT.2000.889553
C. Paoli, M. Nivet, J. Santucci
Validation of VHDL descriptions during the early phases of microelectronic design is one of the most time-consuming design tasks. This paper presents a test vector generation method for behavioral VHDL designs. The method analyzes the control and dependence flow of a VHDL program. We use cyclomatic complexity, a software metric based on a graph associated with the control part of the software: the control flow graph (CFG). Significant control flow paths are selected using Poole's algorithm. Executing this set of paths covers each decision outcome of the VHDL program; any additional test path would be a linear combination of the basis paths already tested and is therefore considered redundant. By treating the selected paths as a group of constraints, test data are generated using constraint programming. These data form the test bench that tests the VHDL description.
{"title":"Use of constraint solving in order to generate test vectors for behavioral validation","authors":"C. Paoli, M. Nivet, J. Santucci","doi":"10.1109/HLDVT.2000.889553","DOIUrl":"https://doi.org/10.1109/HLDVT.2000.889553","url":null,"abstract":"Validation of VHDL descriptions at the early phases of the microelectronic design is one of the most time consuming task design. This paper presents a test vector generation method for behavioral VHDL design. This method analyzes control and dependence flow of VHDL program. We use the cyclomatic complexity, that is a software metric based on a graph associated with the control part of software: the control flow graph (CFG). Significant control flow paths are selected using a powerful algorithm: the Poole's algorithm. The execution of this set of paths satisfies the coverage of each decision outcome of the VHDL program. Any additional test path would be a linear combination of the basis paths already tested and therefore considered to be redundant. By considering the selected paths as a group of constraints, test data are generated and solved using constraint programming. These data form the test bench that test the VHDL description.","PeriodicalId":113229,"journal":{"name":"Proceedings IEEE International High-Level Design Validation and Test Workshop (Cat. No.PR00786)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115990420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Interface based hardware/software validation of a system-on-chip
Pub Date: 2000-11-08 | DOI: 10.1109/HLDVT.2000.889559
Debashis Panigrahi, Clark N. Taylor, S. Dey
The availability of reusable IP cores, increasing time-to-market pressures, the design productivity gap, and enabling deep-submicron technologies have led to core-based system-on-chip (SoC) design as a new paradigm in electronic system design. Validation of these complex hardware/software systems is the most time-consuming task in the design flow. In this paper, we focus on developing an efficient interface-based validation methodology for core-based SoC designs. In SoCs designed with pre-validated IP cores, the verification complexity can be significantly alleviated by concentrating on the integration of the cores in the system, rather than the complete SoC. We investigate typical interface problems that arise in integrating cores in an SoC and classify these problems into different categories. Based on this classification, we introduce an interface-based validation methodology. Finally, we demonstrate the effectiveness of the proposed methodology using an example image compression SoC that we are developing.
{"title":"Interface based hardware/software validation of a system-on-chip","authors":"Debashis Panigrahi, Clark N. Taylor, S. Dey","doi":"10.1109/HLDVT.2000.889559","DOIUrl":"https://doi.org/10.1109/HLDVT.2000.889559","url":null,"abstract":"The availability of reusable IP-cores, increasing time-to-market and design productivity gap, and enabling deep sub-micron technologies have led to core-based system-on-chip (SoC) design as a new paradigm in electronic system design. Validation of these complex hardware/software systems is the most time consuming task in the design flow. In this paper, we focus on developing an efficient interface-based validation methodology for core-based SoC designs. In SoCs designed with pre-validated IP cores, the verification complexity can be significantly alleviated by concentrating on the integration of the cores in the system, rather than the complete SoC. In this paper, we investigate typical interface problems that arise in integrating cores in an SoC, and classify these problems into different categories. Based on the classification of these interface problems, we introduce an interface-based validation methodology. Finally, we demonstrate the effectiveness of the proposed methodology using an example image compression SoC that we are developing.","PeriodicalId":113229,"journal":{"name":"Proceedings IEEE International High-Level Design Validation and Test Workshop (Cat. No.PR00786)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125322198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Abstraction techniques for verification of multiple tightly coupled counters, registers and comparators
Pub Date: 2000-11-08 | DOI: 10.1109/HLDVT.2000.889574
Yee-Wing Hsieh, S. Levitan
We present new non-deterministic finite state machine (NFSM) abstraction techniques for comparators, based on the comparison difference of the two operands (e.g., counters) instead of the comparison order. One of the major advantages of the comparison-difference abstractions is the ability to model the comparison of multiple tightly coupled counters. The abstraction techniques are integral to our semantic model abstraction methodology, where abstract models are generated based on semantic matching of behavioral VHDL models with known abstraction templates. Using NFSM models for counters, comparators, and registers, we have shown that our approach can yield many orders of magnitude (10^2 to 10^11) reductions in state-space size and substantial improvements in the performance of formal verification runs.
{"title":"Abstraction techniques for verification of multiple tightly coupled counters, registers and comparators","authors":"Yee-Wing Hsieh, S. Levitan","doi":"10.1109/HLDVT.2000.889574","DOIUrl":"https://doi.org/10.1109/HLDVT.2000.889574","url":null,"abstract":"We present new non-deterministic finite state machine (NFSM) abstraction techniques for comparators based on the comparison difference of the two operands (e.g., counters) instead of the comparison order. One of the major advantages of the comparison difference abstractions is the ability to model the comparison of multiple tightly coupled computers. The abstraction techniques are integral to our semantic model abstraction methodology, where abstract models are generated based on semantic matching of behavioral VHDL models with known abstraction templates. Using NFSM models for counters, comparators, and registers, we have shown our approach can yield many orders of magnitude (10/sup 2/-10/sup 11/) reductions in state space size and substantial improvements in performance of formal verification runs.","PeriodicalId":113229,"journal":{"name":"Proceedings IEEE International High-Level Design Validation and Test Workshop (Cat. No.PR00786)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125447343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

An approach to high-level synthesis system validation using formally verified transformations
Pub Date: 2000-11-08 | DOI: 10.1109/HLDVT.2000.889564
R. Radhakrishnan, Elena Teica, R. Vemuri
The complexity of advanced high-level synthesis algorithms can be attributed to design quality concerns. However, this complexity may lead to software errors in their implementations, which may adversely impact design correctness. Transformational synthesis is a synthesis methodology in which localized, behavior-preserving register transfer level (RTL) transformations are used to obtain a correct, constraint-satisfying RTL design. This paper presents the novel use of a set of such transformations to validate an existing non-transformational synthesis system by discovering and, to some extent, isolating software errors.
{"title":"An approach to high-level synthesis system validation using formally verified transformations","authors":"R. Radhakrishnan, Elena Teica, R. Vemuri","doi":"10.1109/HLDVT.2000.889564","DOIUrl":"https://doi.org/10.1109/HLDVT.2000.889564","url":null,"abstract":"Complexity of advanced high-level synthesis algorithms can be attributed to design quality concerns. However this complexity may lead to software errors in their implementations which may adversely impact design correctness. Transformational synthesis is a synthesis methodology where localized, behavior-preserving register transfer level (RTL) transformations are used to obtain a correct and constraint satisfying RTL design. This paper presents the novel use of a set of such transformations in validating an existing non-transformational synthesis system by discovering and to some extent isolating software errors.","PeriodicalId":113229,"journal":{"name":"Proceedings IEEE International High-Level Design Validation and Test Workshop (Cat. No.PR00786)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131438590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}