Pub Date: 2004-11-10 | DOI: 10.1109/HLDVT.2004.1431227
S. Shamshiri, H. Esmaeilzadeh, Z. Navabi
TIS (S. Shamshiri et al., 2004) is an instruction-level methodology for CPU core self-testing that augments the instruction set of a CPU with test instructions. Since a test instruction behaves exactly like a NOP, NOP instructions can be replaced with test instructions, so online testing incurs no performance penalty. TIS tests different parts of the CPU and detects stuck-at faults, and the method can be employed in offline and online testing of all kinds of processors. A hardware-oriented implementation of TIS was proposed previously (S. Shamshiri et al., 2004) that tests only the combinational units of the processor. The contributions of this paper are, first, a software-based approach that reduces the hardware overhead to a reasonable size and, second, testing of the sequential parts of the processor in addition to the combinational parts. Both the hardware- and software-oriented approaches are implemented on a pipelined CPU core and their area overheads are compared. To demonstrate the appropriateness of the TIS test technique, several programs are executed and fault coverage results are presented.
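The NOP-replacement idea behind TIS can be sketched as follows; the mnemonics and the pool of test instructions are illustrative assumptions, not the paper's actual encoding:

```python
# Sketch of the TIS idea: NOP slots in an instruction stream are replaced by
# test instructions that exercise CPU units without changing program behavior.
# Opcode names (NOP, TEST_ALU, ...) are hypothetical placeholders.

TEST_INSTRUCTIONS = ["TEST_ALU", "TEST_SHIFT", "TEST_MUL"]  # round-robin pool

def insert_test_instructions(program):
    """Replace each NOP with the next test instruction, cycling through units."""
    patched, idx = [], 0
    for instr in program:
        if instr == "NOP":
            patched.append(TEST_INSTRUCTIONS[idx % len(TEST_INSTRUCTIONS)])
            idx += 1
        else:
            patched.append(instr)
    return patched

program = ["ADD r1,r2,r3", "NOP", "LOAD r4,0(r1)", "NOP", "NOP"]
print(insert_test_instructions(program))
```

Because every replaced slot was a NOP, the patched program computes the same results in the same number of cycles, which is what makes the online testing free of performance penalty.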
Title: Instruction level test methodology for CPU core software-based self-testing
Published in: Proceedings. Ninth IEEE International High-Level Design Validation and Test Workshop (IEEE Cat. No.04EX940)
Pub Date: 2004-11-10 | DOI: 10.1109/HLDVT.2004.1431221
W. Hung, N. Narasimhan
We present an approach that makes reference-model-based formal verification both complete and practical in an industrial setting. This paper describes a novel way to conduct this exercise by seamlessly integrating formal equivalence verification (FEV) techniques within a verification flow suited to formal property verification (FPV). This enables us to take full advantage of the rich expressive power of temporal specification languages and to guide the FEV tools so as to carry reference model verification to an extent that was never attempted before. We have successfully applied our approach to challenging verification problems at Intel®.
Title: Reference model based RTL verification: an integrated approach
Pub Date: 2004-11-10 | DOI: 10.1109/HLDVT.2004.1431224
A. Habibi, S. Tahar
In this paper, we present an approach to efficiently verify assertions that are added on top of the SystemC library and based on the Property Specification Language (PSL). To improve assertion coverage, we also propose an approach based on both static code analysis and genetic algorithms. Static code analysis generates a dependency relation between inputs and assertion parameters and defines the ranges of inputs affecting the assertion. The genetic algorithm optimizes test generation to cover the assertion more efficiently. Experimental results illustrate the efficiency of our approach compared to random simulation.
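The genetic-algorithm side of this loop can be illustrated with a toy stand-in: a GA evolves input stimuli toward a value that activates an assertion. The "design," the activating value, and all parameters below are assumptions for illustration, not the authors' benchmarks.

```python
import random

# Toy illustration of steering stimulus generation toward an assertion with a
# genetic algorithm. Fitness = number of input bits matching the (hypothetical)
# value 0xAB for which the assertion is exercised.

TARGET = 0xAB  # hypothetical activating input value

def fitness(x):
    """More matching bits with the activating value => closer to the assertion."""
    return 8 - bin((x ^ TARGET) & 0xFF).count("1")

def evolve(pop_size=20, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [rng.randrange(256) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            mask = rng.randrange(256)            # uniform crossover
            child = (a & mask) | (b & ~mask & 0xFF)
            if rng.random() < 0.3:               # single-bit mutation
                child ^= 1 << rng.randrange(8)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(hex(best), fitness(best))
```

In the paper's setting the fitness would come from the static dependency analysis (how close assertion parameters are to the triggering ranges) rather than from a known target value.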
Title: Towards an efficient assertion based verification of SystemC designs
Pub Date: 2004-11-10 | DOI: 10.1109/HLDVT.2004.1431261
Jia Yu, Wei Wu, X. Chen, H. Hsieh, Jun Yang, F. Balarin
Network processors (NPUs) have emerged as successful platforms for providing both high performance and flexibility in building powerful routers. With the scaling of technology and growing requirements on performance and functionality, power dissipation is becoming one of the major design considerations in NPU development. In this paper, we present an assertion-based methodology for system-level power/performance analysis of network processor designs, which can help designers choose the right architectural features and low-power techniques. We write power and performance assertions based on logic of constraints. Trace checkers and simulation monitors are automatically generated to analyze the power and performance characteristics of the network processor model. Furthermore, we apply a low-power technique, dynamic voltage scaling (DVS), to the network processor model and explore its pros and cons with the assertion-based analysis technique. We demonstrate that the assertion-based methodology is useful and effective for system-level power/performance analysis.
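A trace checker in the spirit of this methodology evaluates an assertion over a simulation trace after the fact. The sketch below checks a hypothetical constraint "energy spent per packet stays within a budget"; the trace format and the numeric budget are illustrative assumptions, not the paper's logic-of-constraints syntax.

```python
# Minimal trace-checker sketch: bound the energy consumed per processed packet.
# Each trace event is (power_this_cycle, packet_done); packet_done marks the
# cycle on which a packet finishes.

def check_energy_per_packet(trace, budget):
    """Return True iff every packet's accumulated energy stays within budget."""
    energy = 0.0
    for power, packet_done in trace:
        energy += power
        if packet_done:
            if energy > budget:
                return False   # assertion violated for this packet
            energy = 0.0       # reset accumulator for the next packet
    return True

trace = [(1.2, False), (0.8, False), (1.5, True),   # packet 1: 3.5 units
         (2.0, False), (2.9, True)]                 # packet 2: 4.9 units
print(check_energy_per_packet(trace, budget=5.0))
```

A simulation monitor would evaluate the same condition incrementally during simulation instead of on a recorded trace.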
Title: Assertion-based power/performance analysis of network processor architectures
Pub Date: 2004-11-10 | DOI: 10.1109/HLDVT.2004.1431246
T. Margaria, Oliver Niese, Harald Raffelt, B. Steffen
We present the effects of using an efficient algorithm for behavior-based model synthesis which is specifically tailored to reactive (legacy) system behaviors. The conceptual backbone is the classical automata learning procedure L*, which we adapt to the considered application profile. The resulting learning procedure L*Mealy, which directly synthesizes generalized Mealy automata from behavioral observations gathered via an automated test environment, drastically outperforms the classical learning algorithm for deterministic finite automata. It thus marks a milestone towards opening industrial legacy systems to model-based test suite enhancement, test coverage analysis, and online testing.
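The key difference from classical L* is the query type: instead of accept/reject membership queries, the teacher answers output queries, returning the output word the system emits for an input word. The toy Mealy machine below is purely illustrative:

```python
# Sketch of the output query underlying L*-style Mealy learning. The learner's
# observation table is filled with these richer output words, so rows are
# distinguished much faster than with binary accept/reject answers.

MEALY = {  # (state, input) -> (next_state, output); a hypothetical toy system
    ("s0", "a"): ("s1", "x"),
    ("s0", "b"): ("s0", "y"),
    ("s1", "a"): ("s0", "y"),
    ("s1", "b"): ("s1", "x"),
}

def output_query(word, start="s0"):
    """Run the input word through the machine and collect the output word."""
    state, out = start, []
    for sym in word:
        state, o = MEALY[(state, sym)]
        out.append(o)
    return "".join(out)

print(output_query("aab"))
```

In a legacy-system setting, `output_query` would be implemented by driving the real system through the automated test environment and recording its reactions.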
Title: Efficient test-based model generation for legacy reactive systems
Pub Date: 2004-11-10 | DOI: 10.1109/HLDVT.2004.1431231
M. Velev
The formal verification of pipelined processors with load-value prediction is studied. The formal verification is done by abstractions with the logic of equality with uninterpreted functions and memories (EUFM), using an automatic tool flow. Applying special abstractions in previous work resulted in EUFM correctness formulas where most of the terms (abstract word-level values) appear only in positive equations (equality comparisons) or as arguments of uninterpreted functions and uninterpreted predicates, allowing us to treat such terms as distinct constants - a property we call positive equality. That property resulted in orders-of-magnitude speedup. However, the mechanism for correcting load-value mispredictions introduces both positive and negated equations between the actual and predicted load values, thus significantly reducing the potential for exploiting positive equality. The contributions of the paper are: 1) modeling and formal verification of a pipelined processor with load-value prediction and a fully implemented mechanism for correcting load-value mispredictions, and comparison with the formal verification of a variant of the design where the load values are not predicted, such that the data hazards are avoided by stalling the dependent instruction; and 2) a way to abstract the mechanism for detecting load-value mispredictions, thus allowing the use of positive equality, at the cost of enriching the specification processor with the abstracted mechanism for detecting load-value mispredictions.
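The positive-equality observation can be made concrete with a small classification pass: terms that occur only in positive equations may be treated as distinct constants, while terms touched by a negated equation still require a case split. The formula representation below (equations as triples) is an assumption for illustration, not the EUFM tool flow's data structure.

```python
# Illustration of the positive-equality classification. A load-value
# misprediction check is exactly the kind of negated equation (actual != predicted)
# that demotes terms from the "treat as distinct constants" set.

# Equations from a hypothetical EUFM correctness formula: (lhs, rhs, negated?).
equations = [
    ("t1", "t2", False),   # positive equation: t1 = t2
    ("t3", "t4", True),    # negated equation:  t3 != t4 (misprediction check)
    ("t1", "t5", False),
]

def split_terms(eqs):
    """p-terms occur only in positive equations; g-terms occur in some negation."""
    g_terms = {t for lhs, rhs, neg in eqs if neg for t in (lhs, rhs)}
    all_terms = {t for lhs, rhs, _ in eqs for t in (lhs, rhs)}
    return all_terms - g_terms, g_terms

p_terms, g_terms = split_terms(equations)
print(sorted(p_terms), sorted(g_terms))
```

The paper's second contribution amounts to abstracting the misprediction-detection mechanism so that such negated equations disappear from the formula and the affected terms move back into the p-term set.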
Title: Formal verification of pipelined processors with load-value prediction
Pub Date: 2004-11-10 | DOI: 10.1109/HLDVT.2004.1431234
Tao Feng, Li-C. Wang, K. Cheng, Andy Lin
In this paper, we propose a symbolic simulation method in which Boolean functions can be efficiently manipulated through a 2-domain partitioned OBDD data structure. The functional partition is applied based on the key decision points in a circuit. We demonstrate that key decision points in an RTL model can be extracted automatically to facilitate verification at the gate level. The experiments show that the decision points can help to significantly reduce the OBDD size in both RTL and gate-level circuits, solving problems that could not be solved with a monolithic OBDD data structure. The performance of the 2-domain partitioned OBDD approach is shown through the verification of several benchmark circuits.
Title: On using a 2-domain partitioned OBDD data structure in verification
Pub Date: 2004-11-10 | DOI: 10.1109/HLDVT.2004.1431241
Markus Braun, S. Fine, A. Ziv
Coverage directed test generation (CDG) is a technique for providing feedback from the coverage domain back to a generator, which produces new stimuli for the tested design. Recent work showed that CDG, implemented using Bayesian networks, can improve the efficiency of the verification process and reduce the human interaction it requires, compared to directed random stimuli. This paper discusses two methods that improve the efficiency of the CDG process. In the first method, additional data collected during simulation is used to "fine tune" the parameters of the Bayesian network model, leading to better directives for the test generator. Clustering techniques enhance the efficiency of the CDG process by focusing on sets of non-covered events instead of one event at a time; the second method improves upon previous results by providing a technique to find the number of clusters to be used by the clustering algorithm. Applying these methods to a real-world design shows improvement in performance over previously published data.
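The abstract leaves the cluster-count technique unspecified; as a generic stand-in, the sketch below groups non-covered events (encoded as binary attribute vectors) greedily by Hamming distance, so the number of clusters emerges from a similarity threshold instead of being fixed in advance. The event encoding and threshold are illustrative assumptions.

```python
# Grouping non-covered coverage events so the CDG loop can target a whole
# cluster with one set of generator directives, rather than one event at a time.

def hamming(u, v):
    """Count attribute positions where two event vectors differ."""
    return sum(a != b for a, b in zip(u, v))

def cluster_events(events, max_dist):
    """Greedy clustering: join the first cluster whose seed event is near."""
    clusters = []                      # each cluster is a list; seed = first event
    for e in events:
        for c in clusters:
            if hamming(c[0], e) <= max_dist:
                c.append(e)
                break
        else:
            clusters.append([e])       # no nearby seed: start a new cluster
    return clusters

uncovered = [(1, 0, 0, 1), (1, 0, 1, 1), (0, 1, 1, 0), (0, 1, 1, 1), (1, 0, 0, 0)]
groups = cluster_events(uncovered, max_dist=1)
print(len(groups))
```

In the Bayesian-network setting, each cluster would then be translated into one set of directives intended to hit all of its events at once.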
Title: Enhancing the efficiency of Bayesian network based coverage directed test generation
Pub Date: 2004-11-10 | DOI: 10.1109/HLDVT.2004.1431229
Jason T. Higgins, M. Aagaard
This paper describes a technique that automates the specification and verification of structural-hazard and datapath correctness properties for pipelined circuits. The technique is based upon a template for pipeline stages, a control-circuit cell library, a decomposition of structural hazard and datapath correctness into a collection of simple properties, and a prototype design tool that generates verification scripts for use by external tools. Our case studies include scalar and superscalar implementations of a 32-bit OpenRISC integer microprocessor.
Title: Simplifying design and verification for structural hazards and datapaths in pipelined circuits
Pub Date: 2004-11-10 | DOI: 10.1109/HLDVT.2004.1431240
F. Fummi, C. Marconcini, G. Pravadelli
The paper presents a methodology for addressing hard-to-detect faults when a high-level ATPG is applied to verify functional descriptions of sequential circuits. A particular kind of extended finite state machine (EFSM) is adopted to improve the detectability of such faults.
Title: Functional verification based on the EFSM model