Using live sequence charts for hardware protocol specification and compliance verification
Annette Bunker, G. Gopalakrishnan
Pub Date: 2001-12-07 | DOI: 10.1109/HLDVT.2001.972814
Abstract: Interface standard specification documents are notoriously difficult to read and interpret consistently. The advent of the system-on-chip design paradigm compounds the problem, since multiple vendors must interpret the same standard in the same way. Monitors, while popular for formal and semiformal verification, do not offer a readable, high-level description. We propose using Live Sequence Charts to specify hardware standards, with a recent Virtual Sockets Interface Alliance standard as a running example.
Proving sequential consistency by model checking
T. Braun, A. Condon, A. Hu, Kai S. Juse, Marius Laza, Michael Leslie, Rita Sharma
Pub Date: 2001-12-07 | DOI: 10.1109/HLDVT.2001.972815
Abstract: Sequential consistency is a multiprocessor memory model of both practical and theoretical importance. Unfortunately, the general problem of verifying that a finite-state protocol implements sequential consistency is undecidable, and in practice, validating that a real-world, finite-state protocol implements sequential consistency is very time-consuming and costly. In this work, we show that for memory protocols that occur in practice, a small amount of manual effort can reduce the problem of verifying sequential consistency into a verification task that can be discharged automatically via model checking. Furthermore, we present experimental results on a substantial, directory-based cache coherence protocol, which demonstrate the practicality of our approach.
A language formalism for verification of PowerPC™ custom memories using compositions of abstract specifications
J. Bhadra, Andrew K. Martin, J. Abraham, M. Abadir
Pub Date: 2001-12-07 | DOI: 10.1109/HLDVT.2001.972820
Abstract: We present a methodology in which the behavior of custom memories is abstracted by two artifacts: one for the interface and another for the contents. Memories with several ports give rise to several user-provided abstract specifications, which in turn can be converted to simulation models. We show that (i) a simulation model is an approximation of the corresponding abstract specification and (ii) the abstracted memory core can be composed with the un-abstracted surrounding logic using a simple theory of composition. We use this methodology to verify equivalence between register-transfer-level and transistor-level descriptions of custom memories.
Using cutwidth to improve symbolic simulation and Boolean satisfiability
Dong Wang, E. Clarke, Yunshan Zhu, J. Kukula
Pub Date: 2001-12-07 | DOI: 10.1109/HLDVT.2001.972824
Abstract: In this paper, we propose cutwidth-based heuristics to improve the efficiency of symbolic simulation and SAT algorithms, the underlying engines of many formal verification techniques. We present a new approach for computing variable orderings that reduce CNF/circuit cutwidth. We show that the circuit cutwidth and the peak number of live BDDs during symbolic simulation are equal; thus, using a cutwidth-reducing ordering to schedule the gates during symbolic simulation can significantly improve both runtime and memory requirements. It has been shown that the time complexity of SAT can be bounded exponentially in the formula cutwidth, and many practical circuits have cutwidth logarithmic in the size of the formula. We have developed cutwidth-based heuristics that in practice speed up existing SAT algorithms, especially on SAT instances with small cutwidth. We demonstrate the power of our approach on a number of standard benchmarks.
Constraints specification at higher levels of abstraction
F. Balarin, J. Burch, L. Lavagno, Yosinori Watanabe, R. Passerone, A. Sangiovanni-Vincentelli
Pub Date: 2001-12-07 | DOI: 10.1109/HLDVT.2001.972819
Abstract: We propose a formalism for expressing performance constraints at a high level of abstraction. The formalism allows design performance constraints to be specified even before all of the low-level details necessary to evaluate them are known. It rests on a solid mathematical foundation, which removes any ambiguity in its interpretation, yet it allows quite simple and natural specification of many typical constraints. Once the design details are known, the satisfaction of constraints can be checked either by simulation or by formal techniques such as theorem proving and, in some cases, automatic model checking.
Estimating the relative single stuck-at fault coverage of test sets for a combinational logic block from its functional description
I. Pomeranz, S. Reddy
Pub Date: 2001-12-07 | DOI: 10.1109/HLDVT.2001.972804
Abstract: When the gate-level description of a logic block is unknown, it may become necessary to estimate the gate-level stuck-at fault coverage of a test set for the block by using a fault coverage metric that does not require simulation of gate-level faults. We propose such a metric based on stuck-at faults on the primary inputs of the block. We show that the proposed metric is accurate in predicting the relative gate-level stuck-at fault coverage of different test sets.
Integrating Perl, Tcl and C++ into simulation-based ASIC verification environments
M. D. McKinney
Pub Date: 2001-12-07 | DOI: 10.1109/HLDVT.2001.972802
Abstract: As ASIC designs become more complex, the complexity of their verification environments increases dramatically as well. However, while system-on-chip methodologies and thought processes have been widely accepted and applied to HDL design, no comparably strong process has taken hold for verification environments. That is, the HDL of an ASIC design can be divided, even sub-divided, into understandable, reasonably sized components whose behavior can be comprehended in a reasonable amount of time. Any verification environment created or generated for these design sub-blocks, however, remains highly complex, whether written in an HDL or in any of the various verification or scripting languages now available. This paper addresses issues faced and lessons learned by an ASIC design team whose product is a highly complex SOC-based design. The team's goal was to integrate C++, Tcl and Perl into a coherent, highly intelligent and usable verification environment for the ASIC. The effort was highly successful (although there were some less encouraging moments along the way), and the resulting simulation environment is now in use with acceptable results.
Experience with term level modeling and verification of the M*CORE™ microprocessor core
Shuvendu K. Lahiri, C. Pixley, Ken Albin
Pub Date: 2001-12-07 | DOI: 10.1109/HLDVT.2001.972816
Abstract: The paper describes term-level modeling and verification of an industrial microprocessor, M*CORE™, a limited dual-issue, superscalar processor with an instruction prefetching mechanism, a deep pipeline, multicycle functional units, speculation and interlocks. Term-level modeling uses terms, uninterpreted functions and predicates to abstract the datapath complexity of the microprocessor. The verification of the control path is carried out almost mechanically with the aid of CMU-EVC, an extremely efficient decision procedure based on the Logic of Positive Equality with Uninterpreted Functions (PEUF). The verification effort resulted in the detection of a couple of non-trivial bugs in the microarchitecture during the design-exploration phase. The paper demonstrates the effectiveness of CMU-EVC for automated verification of real-life microprocessor designs and also points out some of the challenges and future work that need to be addressed in term-level modeling and verification of microprocessors using CMU-EVC.
Automatic validation of pipeline specifications
P. Mishra, N. Dutt, A. Nicolau
Pub Date: 2001-12-07 | DOI: 10.1109/HLDVT.2001.972800
Abstract: Recent approaches to language-driven Design Space Exploration (DSE) use Architectural Description Languages (ADLs) to capture the processor architecture, automatically generate a software toolkit (including compiler, simulator, and assembler) for that processor, and provide feedback to the designer on the quality of the architecture. It is important to verify the ADL description of the processor to ensure the correctness of the software toolkit. In this paper we present an automatic validation framework driven by an ADL, together with algorithms for automatic validation of the ADL specification of processor pipelines. We applied our methodology to several realistic processor cores to demonstrate the usefulness of our approach.
RTL functional verification using excitation and observation coverage
Byeong Min, G. Choi
Pub Date: 2001-12-07 | DOI: 10.1109/HLDVT.2001.972808
Abstract: Code-level coverage is often used to measure RTL verification progress. However, simple code-level coverage estimates the verification result inaccurately because it considers only the excitation of functional blocks. A coverage measure that considers additional verification qualities, such as condition checking or observation, can significantly improve verification accuracy. However, identifying a design error becomes increasingly difficult as design complexity increases. This paper presents heuristic approaches that increase the chance of detecting obvious-but-easily-missed design errors by allowing a designer or verification engineer to define additional condition states to be checked. The verification approach is implemented using the Verilog Programming Language Interface (PLI), and several benchmark circuits are analyzed. The results indicate a high correlation between the actual error (design mutant) detection rate and the proposed coverage measure. The proposed coverage enhances verification performance with less user interaction, fast coverage calculation, and low system overhead.