Pub Date: 1999-11-07 | DOI: 10.1109/ICCAD.1999.810694
I. Pomeranz, S. Reddy
Describes a method referred to as sequence counting to improve on the levels of compaction achievable by vector omission-based static compaction procedures. Such procedures are used to reduce the lengths of test sequences for synchronous sequential circuits without reducing the fault coverage. The unique feature of the proposed approach is that test vectors omitted from the test sequence can be reintroduced at a later time. Reintroducing vectors helps to reduce the compacted test sequence length beyond the length that can be achieved if vectors are omitted permanently. Experimental results are presented to demonstrate the levels of compaction achieved by the sequence counting approach.
{"title":"An approach for improving the levels of compaction achieved by vector omission","authors":"I. Pomeranz, S. Reddy","doi":"10.1109/ICCAD.1999.810694","DOIUrl":"https://doi.org/10.1109/ICCAD.1999.810694","url":null,"abstract":"Describes a method referred to as sequence counting to improve on the levels of compaction achievable by vector omission-based static compaction procedures. Such procedures are used to reduce the lengths of test sequences for synchronous sequential circuits without reducing the fault coverage. The unique feature of the proposed approach is that test vectors omitted from the test sequence can be reintroduced at a later time. Reintroducing vectors helps to reduce the compacted test sequence length beyond the length that can be achieved if vectors are omitted permanently. Experimental results are presented to demonstrate the levels of compaction achieved by the sequence counting approach.","PeriodicalId":6414,"journal":{"name":"1999 IEEE/ACM International Conference on Computer-Aided Design. Digest of Technical Papers (Cat. No.99CH37051)","volume":"42 1","pages":"463-466"},"PeriodicalIF":0.0,"publicationDate":"1999-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81171596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1999-11-07 | DOI: 10.1109/ICCAD.1999.810638
P. Tafertshofer, A. Ganz
In this paper we present new methods for fast justification and propagation in the implication graph (IG), which is the core data structure of our SAT-based implication engine. As the IG model represents all information on the implemented logic function as well as the topology of a circuit, the proposed techniques inherit all advantages of both general SAT-based and structure-based approaches to justification, propagation, and implication. These three fundamental Boolean problems are the main tasks performed during automatic test pattern generation (ATPG), so the proposed algorithms are incorporated into our ATPG tool TIP, which is built on top of the implication engine. Working exclusively in the IG, the complex functional operations of justification, propagation, and implication reduce to significantly simpler graph algorithms, and these are easily extended to exploit bit-parallel techniques. As the IG is generated automatically for arbitrary logics, the algorithms remain applicable regardless of the logic required, which allows various fault models to be processed using the same engine. The presented IG-based methods thus offer a complete and versatile framework for the rapid development of new ATPG tools that target emerging fault models such as crosstalk, delay, or bridging faults. TIP currently handles stuck-at as well as various delay fault models. Furthermore, the proposed methods are used within tools for Boolean equivalence checking, optimization of netlists, timing analysis, and retiming (reset-state computation).
{"title":"SAT based ATPG using fast justification and propagation in the implication graph","authors":"P. Tafertshofer, A. Ganz","doi":"10.1109/ICCAD.1999.810638","DOIUrl":"https://doi.org/10.1109/ICCAD.1999.810638","url":null,"abstract":"In this paper we present new methods for fast justification and propagation in the implication graph (IG) which is the core data structure of our SAT based implication engine. As the IG model represents all information on the implemented logic function as well as the topology of a circuit, the proposed techniques inherit all advantages of both general SAT based and structure based approaches to justification, propagation, and implication. These three fundamental Boolean problems are the main tasks to be performed during automatic test pattern generation (ATPG) such that the proposed algorithms are incorporated into our ATPG tool TIP which is built on top of the implication engine. Working exclusively in the IG, the complex functional operations of justification, propagation, and implication reduce to significantly simpler graph algorithms. They are easily extended to exploit bit-parallel techniques. As the IG is automatically generated for arbitrary logics the algorithms remain applicable independent of the required logic. This allows processing of various fault models using the same engine. That is, the presented IG based methods offer a complete and versatile framework for rapid development of new ATPG tools that target emerging fault models such as crosstalk, delay or bridging faults. TIP currently handles stuck-at as well as various delay fault models. Furthermore, the proposed methods are used within tools for Boolean equivalence checking, optimization of netlists, timing analysis or retiming (reset state computation).","PeriodicalId":6414,"journal":{"name":"1999 IEEE/ACM International Conference on Computer-Aided Design. Digest of Technical Papers (Cat. No.99CH37051)","volume":"23 1","pages":"139-146"},"PeriodicalIF":0.0,"publicationDate":"1999-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80778902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1999-11-07 | DOI: 10.5555/339492.340082
E. Charbon, I. Torunoglu
This paper addresses the copyright protection problem of integrated circuits designed with blocks that originate from multiple design sources. The process consists of two phases. First, a compact signature is generated independently from every block and made public. Using such signatures, a design can be decomposed into its original building blocks, regardless of multiple hierarchies. A map of all the blocks can then be built, allowing the original copyright dependencies to be reconstructed. The proposed methodology can be used by foundries to verify that designs submitted for fabrication contain blocks traceable to a legal source of intellectual property. The verification process is also useful to intellectual property providers and integrators, as it reduces the likelihood of infringement and thus ultimately minimizes the risk of litigation.
Title: "Copyright protection of designs based on multi source IPs" (1999 IEEE/ACM International Conference on Computer-Aided Design, pp. 591-595).
Pub Date: 1999-11-07 | DOI: 10.1109/ICCAD.1999.810671
D. Rhodes, W. Wolf
We describe the first co-design technique aimed at heterogeneous systems employing arbitrated communication. Arbitrated system design is especially difficult because communication scheduling is directly tied to task allocation. The method provides a complete co-design, i.e., generation of a hardware configuration along with an allocation and schedule for the execution of hard real-time, data-dependent tasks. By using an actual scheduling analysis in the inner co-design loop, the method is readily able to address realistic system effects, including various communication models, such as arbitration in PCI-based systems.
{"title":"Co-synthesis of heterogeneous multiprocessor systems using arbitrated communication","authors":"D. Rhodes, W. Wolf","doi":"10.1109/ICCAD.1999.810671","DOIUrl":"https://doi.org/10.1109/ICCAD.1999.810671","url":null,"abstract":"We describe the first co-design technique aimed at heterogeneous systems employing arbitrated communication. Arbitrated system design is especially difficult because communication scheduling is directly tied to task allocation. The method provides a complete co-design-i.e. generation of a hardware configuration along with an allocation and schedule for the execution of hard real-time data-dependent tasks. By using an actual scheduling analysis in the inner co-design loop, the method is readily able to address realistic system effects including various communication models like arbitration, as in PCI-based systems.","PeriodicalId":6414,"journal":{"name":"1999 IEEE/ACM International Conference on Computer-Aided Design. Digest of Technical Papers (Cat. No.99CH37051)","volume":"1 1","pages":"339-342"},"PeriodicalIF":0.0,"publicationDate":"1999-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83812938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1999-11-07 | DOI: 10.1109/ICCAD.1999.810673
F. Balarin
We propose a methodology for worst-case analysis of systems with discrete observable signals. The methodology can be used to verify different properties of systems such as power consumption, timing performance or resource utilization. We also propose an application of the methodology to timing analysis of embedded systems implemented on a single processor. The analysis provides a bound on the response time of such systems. It is typically very efficient, because it does not require a state space search.
{"title":"Worst-case analysis of discrete systems","authors":"F. Balarin","doi":"10.1109/ICCAD.1999.810673","DOIUrl":"https://doi.org/10.1109/ICCAD.1999.810673","url":null,"abstract":"We propose a methodology for worst-case analysis of systems with discrete observable signals. The methodology can be used to verify different properties of systems such as power consumption, timing performance or resource utilization. We also propose an application of the methodology to timing analysis of embedded systems implemented on a single processor. The analysis provides a bound on the response time of such systems. It is typically very efficient, because it does not require a state space search.","PeriodicalId":6414,"journal":{"name":"1999 IEEE/ACM International Conference on Computer-Aided Design. Digest of Technical Papers (Cat. No.99CH37051)","volume":"209 1","pages":"347-352"},"PeriodicalIF":0.0,"publicationDate":"1999-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74151658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1999-11-07 | DOI: 10.1109/ICCAD.1999.810706
Clayton B. McDonald, R. Bryant
We introduce a new method of verifying the timing of custom CMOS circuits. Due to the exponential number of patterns required, traditional simulation methods are unable to exhaustively verify a medium-sized modern logic block. Static analysis can handle much larger circuits but is not robust with respect to variations from standard circuit structures. Our approach applies symbolic simulation to analyze a circuit over all input combinations without these limitations. We present a prototype simulator (SirSim) and experimental results. We also discuss using SirSim to verify an industrial design which previously required a special-purpose verification methodology.
{"title":"Symbolic functional and timing verification of transistor-level circuits","authors":"Clayton B. McDonald, R. Bryant","doi":"10.1109/ICCAD.1999.810706","DOIUrl":"https://doi.org/10.1109/ICCAD.1999.810706","url":null,"abstract":"We introduce a new method of verifying the timing of custom CMOS circuits. Due to the exponential number of patterns required, traditional simulation methods are unable to exhaustively verify a medium-sized modern logic block. Static analysis can handle much larger circuits but is not robust with respect to variations from standard circuit structures. Our approach applies symbolic simulation to analyze a circuit over all input combinations without these limitations. We present a prototype simulator (SirSim) and experimental results. We also discuss using SirSim to verify an industrial design which previously required a special-purpose verification methodology.","PeriodicalId":6414,"journal":{"name":"1999 IEEE/ACM International Conference on Computer-Aided Design. Digest of Technical Papers (Cat. No.99CH37051)","volume":"8 1","pages":"526-530"},"PeriodicalIF":0.0,"publicationDate":"1999-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80708828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1999-11-07 | DOI: 10.1109/ICCAD.1999.810670
S. Jung, C. Myers
This paper presents a new method to synthesize timed asynchronous circuits directly from the specification without generating a state graph. The synthesis procedure begins with a deterministic graph specification with timing constraints. A timing analysis extracts the timed concurrency and timed causality relations between any two signal transitions. Then, a hazard-free implementation of the specification is synthesized by analyzing precedence graphs constructed from the timed concurrency and timed causality relations. The major result of this work is that the method does not suffer from the state explosion problem, achieves significant reductions in synthesis time, and generates circuits with nearly the same area as those produced by previous timed-circuit methods. In particular, this paper shows that a timed circuit that contains no hazards under the given timing constraints can be found by using the relations between signal transitions of the specification. Moreover, these relations can be found efficiently using a heuristic timing analysis algorithm. By allowing significantly larger designs to be synthesized, this work is a step towards the development of high-level synthesis tools for system-level asynchronous circuits.
{"title":"Direct synthesis of timed asynchronous circuits","authors":"S. Jung, C. Myers","doi":"10.1109/ICCAD.1999.810670","DOIUrl":"https://doi.org/10.1109/ICCAD.1999.810670","url":null,"abstract":"This paper presents a new method to synthesize timed asynchronous circuits directly from the specification without generating a state graph. The synthesis procedure begins with a deterministic graph specification with timing constraints. A timing analysis extracts the timed concurrency and timed causality relations between any two signal transitions. Then, a hazard-free implementation of the specification is synthesized by analyzing precedence graphs which are constructed by using the timed concurrency and timed causality relations. The major result of this work is that the method does not suffer from the state explosion problem, achieves significant reductions in synthesis time, and generates synthesized circuits that have nearly the same area as compared to previous timed circuit methods. In particular, this paper shows that a timed circuit-not containing circuit hazards under given timing constraints-can be found by using the relations between signal transitions of the specification. Moreover, the relations can be efficiently found using a heuristic timing analysis algorithm. By allowing significantly larger designs to be synthesized, this work is a step towards the development of high-level synthesis tools for system level asynchronous circuits.","PeriodicalId":6414,"journal":{"name":"1999 IEEE/ACM International Conference on Computer-Aided Design. Digest of Technical Papers (Cat. No.99CH37051)","volume":"305 1","pages":"332-337"},"PeriodicalIF":0.0,"publicationDate":"1999-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77424790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1999-11-07 | DOI: 10.1109/ICCAD.1999.810626
K. Muhammad, K. Roy
Presents novel design methodologies which can be used to dramatically reduce the complexity of parallel implementations of digital FIR filters; the approaches are also applicable to IIR filters. Two ideas are presented. First, we remove redundant computation by using a graph-theoretic framework in which we find the optimal re-ordering of computations for maximal computation sharing. Second, we present the novel approach of searching for a quantization which improves the computation sharing when the frequency-domain transfer function is allowed to deviate within given bounds. A simple search scheme is presented, and it is shown that, by appropriate perturbation of the filter coefficients, one can dramatically reduce the number of adders required in the filter implementation. Using these approaches, on average, less than one adder per coefficient is required, in contrast to a full-width multiplier. Hence, the presented methodologies are a useful complement to existing design approaches for high-performance and low-power digital filters for future mobile computing and communication systems.
{"title":"A novel design methodology for high performance and low power digital filters","authors":"K. Muhammad, K. Roy","doi":"10.1109/ICCAD.1999.810626","DOIUrl":"https://doi.org/10.1109/ICCAD.1999.810626","url":null,"abstract":"Presents novel design methodologies which can be used to dramatically reduce the complexity of parallel implementations of digital FIR filters. These approaches are also applicable to IIR filters. Two ideas are presented. First, we remove the redundant computation by using a graph-theoretic framework in which we find the optimal re-ordering of computations for maximal computation sharing. Second, we present the novel approach of searching for a quantization which improves the computation sharing when the frequency-domain transfer function is allowed to deviate within given bounds. A simple search scheme is presented and it is shown that, by appropriate perturbation of the filter coefficients, one can dramatically reduce the number of adders required in the filter implementation. Using these approaches, on an average, less than one adder per coefficient is required, in contrast to a full-width multiplier. Hence, the presented methodologies are a useful compliment to the existing design approaches of high-performance and low-power digital filters for future mobile computing and communication systems.","PeriodicalId":6414,"journal":{"name":"1999 IEEE/ACM International Conference on Computer-Aided Design. Digest of Technical Papers (Cat. No.99CH37051)","volume":"23 1","pages":"80-83"},"PeriodicalIF":0.0,"publicationDate":"1999-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74484763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1999-11-07 | DOI: 10.1109/ICCAD.1999.810620
Wolfgang Günther, R. Drechsler
Binary decision diagrams (BDDs) are the state-of-the-art data structure in VLSI CAD, but, due to their ordering restriction, only exponential-sized BDDs exist for many functions of practical relevance. Linear transformations (LTs) have been proposed as a new concept to minimize the size of BDDs, and it is known that, in some cases, even an exponential reduction can be obtained. In addition to a small representation, the efficient manipulation of a data structure is also important. In this paper, we present polynomial-time manipulation algorithms that can be used for linearly transformed BDDs (LT-BDDs) analogously to BDDs. For some operations, like synthesis algorithms based on ITE (if-then-else), it turns out that the techniques known from BDDs can be directly transferred, while for other operations, like quantification and cofactor computation, completely different algorithms have to be used. Experimental results are given to show the efficiency of the approach.
{"title":"Efficient manipulation algorithms for linearly transformed BDDs","authors":"Wolfgang Günther, R. Drechsler","doi":"10.1109/ICCAD.1999.810620","DOIUrl":"https://doi.org/10.1109/ICCAD.1999.810620","url":null,"abstract":"Binary decision diagrams (BDDs) are the state-of-the-art data structure in VLSI CAD, but, due to their ordering restriction, only exponential-sized BDDs exist for many functions of practical relevance. Linear transformations (LTs) have been proposed as a new concept to minimize the size of BDDs, and it is known that, in some cases, even an exponential reduction can be obtained. In addition to a small representation, the efficient manipulation of a data structure is also important. In this paper, we present polynomial-time manipulation algorithms that can be used for linearly transformed BDDs (LT-BDDs) analogously to BDDs. For some operations, like synthesis algorithms based on ITE (if-then-else), it turns out that the techniques known from BDDs can be directly transferred, while for other operations, like quantification and cofactor computation, completely different algorithms have to be used. Experimental results are given to show the efficiency of the approach.","PeriodicalId":6414,"journal":{"name":"1999 IEEE/ACM International Conference on Computer-Aided Design. Digest of Technical Papers (Cat. No.99CH37051)","volume":"19 1","pages":"50-53"},"PeriodicalIF":0.0,"publicationDate":"1999-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74736418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1999-11-07 | DOI: 10.1109/ICCAD.1999.810625
Chunhong Chen, M. Sarrafzadeh
The dual-voltage approach has emerged as an effective and practical technique for power reduction. In this paper, we explore power optimization with dual supply voltages under given timing constraints. By analyzing the relations among timing slack, delay, and power consumption in a given circuit, we relate the voltage-scaling power optimization to the maximal weighted independent set (MWIS) problem, which is polynomial-time solvable on a transitive graph. We then develop a provably good lower-bound algorithm based on MWIS to generate a lower bound on the power consumption. We also propose a fast approach to predict the optimum supply voltages. The maximum power reduction is obtained by using a modified lower-bound algorithm with the optimum voltages. Experimental results show that the resulting lower bound is tight for most circuits and that the estimated optimum supply voltage is exactly, or very close to, the best choice of actual voltages.
{"title":"Provably good algorithm for low power consumption with dual supply voltages","authors":"Chunhong Chen, M. Sarrafzadeh","doi":"10.1109/ICCAD.1999.810625","DOIUrl":"https://doi.org/10.1109/ICCAD.1999.810625","url":null,"abstract":"The dual-voltage approach has emerged as an effective and practical technique for power reduction. In this paper, we explore power optimization with dual supply voltages under given timing constraints. By analyzing the relations among the timing slack, delay and power consumption in a given circuit, we relate the voltage-scaling power optimization to the maximal weighted independent set (MWIS) problem, which is polynomial-time solvable on a transitive graph. Then we develop a provably good lower-bound algorithm based on MWIS to generate the lower bound of the power consumption. Also, we propose a fast approach to predict the optimum supply voltages. The maximum power reduction is obtained by using a modified lower-bound algorithm with optimum voltages. Experimental results show that the resulting lower bound is tight for most circuits and that the estimated optimum supply voltage is exactly, or very close to, the best choice of actual voltages.","PeriodicalId":6414,"journal":{"name":"1999 IEEE/ACM International Conference on Computer-Aided Design. Digest of Technical Papers (Cat. No.99CH37051)","volume":"17 1","pages":"76-79"},"PeriodicalIF":0.0,"publicationDate":"1999-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85172343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}