Although the analytical FET models available in the SPICE circuit simulator are very important for design success, their basic structure has received little attention from the circuit design community. However, a number of factors are forcing a change in this situation. The rapid growth of analog and signal-processing applications, along with mixed digital/analog functions on the same integrated circuit, is forcing renewed interest in the details of the FET models. These designs require more stringent model accuracy, and FET models which were "good enough" for digital circuit design have been found inadequate in these new cases. In addition, the increasing use of low-power technology has begun to impose a greater need for accuracy. Finally, with the growth of the fabless design industry, designers and their fabrication facilities are separated in both the geographic and business senses. It behooves the circuit designer to take a more detailed interest in the models which are provided, as these models serve as the critical communication "vehicle" between a circuit designer and the foundry. This paper reviews the current "state of the art" of analytical FET modeling in SPICE. The target audience is the circuit design user of these models.
{"title":"The SPICE FET models: pitfalls and prospects (Are you an educated model consumer?)","authors":"D. Foty","doi":"10.1109/DAC.1996.545570","DOIUrl":"https://doi.org/10.1109/DAC.1996.545570","url":null,"abstract":"Although they are very important for design success, the basic structure of the analytical FET models available in the SPICE circuit simulator have received little attention from the circuit design community. However, a number of factors are forcing a change in this situation, The rapid growth of analog and signal processing applications, along with mixed digital/analog functions on the same integrated circuit, are forcing renewed interest in the details of the FET models. These designs require more stringent model accuracy, and it has been found that FET models which were \"good enough\" for digital circuit design are inadequate in these new cases. In addition, the increasing use of low power technology has also begun to impose a greater need for accuracy. Finally, with the growth of the fabless design industry, the designers and their fabrication facilities are separated in both the geographic and business senses. It behooves the circuit designer to take a more detailed interest in the models which are provided, as these models serve as the critical communication \"vehicle\" between a circuit designer and the foundry. This paper reviews the current \"state of the art\" of analytical FET modeling in SPICE. The target audience is the circuit design user of these models.","PeriodicalId":152966,"journal":{"name":"33rd Design Automation Conference Proceedings, 1996","volume":"499 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116327478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Logic synthesis systems are complex, and algorithmic research in synthesis has become highly specialized. This creates a gap: it is often not clear how an advance in a particular algorithm translates into a better synthesis system. This tutorial starts by describing a set of constraints which synthesis algorithms must satisfy to be useful. A small set of established techniques is then reviewed against these criteria to understand their applicability and the potential for further research in these areas.
{"title":"Tutorial: design of a logic synthesis system","authors":"R. Rudell","doi":"10.1109/DAC.1996.545571","DOIUrl":"https://doi.org/10.1109/DAC.1996.545571","url":null,"abstract":"Logic synthesis systems are complex systems and algorithmic research in synthesis has become highly specialized. This creates a gap where it is often not clear how an advance in a particular algorithm translates into a better synthesis system. This tutorial starts by describing a set of constraints which synthesis algorithms must satisfy to be useful. A small set of established techniques are reviewed relative to these criteria to understand their applicability and the potential for further research in these areas.","PeriodicalId":152966,"journal":{"name":"33rd Design Automation Conference Proceedings, 1996","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124910085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The set covering problem and the minimum cost assignment problem (known respectively as the unate and binate covering problems) arise throughout the logic synthesis flow. This paper investigates the complexity and approximation ratio of two lower bound computation algorithms from both a theoretical and a practical point of view. It also presents a new pruning technique that takes advantage of problem partitioning.
{"title":"On solving covering problems [logic synthesis]","authors":"O. Coudert","doi":"10.1109/DAC.1996.545572","DOIUrl":"https://doi.org/10.1109/DAC.1996.545572","url":null,"abstract":"The set covering problem and the minimum cost assignment problem (respectively known as unate and binate covering problem) arise throughout the logic synthesis flow. This paper investigates the complexity and approximation ratio of two lower bound computation algorithms from both a theoretical and practical point of view. It also presents a new pruning technique that takes advantage of the partitioning.","PeriodicalId":152966,"journal":{"name":"33rd Design Automation Conference Proceedings, 1996","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122549827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we describe how we have improved the efficiency of a finite-element method for interconnect resistance extraction by introducing articulation nodes in the finite-element mesh. The articulation nodes are found by detecting equipotential regions and lines in the interconnects. Without introducing inaccuracies, these articulation nodes split the finite-element mesh into small pieces that can be solved independently. The method has been implemented in the layout-to-circuit extractor Space. All interconnect resistances of a circuit containing 63,000 transistors are extracted on an HP 9000/735 workstation in approximately 70 minutes.
{"title":"Using articulation nodes to improve the efficiency of finite-element based resistance extraction","authors":"A. Vangenderen, N.P. vanderMeiis","doi":"10.1109/DAC.1996.545674","DOIUrl":"https://doi.org/10.1109/DAC.1996.545674","url":null,"abstract":"In this paper, we describe how we have improved the efficiency of a finite-element method for interconnect resistance extraction by introducing articulation nodes in the finite element mesh. The articulation nodes are found by detecting equipotential regions and lines in the interconnects. Without generating inaccuracies, these articulation nodes split the finite-element mesh into small pieces that can be solved independently. The method has been implemented in the layout-to-circuit extractor Space. All interconnect resistances of a circuit containing 63,000 transistors are extracted on an HP 9000/735 workstation in approximately 70 minutes.","PeriodicalId":152966,"journal":{"name":"33rd Design Automation Conference Proceedings, 1996","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129457085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploration at the earliest stages of the design process is an integral component of effective low-power design. Nevertheless, superficial high-level analyses with insufficient accuracy are routinely performed. Critical drawbacks of current high-level design aids include a limited scope of application, inaccuracy of estimation, inaccessibility and steep learning curves. This paper introduces an approach to alleviate these limitations, thus enabling more effective high-level design exploration. A World Wide Web (WWW) based prototype tool called PowerPlay, which encapsulates and enhances these techniques, is presented.
{"title":"Early power exploration-a World Wide Web application [high-level design]","authors":"D. Lidsky, J. Rabaey","doi":"10.1109/DAC.1996.545539","DOIUrl":"https://doi.org/10.1109/DAC.1996.545539","url":null,"abstract":"Exploration at the earliest stages of the design process is an integral component of effective low-power design. Nevertheless, superficial high-level analyses with insufficient accuracy are routinely performed. Critical drawbacks of current high-level design aids include a limited scope of application, inaccuracy of estimation, inaccessibility and steep learning curves. This paper introduces an approach to alleviate these limitations, thus enabling more effective high-level design exploration. A World Wide Web (WWW) based prototype tool called PowerPlay, which encapsulates and enhances these techniques, is presented.","PeriodicalId":152966,"journal":{"name":"33rd Design Automation Conference Proceedings, 1996","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116570846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Introspection, a zero-overhead binding technique for self-diagnosing microarchitecture synthesis, is presented. Given a scheduled control data flow graph (CDFG), introspective binding exploits the spare computation and data transfer capacity in a synergistic fashion to achieve low-latency fault diagnostics with near-zero area overhead and without compromising performance. The resulting on-chip fault latencies are one ten-thousandth (10^-4) of those of previously reported system-level diagnostic techniques. A novel feature of the proposed technique is the use of spare data transfer capacity in the interconnect network for diagnostics.
{"title":"Introspection: a low overhead binding technique during self-diagnosing microarchitecture synthesis","authors":"B. Iyer, R. Karri","doi":"10.1109/DAC.1996.545560","DOIUrl":"https://doi.org/10.1109/DAC.1996.545560","url":null,"abstract":"Introspection, a zero-overhead binding technique during self-diagnosing microarchitecture synthesis is presented. Given a scheduled control data flow graph (CDFG) introspective binding exploits the spare computation and data transfer capacity in a synergistic fashion to achieve low latency fault diagnostics with near zero area overheads without compromising the performance. The resulting on-chip fault latencies are one ten-thousandth (10/sup -4/) of previously reported system level diagnostic techniques. A novel feature of the proposed technique is the use of spare data transfer capacity in the interconnect network for diagnostics.","PeriodicalId":152966,"journal":{"name":"33rd Design Automation Conference Proceedings, 1996","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115418950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel constraint-graph algorithm for the optimization of yield is presented. This algorithm improves the yield of a layout by carefully spacing objects to reduce the probability of faults due to spot defects. White space between objects is removed, and spacing in tightly packed areas of the layout is increased. The computationally expensive problem of optimizing yield is transformed into a network flow problem, which can be solved via known efficient algorithms. Yield can be improved either without changing the layout area or, if necessary, by increasing the layout area to maximize the number of good chips per wafer. Our method can in theory provide the best possible yield achievable without modifying the layout topology. The method handles a general class of convex objective functions, and can therefore optimize not only yield but also other circuit performance functions such as wire length, crosstalk, and power.
{"title":"Enhanced network flow algorithm for yield optimization","authors":"C. Bamji, E. Malavasi","doi":"10.1109/DAC.1996.545672","DOIUrl":"https://doi.org/10.1109/DAC.1996.545672","url":null,"abstract":"A novel constraint-graph algorithm for the optimization of yield is presented. This algorithm improves the yield of a layout by carefully spacing objects to reduce the probability of faults due to spot defects. White space between objects is removed and spacing in tightly packed areas of the layout is increased. The computationally expensive problem of optimizing yield is transformed into a network flow problem, which can be solved via known efficient algorithms. Yield can be improved either without changing the layout area, or if necessary by increasing the layout area to maximize the number of good chips per wafer. Our method can in theory provide the best possible yield achievable without modifying the layout topology. The method is able to handle a general class of convex objective functions, and can therefore optimize not only yield, but other circuit performance functions such as wire-length, cross-talk and power.","PeriodicalId":152966,"journal":{"name":"33rd Design Automation Conference Proceedings, 1996","volume":"97 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115689044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a new formalism for the Engineering Change (EC) problem in a finite state machine (FSM) setting. Given an implementation that violates the specification, the problem is to alter the behavior of the implementation so that it meets the specification. The implementation can be a pseudo-nondeterministic FSM, while the specification may be a nondeterministic FSM. The EC problem is cast as the existence of an "appropriate" simulation relation from the implementation into the specification. We derive the necessary and sufficient conditions for the existence of a solution to the problem, and if the EC is feasible, we synthesize all possible solutions. Our algorithm works in space that is linear, and time that is quadratic, in the product of the sizes of the implementation and the specification. Previous formulations of the problem which admit nondeterministic specifications, although more general, lead to an exponential algorithm. We have implemented our procedure using Reduced Ordered Binary Decision Diagrams.
{"title":"Engineering change in a non-deterministic FSM setting","authors":"S. Khatri, A. Narayan, Sriram C. Krishnan, K. McMillan, R. Brayton, A. Sangiovanni-Vincentelli","doi":"10.1109/DAC.1996.545618","DOIUrl":"https://doi.org/10.1109/DAC.1996.545618","url":null,"abstract":"We propose a new formalism for the Engineering Change (EC) problem in a finite state machine (FSM) setting. Given an implementation that violates the specification, the problem is to alter the behavior of the implementation so that it meets the specification. The implementation can be a pseudo-nondeterministic FSM while the specification may be a nondeterministic FSM. The EC problem is cast as the existence of an \"appropriate\" simulation relation from the implementation into the specification. We derive the necessary and sufficient conditions for the existence of a solution to the problem. We synthesize all possible solutions, if the EC is feasible. Our algorithm works in space which is linear, and time which is quadratic, in the product of the sizes of implementation and specification. Previous formulations of the problem which admit nondeterministic specifications, although more general, lead to an algorithm which is exponential. We have implemented our procedure using Reduced Ordered Binary Decision Diagrams.","PeriodicalId":152966,"journal":{"name":"33rd Design Automation Conference Proceedings, 1996","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121030509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transistor-level power simulators are widely used to estimate the power dissipation of a CMOS circuit. These tools strike a good balance between conventional transistor-level simulators, such as SPICE, and logic-level power estimators with regard to accuracy and speed. However, they are still too time-consuming to run for large designs: simulating one million functional vectors for a 50K-gate circuit may take months to complete. In this paper, we propose an approach to generate a compact set of vectors that mimics the transition behavior of a much larger set of functional vectors, which is given by the designer or extracted from application programs. This compact set can then replace the functional vectors for power simulation, reducing the simulation time while retaining a high degree of accuracy. We present experimental results to show the efficiency and accuracy of this approach.
{"title":"Compact vector generation for accurate power simulation","authors":"Shi-Yu Huang, Kuang-Chien Chen, K. Cheng, Tien-Chien Lee","doi":"10.1109/DAC.1996.545564","DOIUrl":"https://doi.org/10.1109/DAC.1996.545564","url":null,"abstract":"Transistor-level power simulators have been popularly used to estimate the power dissipation of a CMOS circuit. These tools strike a good balance between the conventional transistor-level simulators, such as SPICE, and the logic-level power estimators with regard to accuracy and speed. However, it is still too time-consuming to run these tools for large designs. To simulate one-million functional vectors for a 50 K-gate circuit, these power simulators may take months to complete. In this paper, we propose an approach to generate a compact set of vectors that can mimic the transition behavior of a much larger set of functional vectors, which is given by the designer or extracted from application programs. This compact set of vectors can then replace the functional vectors for power simulation to reduce the simulation time while still retaining a high degree of accuracy. We present experimental results to show the efficiency and accuracy of this approach.","PeriodicalId":152966,"journal":{"name":"33rd Design Automation Conference Proceedings, 1996","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127366123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a new approach for solving the Lower and Upper Bounded delay routing Tree (LUBT) problem using linear programming. An LUBT is a Steiner tree rooted at the source node such that the delays from the source to the sink nodes lie between given lower and upper bounds. We show that our proposed method produces a minimum-cost LUBT for a given topology under a linear delay model. Unlike recent works, which control only the difference between the maximum and the minimum source-sink delay, we construct routing trees which satisfy distinct lower and upper bound constraints on the source-sink delays. This formulation exploits all the flexibility that is present in low-power and high-performance clock routing tree design.
{"title":"Constructing lower and upper bounded delay routing trees using linear programming","authors":"Jaewon Oh, I. Pyo, Massoud Pedram","doi":"10.1109/DAC.1996.545609","DOIUrl":"https://doi.org/10.1109/DAC.1996.545609","url":null,"abstract":"This paper presents a new approach for solving the Lower and Upper Bounded delay routing Tree (LUBT) problem using linear programming. LUBT is a Steiner tree rooted at the source node such that delays from the source to sink nodes lie between the given lower and upper bounds. We show that our proposed method produces minimum cost LUBT for a given topology under a linear delay model. Unlike recent works which control only the difference between the maximum and the minimum source-sink delay, we construct routing trees which satisfy distinct lower and upper bound constraints on the source-sink delays. This formulation exploits all the flexibility that is present in low power and high performance clock routing tree design.","PeriodicalId":152966,"journal":{"name":"33rd Design Automation Conference Proceedings, 1996","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123417224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}