We describe an implicit technique for enumerating structural choices in circuit optimization. The restructuring technique relies on symbolic statements of functional decomposition, which explore the behavioral equivalence of circuit signals through rewiring and resubstitution. Using a rigid yet practical formulation, a rich variety of restructuring candidates is computed symbolically and applied incrementally to produce circuit changes with predictable structural effects. The technique is used to obtain substantially improved delays on already-optimized circuits, along with area savings. It is also applied to analyze the benefits of optimizing circuit topology for routability in the early steps of synthesis.
{"title":"Implicit enumeration of structural changes in circuit optimization","authors":"Victor N. Kravets, P. Kudva","doi":"10.1145/996566.996691","DOIUrl":"https://doi.org/10.1145/996566.996691","url":null,"abstract":"We describe an implicit technique for enumerating structural choices in circuit optimization. The restructuring technique relies on the symbolic statements of functional decomposition which explores behavioral equivalence of circuit signals through rewiring and resubstitution. Using rigid, yet practical, formulation a rich variety of restructuring candidates is computed symbolically and applied incrementally to produce circuit changes with predictable structural effects. The restructuring technique is used to obtain much improved delays of the already optimized circuits along with their area savings. It is also applied to analyze benefits of optimizing circuit topology at the early steps of synthesis targeting its routability.","PeriodicalId":115059,"journal":{"name":"Proceedings. 41st Design Automation Conference, 2004.","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117208046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In designing asynchronous circuits it is critical to ensure that circuits are free of hazards for the specified set of input transitions. In this paper, two new algorithms are proposed to determine whether a combinational circuit is hazard-free without exploring all of its gates, thus providing more efficient hazard detection. Experimental results indicate that the best new algorithm on average visits only 20.7% of the original gates, with an average runtime speedup of 1.69 and a best speedup of 2.27 (for the largest example).
{"title":"Fast hazard detection in combinational circuits","authors":"Cheoljoo Jeong, S. Nowick","doi":"10.1145/996566.996728","DOIUrl":"https://doi.org/10.1145/996566.996728","url":null,"abstract":"In designing asynchronous circuits it is critical to ensure that cir-cuits are free of hazards in the specified set of input transitions. In this paper, two new algorithms are proposed to determine if a com-binational circuit is hazard-free without exploring all its gates, thus providing more efficient hazard detection. Experimental results in-dicate that the best new algorithm on average visits only 20.7% of the original gates, with an average runtime speedup of 1.69 and best speedup of 2.27 (for the largest example.","PeriodicalId":115059,"journal":{"name":"Proceedings. 41st Design Automation Conference, 2004.","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124749728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As technology advances, metal width decreases while wire length increases, substantially increasing the resistance along power lines. Together with the nonlinear scaling of the threshold voltage, which raises the ratio of threshold voltage to supply voltage, voltage (IR) drop becomes a serious problem in modern VLSI design. Traditional power/ground (P/G) network analysis methods are typically very computationally expensive and thus infeasible to integrate into floorplanning. To make the integration of P/G analysis with floorplanning feasible, we need a very efficient yet sufficiently accurate analysis method. In this paper, we present methods for the fast analysis of P/G networks at the floorplanning stage and integrate our analyzer into a commercial tool to develop a power-integrity (IR-drop) driven design methodology. Experimental results based on three real-world circuit designs show that our P/G network analyzer is sufficiently accurate and very efficient.
{"title":"Efficient power/ground network analysis for power integrity-driven design methodology","authors":"Su-Wei Wu, Yao-Wen Chang","doi":"10.1145/996566.996617","DOIUrl":"https://doi.org/10.1145/996566.996617","url":null,"abstract":"As technology advances, the metal width is decreasing with the length increasing, making the resistance along the power line increase substantially. Together with the nonlinear scaling of the threshold voltage that makes the ratio of the threshold voltage to the supply voltage rise, the voltage (IR) drop become a serious problem in modern VLSI design. Traditional power/ground (P/G) network analysis methods are typically very computationally expensive and thus not feasible to be integrated into floorplanning. To make the integration of the P/G analysis with floorplanning feasible, we need a very efficient, yet sufficiently accurate analysis method. In this paper, we present the methods for the fast analysis of the P/G networks at the floorplanning stage and integrate our analyzer into a commercial tool to develop a power integrity (IR drop) driven design methodology. Experimental results based on three real-world circuit designs show that our P/G network analyzer is accurate enough and very efficient.","PeriodicalId":115059,"journal":{"name":"Proceedings. 41st Design Automation Conference, 2004.","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129744666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Power optimization is of growing importance for FPGAs in nanometer technologies. Considering the dual-Vdd technique, we show that a configurable power supply is required to obtain a satisfactory performance and power tradeoff. We design FPGA circuits and logic fabrics using configurable dual-Vdd and develop the corresponding CAD flow to leverage such circuits and fabrics. We then carry out a highly quantitative study using area, delay, and power models obtained from detailed circuit design and SPICE simulation in 100nm technology. Compared to single-Vdd FPGAs with the Vdd level optimized for the same target clock frequency, configurable dual-Vdd FPGAs with full and partial supply programmability for logic blocks reduce logic power by 35.46% and 28.62%, respectively, and reduce total FPGA power by 14.29% and 9.04%, respectively. To the best of our knowledge, this is the first in-depth study of FPGAs with configurable dual-Vdd for power reduction.
{"title":"FPGA power reduction using configurable dual-Vdd","authors":"Fei Li, Yan Lin, Lei He","doi":"10.1145/996566.996767","DOIUrl":"https://doi.org/10.1145/996566.996767","url":null,"abstract":"Power optimization is of growing importance for FPGAs in nanometer technologies. Considering dual-Vdd technique, we show that configurable power supply is required to obtain a satisfactory performance and power tradeoff. We design FPGA circuits and logic fabrics using configurable dual-Vdd and develop the corresponding CAD flow to leverage such circuits and logic fabrics. We then carry out a highly quantitative study using area, delay and power models obtained from detailed circuit design and SPICE simulation in 100nm technology. Compared to single-Vdd FPGAs with optimized Vdd level for the same target clock frequency, configurable dual-Vdd FPGAs with full and partial supply programmability for logic blocks reduce logic power by 35.46% and 28.62% respectively and reduce total FPGA power by 14.29% and 9.04% respectively. To the best of our knowledge, it is the first in-depth study on FPGAs with configurable dual-Vdd for power reduction.","PeriodicalId":115059,"journal":{"name":"Proceedings. 41st Design Automation Conference, 2004.","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128457124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Power supply integrity analysis is critical in modern high-performance designs. In this paper, we propose a stochastic approach to obtain statistical information about the collective IR and L·dI/dt drop in a power supply network. The currents drawn from the power grid by the blocks in a design are modelled as stochastic processes, and their statistical information is extracted, including correlation information between blocks in both space and time. We then propose a method to propagate the statistical parameters of the block currents through the linear model of the power grid to obtain the mean and standard deviation of the voltage drop at any node in the grid. We show that the run time is linear in the length of the current waveforms, allowing extensive vectors, up to millions of cycles, to be analyzed. We implemented the approach on a number of grids, including a grid from an industrial microprocessor, and demonstrate its accuracy and efficiency. The proposed statistical analysis can be used to determine which portions of the grid are most likely to fail, as well as to provide information for other analyses, such as statistical timing analysis.
{"title":"A stochastic approach to power grid analysis","authors":"Sanjay Pant, D. Blaauw, V. Zolotov, S. Sundareswaran, R. Panda","doi":"10.1145/996566.996616","DOIUrl":"https://doi.org/10.1145/996566.996616","url":null,"abstract":"Power supply integrity analysis is critical in modern high perfor-mance designs. In this paper, we propose a stochastic approach to obtain statistical information about the collective IR and LdI/dt drop in a power supply network. The currents drawn from the power grid by the blocks in a design are modelled as stochastic processes and their statistical information is extracted, including correlation infor-mation between blocks in both space and time. We then propose a method to propagate the statistical parameters of the block currents through the linear model of the power grid to obtain the mean and standard deviation of the voltage drops at any node in the grid. We show that the run time is linear with the length of the current wave-forms allowing for extensive vectors, up to millions of cycles, to be analyzed. We implemented the approach on a number of grids, including a grid from an industrial microprocessor and demonstrate its accuracy and efficiency. The proposed statistical analysis can be use to determine which portions of the grid are most likely to fail as well as to provide information for other analyses, such as statistical timing analysis.","PeriodicalId":115059,"journal":{"name":"Proceedings. 41st Design Automation Conference, 2004.","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129312212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Platforms have become an important concept in the design of electronic systems. We present here the motivations behind the interest they have attracted and the challenges we must face to make the platform-based design method a standard. As a generic term, "platform" has meant different things to different people. The main challenges are to distill the essence of the method, to formalize it, and to provide a framework that supports its use in areas beyond its original domain of application.
{"title":"Benefits and challenges for platform-based design","authors":"A. Sangiovanni-Vincentelli, L. Carloni, F. Bernardinis, M. Sgroi","doi":"10.1145/996566.996684","DOIUrl":"https://doi.org/10.1145/996566.996684","url":null,"abstract":"Platforms have become an important concept in the design of electronic systems. We present here the motivations behind the interest shown and the challenges that we have to face to make the Platform-based Design method a standard. As a generic term, platforms have meant different things to different people. The main challenges are to distill the essence of the method, to formalize it and to provide a framework to support its use in areas that go beyond the original domain of application.","PeriodicalId":115059,"journal":{"name":"Proceedings. 41st Design Automation Conference, 2004.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129632899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstraction plays a critical role in verifying complex systems. A number of languages have been proposed to model hardware systems by, primarily, abstracting away their wide datapaths while keeping the low-level details of their control logic. This leads to a significant reduction in the size of the state space and makes it possible to verify intricate control interactions formally. These languages, however, require that the abstraction be done manually, a tedious and error-prone process. In this paper we describe Vapor, a tool that automatically abstracts behavioral RTL Verilog to the CLU language used by the UCLID system. Vapor performs a sound abstraction with emphasis on minimizing false errors. Our method is fast, systematic, and complements UCLID by serving as a back-end for dealing with UCLID counterexamples. Preliminary results show the feasibility of automatic abstraction and its utility in formal verification.
{"title":"Automatic abstraction and verification of verilog models","authors":"Zaher S. Andraus, K. Sakallah","doi":"10.1145/996566.996629","DOIUrl":"https://doi.org/10.1145/996566.996629","url":null,"abstract":"Abstraction plays a critical role in verifying complex sys-tems. A number of languages have been proposed to model hardware systems by, primarily, abstracting away their wide datapaths while keeping the low-level details of their control logic. This leads to a significant reduction in the size of the state space and makes it possible to verify intricate control interactions formally. These languages, however, require that the abstraction be done manually, a tedious and error-prone process. In this paper we describe Vapor, a tool that auto-matically abstracts behavioral RTL Verilog to the CLU lan-guage used by the UCLID system. Vapor performs a sound abstraction with emphasis on minimizing false errors. Our method is fast, systematic, and complements UCLID by serving as a back-end for dealing with UCLID counterexamples. Preliminary results show the feasibility of automatic abstraction and its utility in formal verification.","PeriodicalId":115059,"journal":{"name":"Proceedings. 41st Design Automation Conference, 2004.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127487861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Just-in-time (JIT) compilation has previously been used in many applications to enable standard software binaries to execute on different underlying processor architectures. However, embedded systems increasingly incorporate Field Programmable Gate Arrays (FPGAs), for which the concept of a standard hardware binary did not previously exist, requiring designers to implement a hardware circuit for a single specific FPGA. We introduce the concept of a standard hardware binary, using a just-in-time compiler to compile the hardware binary to an FPGA. A JIT compiler for FPGAs requires the development of lean versions of technology mapping, placement, and routing algorithms, of which routing is the most computation- and memory-intensive step. We present the Riverside On-Chip Router (ROCR), designed to efficiently route a hardware circuit for a simple configurable logic fabric that we have developed. Through experiments with MCNC benchmark hardware circuits, we show that ROCR works well for JIT FPGA compilation, producing good hardware circuits using an order of magnitude less memory and execution time than the well-known Versatile Place and Route (VPR) tool suite. ROCR produces good hardware circuits using 13X less memory and executing 10X faster than VPR's fastest routing algorithm. Furthermore, our results show that ROCR requires only 10% additional routing resources and yields circuit speeds only 32% slower than VPR's timing-driven router, and speeds that are actually 10% faster than VPR's routability-driven router.
{"title":"Dynamic FPGA routing for just-in-time FPGA compilation","authors":"Roman L. Lysecky, F. Vahid, S. Tan","doi":"10.1145/996566.996819","DOIUrl":"https://doi.org/10.1145/996566.996819","url":null,"abstract":"Just-in-time (JIT) compilation has previously been used in many applications to enable standard software binaries to execute on different underlying processor architectures. However, embedded systems increasingly incorporate Field Programmable Gate Arrays (FPGAs), for which the concept of a standard hardware binary did not previously exist, requiring designers to implement a hardware circuit for a single specific FPGA. We introduce the concept of a standard hardware binary, using a just-in-time compiler to compile the hardware binary to an FPGA. A JIT compiler for FPGAs requires the development of lean versions of technology mapping, placement, and routing algorithms, of which routing is the most computationally and memory expensive step. We present the Riverside On-Chip Router (ROCR) designed to efficiently route a hardware circuit for a simple configurable logic fabric that we have developed. Through experiments with MCNC benchmark hardware circuits, we show that ROCR works well for JIT FPGA compilation, producing good hardware circuits using an order of magnitude less memory resources and execution time compared with the well known Versatile Place and Route (VPR) tool suite. ROCR produces good hardware circuits using 13X less memory and executing 10X faster than VPR's fastest routing algorithm. Furthermore, our results show ROCR requires only 10% additional routing resources, and results in circuit speeds only 32% slower than VPR's timing-driven router, and speeds that are actually 10% faster than VPR's routability-driven router.","PeriodicalId":115059,"journal":{"name":"Proceedings. 41st Design Automation Conference, 2004.","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123735671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Timing closure has been a headache, is still a headache, and always will be a headache. The fast-track evolution of consumer electronics (especially) acts to keep our pain level high: if timing closure isn't currently painful, the push for quality of result (QOR) will soon make it painful again. All of the following QOR metrics can be traded off against each other: faster silicon (more capable product); cheaper (e.g., smaller die area, fewer metal layers); lower power (for cheaper cooling or for battery life); manufacturable (yield at reasonable cost); and time-to-market (total project delay as well as schedule predictability). Timing closure has to be discussed in the context of today's simultaneous design closure issues.

Point tools (the "Trees") will always evolve to help relieve the timing closure headache; this presentation, however, focuses on chip-level optimizations and build methodologies (the "Forest") that go beyond block P&R point tools. Full-chip design approaches can harvest large improvements on all of these metrics, and we show how exploiting "full-chip design slack" in one area can be used to ease timing closure.

In the past, many heated arguments have been fought over the relative benefits and dangers of hierarchical physical design. In 2004, we find that most SoCs are being built hierarchically. Hierarchical design creates boundaries that normally limit cross-block optimization. Typically, design teams "over-design" or "guard-band" individual blocks to ensure a good probability of design closure. This over-design has varying negative effects on full-chip QOR; in the worst case, even the system architecture can suffer.

Rather than sacrifice QOR, we show chip-level automatic optimization results. Optimizations in wire length, repeaters, timing budgets, routability, and power distribution all translate into timing closure improvements. The tool uses bottom-up feedback from previously built versions of the design to achieve "as-if-flat" QOR in all of the metrics listed.

With automatic, high-quality block optimization now available, we can harvest the true power of hierarchy: fast full-chip builds. Fast builds enable design teams to explore and verify many more design choices. The highest-leverage improvements obviously come from exploring chip-architecture alternatives, assuming they can be verified with fast and accurate what-if builds. In addition, hierarchy, with its inherently compartmentalized changes to the design, overcomes the chaotic behavior of P&R tools, providing as much determinism and replayability as possible.

Fast builds using the actual production tools, in a synergistic way, enable continuous bottom-up feedback optimization, with testing and "lock-in" of solutions to the timing (and other) closure requirements of the design. The result is a very smooth path from final netlist (and other deliverables) to tapeout.
{"title":"Forest vs. Trees: Where's the slack?","authors":"P. Rodman","doi":"10.1145/996566.996645","DOIUrl":"https://doi.org/10.1145/996566.996645","url":null,"abstract":"Timing closure has been a headache, is still a headache, and always will be a headache.The fast - track evolution of consumer electronics (especially) acts to keep our pain level high: if timing closure isn't currently painful, the push for quality of result (QOR) will soon make it painful again. All of the metrics below can be traded off against each other:.QOR Metrics faster silicon (more capable product) cheaper (e.g. smaller die area, fewer metal layers) lower power (for cheaper cooling or for battery life) manufacturable (yield at reasonable cost) time-to-market (total project delay as well as schedule predictability) Timing closure has to be discussed in the context of the simultaneous design closure issues today.Point tools (The \"Trees\") will always evolve to help relieve the timing closure headache, however, this presentation will focus on chip level optimizations and build methodologies (\"The Forest\") that go beyond block \"P&R\" point tools. Full-chip design approaches can harvest large improvements on all of the metrics and we shall show how exploiting \"full-chip design slack\" in one area can be used to ease timing closure.In the past, there have been many heated arguments have been fought over the relative benefits and dangers of hierarchical physical design. In 2004, we find that most SoCs are being built hierarchically. Using hierarchical design creates boundaries that normally limit cross-block optimization. Typically, design teams do \"over-design\" or \"guard-banding\" on individual blocks to insure good probability of design closure. This \"over-design\" has varying negative effects on the full-chip QOR in the worst case even the system architecture can suffer.Rather than sacrifice QOR, we will show chip-level automatic optimization results. Optimizations in wire length, repeaters, timing budgets, routeability and power distribution all translate into timing closure improvements. This tool uses bottom-up feedback from previously built versions of the design to achieve \"as-if-flat\" QOR in all the metrics listed.With automatic high quality block optimization now available, we can then harvest the true power of hierarchy: fast full chip builds. Fast builds enable design teams to explore and verify many more design choices. Obviously the highest leverage improvements come from exploring chip architecture alternatives assuming they can be verified with fast and accurate what-if builds. In addition, hierarchy with its inherent compartmentalized changes to the design, overcomes the chaotic behavior of P&R tools, to as much determinism and replayability as possible.Fast builds using the actual production tools, in a synergistic way, enable the continuous bottom-up feedback optimization, with testing and 'lock-in' of solutions to the timing (and other) closure requirements of the design. The result is very smooth path from final netlist (and other deliverables) to tapeout.","PeriodicalId":115059,"journal":{"name":"Proceedings. 
41st Design Automation Conference, 2004.","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130997482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scan-based silicon debug is a technique that can be used to help find design errors in prototype silicon more quickly. One part of this technique involves the inclusion of breakpoint modules during the design stage of the chip. This paper focuses on an innovative approach to automatically generate breakpoint modules by means of a breakpoint description language. This language is illustrated using an example, and experimental results are presented that show the efficiency and effectiveness of this new method for generating breakpoint hardware.
{"title":"Automatic generation of breakpoint hardware for silicon debug","authors":"B. Vermeulen, Mohammad Zalfany Urfianto, S. Goel","doi":"10.1145/996566.996708","DOIUrl":"https://doi.org/10.1145/996566.996708","url":null,"abstract":"Scan-based silicon debug is a technique that can be used to help find design errors in prototype silicon more quickly. One part of this technique involves the inclusion of breakpoint modules during the design stage of the chip. This paper focuses on an innovative approach to automatically generate breakpoint modules by means of a breakpoint description language. This language is illustrated using an example, and experimental results are presented that show the efficiency and effectiveness of this new method for generating breakpoint hardware.","PeriodicalId":115059,"journal":{"name":"Proceedings. 41st Design Automation Conference, 2004.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130227249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}