{"title":"Rigorous System Design","authors":"Yangdong Deng","doi":"10.1561/1000000028","DOIUrl":"https://doi.org/10.1561/1000000028","url":null,"abstract":"","PeriodicalId":42137,"journal":{"name":"Foundations and Trends in Electronic Design Automation","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75183718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Power Modeling and Characterization of Computing Devices: A Survey" by S. Reda and Abdullah Nazma Nowroz. Foundations and Trends in Electronic Design Automation, 2012. doi:10.1561/1000000022.

In this survey we describe the main research directions in pre-silicon power modeling and post-silicon power characterization. We review techniques in power modeling and characterization for three computing substrates: general-purpose processors, system-on-chip-based embedded systems, and field-programmable gate arrays. We describe the basic principles that govern power consumption in digital circuits, and use these principles to describe high-level power modeling techniques for designs on the three computing substrates. Once a computing device is fabricated, direct measurements on the actual device reveal a wealth of information about its power consumption under various operating conditions. We describe characterization techniques that integrate infrared imaging with electric current measurements to generate runtime power maps. These power maps can be used to validate design-time power models and to calibrate computer-aided design tools. We also describe empirical power characterization techniques for software power analysis and for adaptive power-aware computing. Finally, we provide a number of plausible future research directions for power modeling and characterization.
"Discrete Circuit Optimization: Library Based Gate Sizing and Threshold Voltage Assignment" by John Lee and Puneet Gupta. Foundations and Trends in Electronic Design Automation, 2012. doi:10.1561/1000000019.

Discrete gate sizing and threshold voltage assignment are commonly used tools for optimizing digital circuits and are ideal methods for incremental optimization. The gate widths and threshold voltages, along with the gate lengths, can be adjusted to optimize power and delay. This monograph surveys the field, providing the background needed to conduct research in it. Concepts such as standard cell libraries, static timing analysis, and analytical delay and power models are explained, along with examples and data that help illuminate the tradeoffs involved. Comparative results are also provided to show the current state of the field.
"Parallel Circuit Simulation: A Historical Perspective and Recent Developments" by Peng Li. Foundations and Trends in Electronic Design Automation, 2012. doi:10.1561/1000000020.

The ability to predict circuit performance through simulation is at the core of any design process; it makes the implementation of complex integrated circuits technically feasible and economically viable while reducing the heavy reliance on prototyping. Transistor-level circuit simulation is a fundamental computer-aided design technique that enables the design and verification of an extremely broad range of integrated circuits. With the proliferation of modern parallel processor architectures, leveraging parallel computing has become both a necessity and an important avenue for enabling large-scale circuit simulation. Parallel Circuit Simulation: A Historical Perspective and Recent Developments presents an in-depth discussion of parallel transistor-level circuit simulation algorithms and their implementation strategies on a variety of hardware platforms. While providing a fairly complete perspective on historical and recent research developments, it also highlights key challenges and opportunities in developing efficient parallel simulation paradigms.
"Stochastic Computing" by J. Sartori and Rakesh Kumar. Foundations and Trends in Electronic Design Automation, 2011. doi:10.1561/1000000021.

As device sizes shrink, manufacturing challenges at the device level are resulting in increased variability in physical circuit characteristics. Exponentially increasing circuit density has not only brought about concerns in the reliable manufacturing of circuits but has also amplified variations in dynamic circuit behavior. The resulting uncertainty in performance, power, and reliability imposed by compounding static and dynamic nondeterminism threatens the continuation of Moore's law, which has arguably been the primary driving force behind technology and innovation for decades. This situation is exacerbated by emerging computing applications, which exert considerable power and performance pressure on processors. Paradoxically, the problem is not nondeterminism per se, but rather the approaches that designers have used to deal with it. The traditional response to variability has been to enforce determinism on an increasingly nondeterministic substrate through guardbands. As variability in circuit behavior increases, achieving deterministic behavior becomes increasingly expensive, as performance and energy penalties must be paid to ensure that all devices work correctly under all possible conditions. As such, the benefits of technology scaling are vanishing, due to the overheads of dealing with hardware variations through traditional means. Clearly, the status quo cannot continue.

Despite the above trends, the contract between hardware and software has, for the most part, remained unchanged. Software expects flawless results from hardware under all possible operating conditions. This rigid contract leaves potential performance gains and energy savings on the table, sacrificing efficiency in the common case in exchange for guaranteed correctness in all cases. However, as the marginal benefits of technology scaling continue to languish, a new vision for computing has begun to emerge. Rather than hiding variations under expensive guardbands, designers have begun to relax traditional correctness constraints and deliberately expose hardware variability to higher levels of the compute stack, tapping into potentially significant performance and energy benefits while also opening the door to errors. Rather than paying the increasing price of hiding the true, stochastic nature of hardware, emerging stochastic computing techniques account for the inevitable variability and exploit it to increase efficiency. Stochastic computing techniques have been proposed at nearly all levels of the computing stack, including stochastic design optimizations, architecture frameworks, compiler optimizations, application transformations, programming language support, and testing techniques.

In this monograph, we review work in the area of stochastic computing and discuss the promise and challenges of the field.
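A small Monte Carlo experiment makes the guardbanding argument concrete: a deterministic contract must clock the design for the worst sampled delay, while a relaxed contract that tolerates or corrects rare timing violations can run measurably faster. The Gaussian delay distribution and its parameters below are assumptions chosen purely for illustration.

```python
# Monte Carlo sketch of guardband cost under (assumed) Gaussian delay variation.
import random

random.seed(0)
NOMINAL_PS, SIGMA_PS, N = 500.0, 40.0, 100_000

# Sample one critical-path delay per die/operating condition.
delays = sorted(random.gauss(NOMINAL_PS, SIGMA_PS) for _ in range(N))

worst = delays[-1]               # deterministic contract: cover every case
p99 = delays[int(0.99 * N)]      # relaxed contract: rare violations allowed
median = delays[N // 2]          # the common case

print(f"guardbanded clock period: {worst:6.1f} ps")
print(f"99th-percentile period  : {p99:6.1f} ps")
print(f"median path delay       : {median:6.1f} ps")
print(f"performance reclaimed   : {worst / p99 - 1:5.1%} if rare "
      "violations are detected and corrected")
```

The gap between the worst-case and common-case periods is the efficiency that rigid correctness contracts leave on the table, and that stochastic computing techniques aim to recover.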
"System-in-Package: Electrical and Layout Perspectives" by Lei He, Shauki Elassaad, Yiyu Shi, Yu Hu, and Wei Yao. Foundations and Trends in Electronic Design Automation, 2011. doi:10.1561/1000000014.

The unquenched thirst for higher levels of electronic system integration and ever higher performance has produced a plethora of design and business challenges that threaten the continued success modeled by Moore's law. To tackle these challenges and meet the design needs of consumer electronics products, such as cell phones, audio/video players, and digital cameras, that combine a number of different technologies, vertical system integration has emerged as a required technology for reducing system board space and height, as well as overall time-to-market and design cost. System-in-package (SiP) is a system integration technology that meets these needs in a scalable and cost-effective way: multiple dies, passive components, and discrete devices are assembled, often vertically, in a single package. This paper surveys the electrical and layout perspectives of SiP. It first introduces package technologies, then presents the SiP design flow and design exploration. Finally, it discusses beyond-die signal and power integrity and physical implementation, such as I/O (input/output cell) placement and routing for the redistribution layer, escape, and substrate.
"Three-dimensional Integrated Circuits: Design, EDA, and Architecture" by Guangyu Sun, Yibo Chen, Xiangyu Dong, J. Ouyang, and Yuan Xie. Foundations and Trends in Electronic Design Automation, 2011. doi:10.1561/1000000016.

Emerging three-dimensional (3D) integration technology is one of the most promising solutions for overcoming the barriers in interconnect scaling, offering an opportunity to continue performance improvements using CMOS technology. As the fabrication of 3D integrated circuits becomes viable, developing CAD tools and architectural techniques is imperative for the successful adoption of 3D integration. In this article, we first give a brief introduction to 3D integration technology, then review the EDA challenges and solutions that can enable the adoption of 3D ICs, and finally present design and architectural techniques for applying 3D ICs, including a survey of various approaches to designing future 3D ICs that leverage the lower latency, higher bandwidth, and heterogeneous integration capability offered by 3D technology.
"Radiation-induced Soft Errors: A Chip-level Modeling Perspective" by N. Seifert. Foundations and Trends in Electronic Design Automation, 2010. doi:10.1561/1000000018.

Chip-level soft-error rate (SER) estimates can come from two sources: direct experimental measurement and simulation. Because SER mitigation decisions must be made very early in the product design cycle, long before production silicon is available, a fast and reasonably accurate simulation-based methodology for estimating chip-level radiation-induced soft-error rates is crucial to the reliability and success of the final product.

This contribution summarizes selected publications that the author deems relevant to enabling a truly chip-level estimation methodology for radiation-induced soft-error rates. Although the strategies and concepts described were developed with microprocessors manufactured in bulk CMOS technologies in mind, there is no fundamental reason why they cannot be applied to other technologies and other types of integrated circuits (ICs).
"The Predictive Technology Model in the Late Silicon Era and Beyond" by Yu Cao, A. Balijepalli, S. Sinha, Chi-Chao Wang, Wenping Wang, and Wei Zhao. Foundations and Trends in Electronic Design Automation, 2010. doi:10.1561/1000000012.

The aggressive scaling of CMOS technology has inevitably led to vastly increased power dissipation, process variability, and reliability degradation, posing tremendous challenges to robust circuit design. To continue the success of integrated circuits, advanced design research must start in parallel with, or even ahead of, technology development. This new paradigm requires the Predictive Technology Model (PTM) for future technology generations, including nanoscale CMOS and post-silicon devices. This paper presents a comprehensive set of predictive modeling developments. Starting from the PTM of traditional CMOS devices, it extends to CMOS alternatives at the end of the silicon roadmap, such as strained Si, high-k/metal-gate, and FinFET devices. The impact of process variation and aging is further captured by modeling the device parameters under their influence. Beyond the silicon roadmap, the PTM extends to revolutionary devices, especially carbon-based transistors and interconnects, in order to support exploratory design research. Overall, these predictive device models enable early-stage design exploration with increasing technology diversity, helping shed light on the opportunities and challenges of the nanoelectronics era.