Technology modeling and characterization beyond the 45nm node

S. Nassif
{"title":"Technology modeling and characterization beyond the 45nm node","authors":"S. Nassif","doi":"10.1109/ASPDAC.2008.4483944","DOIUrl":null,"url":null,"abstract":"The semiconductor industry is unique in that it produces products with little or no prototyping! While a car company will build (and crash) many prototypes before converging on a final design, integrated circuits are built almost entirely on a basis of computer predictions. These predictions are based on models of performance based on simulation performed at multiple hierarchical levels, but always rooted in the end in classical circuit simulation using tools like the venerable Spice [1]. But as we continue to scale technology further, we observe a diminishing rate of performance return which is in turn causing a spiral of increasing manufacturing process complexity in an attempt to maintain performance per historical trends. This increase in technology complexity is introducing a number of systematic (i.e. design dependent) sources of design variability which demand modeling and characterization resources. At the same time, we are entering a regime where the averaging effect of the law of large numbers is becoming weaker, resulting in an increase in influence of fundamental atomistic variations. Phenomena like channel dopant fluctuations [2] and line-edge roughness [3] are creating a random variability noise floor which is difficult to get around without significant process impact. The result of the increase in these, and other sources of variability is a corresponding increase in important circuit phenomena like SRAM stability and leakage power variations. The net result is a gradual breakdown of the traditional \";device model + design rule\"; contact between design and manufacturing, and a corresponding lack of predictability in fabrication outcome that is endangering the profitability of Silicon semiconductor manufacturing as we enter what may be the last handful of generations of CMOS. 
This lack of predictability is happening because of two important factors. ldr The overall CMOS technology slowdown has led to rapidly increasing complexity in the process and in its interaction with design. This has in turn caused an increase in the number and magnitude of systematic sources of mismatch between simulation models (both at the circuit simulation and timing levels) and hardware measurements. ldr Manufacturing variability, both systematic and random, -long a source of concern only for analog design- is becoming important for digital designs as well and thus its prediction is now a first order priority. However, it is competing for the attention of researchers and CAD developers with a host of other so-called nm effects, thus slowing down the delivery of needed solutions. The result is (a) our ability to arbitrarily compose a design out of disparate components is compromised because of a high degree of interaction between these components , and (b) our ability to predict the nominal performance of a design as well as its tolerances and sensitivities is in danger. In this talk, we will review these issues and show how they are all related to the core issue of model to hardware matching. 
We will also show examples of potential solutions to this problem some of which are currently being developed in IBM, and some which are longer term and would benefit greatly from the attention of the academic community.","PeriodicalId":277556,"journal":{"name":"2008 Asia and South Pacific Design Automation Conference","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2008-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2008 Asia and South Pacific Design Automation Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASPDAC.2008.4483944","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 9

Abstract

The semiconductor industry is unique in that it produces products with little or no prototyping! While a car company will build (and crash) many prototypes before converging on a final design, integrated circuits are built almost entirely on the basis of computer predictions. These predictions come from performance models simulated at multiple hierarchical levels, but are always rooted, in the end, in classical circuit simulation using tools like the venerable SPICE [1]. As we continue to scale technology, however, we observe a diminishing rate of performance return, which in turn drives a spiral of increasing manufacturing-process complexity in an attempt to maintain performance per historical trends. This increase in technology complexity introduces a number of systematic (i.e. design-dependent) sources of design variability which demand modeling and characterization resources. At the same time, we are entering a regime where the averaging effect of the law of large numbers is becoming weaker, resulting in an increased influence of fundamental atomistic variations. Phenomena like channel dopant fluctuations [2] and line-edge roughness [3] are creating a random-variability noise floor which is difficult to get around without significant process impact. The result of the increase in these and other sources of variability is a corresponding increase in important circuit phenomena like SRAM stability and leakage-power variations. The net result is a gradual breakdown of the traditional "device model + design rule" contract between design and manufacturing, and a corresponding lack of predictability in fabrication outcome that is endangering the profitability of silicon semiconductor manufacturing as we enter what may be the last handful of generations of CMOS. This lack of predictability is happening because of two important factors.
- The overall CMOS technology slowdown has led to rapidly increasing complexity in the process and in its interaction with design. This has in turn increased the number and magnitude of systematic sources of mismatch between simulation models (at both the circuit-simulation and timing levels) and hardware measurements.
- Manufacturing variability, both systematic and random, long a source of concern only for analog design, is becoming important for digital designs as well, and thus its prediction is now a first-order priority. However, it is competing for the attention of researchers and CAD developers with a host of other so-called nanometer effects, slowing the delivery of needed solutions.
The result is that (a) our ability to arbitrarily compose a design out of disparate components is compromised by the high degree of interaction between these components, and (b) our ability to predict the nominal performance of a design, as well as its tolerances and sensitivities, is in danger. In this talk, we will review these issues and show how they all relate to the core issue of model-to-hardware matching. We will also show examples of potential solutions to this problem, some of which are currently being developed at IBM, and some of which are longer term and would benefit greatly from the attention of the academic community.
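The exponential sensitivity of leakage to atomistic variation can be illustrated with a small Monte Carlo sketch (not from the talk; every device number below is illustrative): random dopant fluctuation is commonly modeled as a normal spread in threshold voltage with a Pelgrom-style sigma_Vth = A_VT / sqrt(W*L), and subthreshold leakage as I_off proportional to 10^(-Vth/SS).

```python
import math
import random

# Illustrative sketch, not the author's model: threshold-voltage spread
# from random dopant fluctuation (Pelgrom-style mismatch law) feeding an
# exponential subthreshold-leakage expression. All numbers are assumed.
random.seed(0)

A_VT = 3.0e-3       # mismatch coefficient in V*um (assumed value)
W, L = 0.1, 0.045   # device dimensions in um (assumed 45nm-class device)
VTH_NOM = 0.35      # nominal threshold voltage, V (assumed)
SS = 0.1            # subthreshold swing, V/decade (~100 mV/dec, assumed)
I0 = 1.0            # normalized leakage at Vth = 0

sigma_vth = A_VT / math.sqrt(W * L)  # ~45 mV for this geometry

# Monte Carlo over a population of nominally identical devices.
samples = [random.gauss(VTH_NOM, sigma_vth) for _ in range(100_000)]
leak = [I0 * 10 ** (-v / SS) for v in samples]

mean_leak = sum(leak) / len(leak)
nominal_leak = I0 * 10 ** (-VTH_NOM / SS)

# Leakage is lognormal in Vth, so the population mean sits above the
# leakage of the nominal device: averaging over many devices does not
# tame an exponentially distributed quantity.
print(f"sigma_Vth ~ {sigma_vth * 1e3:.1f} mV")
print(f"mean leakage / nominal leakage ~ {mean_leak / nominal_leak:.2f}x")
```

Even this modest, assumed sigma of roughly 45 mV pushes the mean chip-level leakage well above the nominal-device prediction, which is one concrete way the law-of-large-numbers averaging described above fails for leakage power.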