Alternative design methodologies for the next generation logic switch
Pub Date: 2011-11-07 | DOI: 10.1109/ICCAD.2011.6105331 | Pages: 231-234
D. Sacchetto, M. D. Marchi, G. Micheli, Y. Leblebici
Next-generation logic switch devices are expected to rely on radically new technologies, mainly due to the increasing difficulties and limitations of state-of-the-art CMOS switches; these new technologies will, in turn, require innovative design methodologies distinctly different from those used for CMOS. In this paper, three emerging alternative technologies are showcased in terms of their design-implementation requirements and potential advantages. First, a CMOS-evolutionary approach based on vertically stacked gate-all-around Si nanowire FETs is discussed. Next, an alternative design methodology based on ambipolar carbon nanotube FETs is presented. Finally, a novel approach based on the recently discovered memristive devices is presented, offering the possibility of combining memory and logic functions.
{"title":"Alternative design methodologies for the next generation logic switch","authors":"D. Sacchetto, M. D. Marchi, G. Micheli, Y. Leblebici","doi":"10.1109/ICCAD.2011.6105331","DOIUrl":"https://doi.org/10.1109/ICCAD.2011.6105331","url":null,"abstract":"Next generation logic switch devices are expected to rely on radically new technologies mainly due to the increasing difficulties and limitations of state-of-the-art CMOS switches, which, in turn, will also require innovative design methodologies that are distinctly different from those used for CMOS technologies. In this paper, three alternative emerging technologies are showcased in terms of their requirements for design implementation and in terms of potential advantages. First, a CMOS evolutionary approach based on vertically-stacked gate-all-around Si nanowire FETs is discussed. Next, an alternative design methodology based on ambipolar carbon nanotube FETs is presented. Finally, a novel approach based on the recently discovered memristive devices is presented, offering the possibility of combining memory and logic functions.","PeriodicalId":6357,"journal":{"name":"2011 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"43 1","pages":"231-234"},"PeriodicalIF":0.0,"publicationDate":"2011-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72938336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assuring application-level correctness against soft errors
Pub Date: 2011-11-07 | DOI: 10.1109/ICCAD.2011.6105319 | Pages: 150-157
J. Cong, Karthik Gururaj
Traditionally, research in fault tolerance has required the architectural state to be numerically perfect for program execution to be correct. However, in many programs, even if execution is not 100% numerically correct, the program can still appear to execute correctly from the user's perspective. To quantify user satisfaction, application-level fidelity metrics (such as PSNR) can be used: the output of such an application is defined to be correct if the fidelity metrics satisfy a certain threshold. However, such applications still contain instructions whose outputs are critical, i.e., their correctness decides whether the overall quality of the program output is acceptable. In this paper, we present an analysis technique for identifying such critical program segments. More importantly, our technique is capable of guaranteeing application-level correctness through a combination of static analysis and runtime monitoring. Our static analysis consists of data flow analysis followed by control flow analysis to find static critical instructions that affect several other instructions. Critical instructions are further refined into likely non-critical and likely critical sets in a profiling phase. At runtime, we use a monitoring scheme to watch likely non-critical instructions and take remedial actions if some of them become critical. Based on this analysis, we minimize the number of instructions that are duplicated and checked at runtime using a software-based fault detection and recovery technique [20]. Put together, our approach leads to 22% average energy savings for multimedia applications while guaranteeing application-level correctness, compared to a recent work [9] that cannot guarantee application-level correctness. Compared to the approach proposed in [20], which guarantees both application-level and numerical correctness, our method achieves a 79% energy reduction.
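Since the abstract defines correctness through a fidelity threshold rather than bit-exactness, a small sketch may help make that notion concrete. The PSNR computation below is standard; the 30 dB threshold and the image arrays are illustrative assumptions, not values from the paper.

```python
# A minimal sketch of application-level correctness: an output is accepted if
# a fidelity metric (here PSNR) clears a threshold, even when it is not
# bit-exact. The threshold value is an assumption for illustration.
import numpy as np

def psnr(reference: np.ndarray, corrupted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(np.float64) - corrupted.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # bit-exact output
    return 10.0 * np.log10(peak ** 2 / mse)

def application_level_correct(reference, corrupted, threshold_db: float = 30.0) -> bool:
    # Numerically imperfect outputs still count as correct above the threshold.
    return psnr(reference, corrupted) >= threshold_db
```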
{"title":"Assuring application-level correctness against soft errors","authors":"J. Cong, Karthik Gururaj","doi":"10.1109/ICCAD.2011.6105319","DOIUrl":"https://doi.org/10.1109/ICCAD.2011.6105319","url":null,"abstract":"Traditionally, research in fault tolerance has required architectural state to be numerically perfect for program execution to be correct. However, in many programs, even if execution is not 100% numerically correct, the program can still appear to execute correctly from the user's perspective. To quantify user satisfaction, application-level fidelity metrics (such as PSNR) can be used. The output for such applications is defined to be correct if the fidelity metrics satisfy a certain threshold. However, such applications still contain instructions whose outputs are critical — i.e. their correctness decides if the overall quality of the program output is acceptable. In this paper, we present an analysis technique for identifying such critical program segments. More importantly, our technique is capable of guaranteeing application-level correctness through a combination of static analysis and runtime monitoring. Our static analysis consists of data flow analysis followed by control flow analysis to find static critical instructions which affect several instructions. Critical instructions are further refined into likely non-critical and likely critical sets in a profiling phase. At runtime, we use a monitoring scheme to monitor likely non-critical instructions and take remedial actions if some likely non-critical instructions become critical. Based on this analysis, we minimize the number of instructions that are duplicated and checked at runtime using a software-based fault detection and recovery technique [20]. Put together, our approach can lead to 22% average energy savings for multimedia applications while guaranteeing application-level correctness, when compared to a recent work [9], which cannot guarantee application-level correctness. Comparing to the approach proposed in [20] which guarantees both application-level and numerical correctness, our method achieves 79% energy reduction.","PeriodicalId":6357,"journal":{"name":"2011 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"24 1","pages":"150-157"},"PeriodicalIF":0.0,"publicationDate":"2011-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78331227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vectorless verification of RLC power grids with transient current constraints
Pub Date: 2011-11-07 | DOI: 10.1109/ICCAD.2011.6105384 | Pages: 548-554
Xuanxing Xiong, Jia Wang
Vectorless power grid verification is a powerful method that evaluates worst-case voltage noise without detailed current waveforms, using optimization techniques. It is extremely challenging for RLC power grids, since inductors are difficult to tackle and multiple time steps must be evaluated after the discretization of the system equation. In this paper, we study integrated RLC power grids with both VDD and GND networks and rigorously prove that their vectorless verification can be decomposed into two sub-problems: the well-studied transient power grid analysis problem, and an optimization problem that maximizes an affine function of currents under current constraints. We further introduce transient constraints to restrict the waveform of each current source for realistic scenarios, and design the RLCVN algorithm to solve the vectorless verification problem of RLC power grids. Results confirm that our algorithm is an effective approach for practical RLC power grid verification, and that the proposed transient constraints make the noise estimates more realistic.
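The second sub-problem named above, maximizing an affine function of currents under current constraints, is a linear program. A minimal sketch using scipy follows; the sensitivity vector, affine offset, and constraint values are invented for illustration and are not the paper's formulation details.

```python
# Maximize a^T i + b subject to per-source bounds (local constraints) and a
# shared budget (global constraint). All numbers below are toy assumptions.
import numpy as np
from scipy.optimize import linprog

a = np.array([0.8, 0.5, 0.3, 0.9])      # noise sensitivity of each current source
b = 0.02                                # affine offset from already-solved time steps
i_max = np.array([1.0, 2.0, 1.5, 0.5])  # local constraint: per-source peak current

# Global constraint: the four sources together may not exceed 3.0 units.
A_ub = np.ones((1, 4))
b_ub = np.array([3.0])

# linprog minimizes, so negate `a` to maximize the affine noise expression.
res = linprog(-a, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0.0, hi) for hi in i_max], method="highs")
worst_case_noise = b + (-res.fun)
print(f"worst-case noise bound: {worst_case_noise:.4f}")
```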
{"title":"Vectorless verification of RLC power grids with transient current constraints","authors":"Xuanxing Xiong, Jia Wang","doi":"10.1109/ICCAD.2011.6105384","DOIUrl":"https://doi.org/10.1109/ICCAD.2011.6105384","url":null,"abstract":"Vectorless power grid verification is a powerful method that evaluates worst-case voltage noises without detailed current waveforms using optimization techniques. It is extremely challenging when considering RLC power grids since inductors are difficult to tackle and multiple time steps should be evaluated after the discretization of the system equation. In this paper, we study integrated RLC power grids with both VDD and GND networks and rigorously prove that their vectorless verification can be decomposed into two sub-problems — the well-studied transient power grid analysis problem and an optimization problem that maximizes an affine function of currents under current constraints. We further introduce transient constraints to restrict the waveform of each current source for realistic scenarios and design the RLCVN algorithm to solve the vectorless verification problem of RLC power grids. Results confirm that our algorithm is an effective approach for practical RLC power grid verification, and the proposed transient constraints make the noise estimations more realistic.","PeriodicalId":6357,"journal":{"name":"2011 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"274 1","pages":"548-554"},"PeriodicalIF":0.0,"publicationDate":"2011-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76785866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The role of EDA in digital print automation and infrastructure optimization
Pub Date: 2011-11-07 | DOI: 10.1109/ICCAD.2011.6105320 | Pages: 158-161
K. Chakrabarty, G. Dispoto, Rick Bellamy, Jun Zeng
Digital print provides unique opportunities to automate the printing process, revamp production steps, better utilize resources, and enhance productivity. This paper highlights the key role that electronic design automation (EDA) can play in the maturation of the digital print automation field. It first describes basic concepts in digital printing and digital commercial print services. Next, it describes the application of discrete-event simulation to policy management and performance evaluation, and of dynamic resource management using EDA flows based on scheduling and resource binding.
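As a rough illustration of the discrete-event-simulation use case, the sketch below dispatches print jobs to the earliest-available press and reports completion times. The job list, service times, press count, and FIFO policy are assumptions for illustration, not details from the paper.

```python
# A minimal discrete-event sketch: jobs arrive and are served by a fixed pool
# of presses under FIFO dispatch; completion times feed policy evaluation.
import heapq

def simulate_print_shop(jobs, num_presses=2):
    """jobs: list of (arrival_time, print_duration); returns completion times."""
    press_free_at = [0.0] * num_presses   # when each press next becomes idle
    heapq.heapify(press_free_at)
    completions = []
    for arrival, duration in sorted(jobs):
        start = max(arrival, heapq.heappop(press_free_at))  # earliest idle press
        finish = start + duration
        heapq.heappush(press_free_at, finish)
        completions.append(finish)
    return completions

print(simulate_print_shop([(0.0, 5.0), (1.0, 3.0), (2.0, 4.0)], num_presses=2))
```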
{"title":"The role of EDA in digital print automation and infrastructure optimization","authors":"K. Chakrabarty, G. Dispoto, Rick Bellamy, Jun Zeng","doi":"10.1109/ICCAD.2011.6105320","DOIUrl":"https://doi.org/10.1109/ICCAD.2011.6105320","url":null,"abstract":"The use of digital print provides unique opportunities to automate the printing process, revamp production steps, better utilize resources, and enhance productivity. This paper highlights the key role that electronic design automation (EDA) can play in the maturation of the digital print automation field. It first describes basic concepts in digital printing and digital commercial print services. Next it describes the application of discrete-event simulation to policy management and performance evaluation, and dynamic resource management using EDA flows based on scheduling and resource binding.","PeriodicalId":6357,"journal":{"name":"2011 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"42 1","pages":"158-161"},"PeriodicalIF":0.0,"publicationDate":"2011-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86046628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gate sizing and device technology selection algorithms for high-performance industrial designs
Pub Date: 2011-11-07 | DOI: 10.1109/ICCAD.2011.6105409 | Pages: 724-731
Muhammet Mustafa Ozdal, S. Burns, Jiang Hu
It is becoming increasingly important to achieve high performance with as little power as possible. In this paper, we study the gate sizing and device technology selection problem for today's industrial designs. We first outline the typical practical problems that make it difficult to apply traditional algorithms to high-performance industrial designs. Then, we propose a Lagrangian relaxation (LR) based formulation that decouples timing analysis from optimization without loss of accuracy. We also propose a graph model that accurately captures discrete cell type characteristics based on library data. We model the relaxed Lagrangian subproblem as a discrete graph problem and propose algorithms to solve it. In our experiments, we demonstrate the importance of using the signoff timing engine to guide the optimization. Compared to a state-of-the-art industrial optimization flow, our algorithms obtain up to 38% leakage power reduction and better overall timing for real high-performance microprocessor blocks.
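To make the LR idea concrete, here is a toy sketch of a Lagrangian relaxation loop for discrete sizing on a single two-gate path: a multiplier prices path delay, and the relaxed subproblem picks each gate's variant independently. The cell library numbers, step size, and single-path timing model are illustrative assumptions; the paper's graph model and signoff-driven timer are far richer.

```python
# Cell library: variant -> (delay, leakage) per gate; purely illustrative data.
LIBRARY = {
    "g1": {"x1": (4.0, 1.0), "x2": (3.0, 2.5), "x4": (2.2, 5.0)},
    "g2": {"x1": (5.0, 1.2), "x2": (3.8, 2.8), "x4": (2.9, 5.5)},
}
T_REQ = 7.0  # required arrival time at the path endpoint

def solve_relaxed(lam):
    """Lagrangian subproblem: per gate, pick the variant minimizing
    leakage + lam * delay (the multiplier prices the gate's delay)."""
    return {gate: min(variants, key=lambda v: variants[v][1] + lam * variants[v][0])
            for gate, variants in LIBRARY.items()}

lam = 0.0
for _ in range(50):  # subgradient ascent on the single path multiplier
    sizes = solve_relaxed(lam)
    path_delay = sum(LIBRARY[g][sizes[g]][0] for g in LIBRARY)
    lam = max(0.0, lam + 0.1 * (path_delay - T_REQ))  # raise the price if timing fails

print(sizes, path_delay, lam)
```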
{"title":"Gate sizing and device technology selection algorithms for high-performance industrial designs","authors":"Muhammet Mustafa Ozdal, S. Burns, Jiang Hu","doi":"10.1109/ICCAD.2011.6105409","DOIUrl":"https://doi.org/10.1109/ICCAD.2011.6105409","url":null,"abstract":"It is becoming more and more important to design high performance designs with as low power as possible. In this paper, we study the gate sizing and device technology selection problem for today's industrial designs. We first outline the typical practical problems that make it difficult to use the traditional algorithms on high-performance industrial designs. Then, we propose a Lagrangian Relaxation (LR) based formulation that decouples timing analysis from optimization without resulting in loss of accuracy. We also propose a graph model that accurately captures discrete cell type characteristics based on library data. We model the relaxed Lagrangian subproblem as a discrete graph problem, and propose algorithms to solve it. In our experiments, we demonstrate the importance of using the signoff timing engine to guide the optimization. Compared to a state-of-the art industrial optimization flow, we show that our algorithms can obtain up to 38% leakage power reductions and better overall timing for real high-performance microprocessor blocks.","PeriodicalId":6357,"journal":{"name":"2011 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"36 1","pages":"724-731"},"PeriodicalIF":0.0,"publicationDate":"2011-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86227801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Delay optimization using SOP balancing
Pub Date: 2011-11-07 | DOI: 10.1109/ICCAD.2011.6105357 | Pages: 375-382
A. Mishchenko, R. Brayton, Stephen Jang, Victor N. Kravets
Reducing the delay of a digital circuit is an important topic in logic synthesis for standard cells and LUT-based FPGAs. This paper presents a simple, fast, and very efficient synthesis algorithm to improve delay after technology mapping. The algorithm scales to large designs and is implemented in a publicly available technology mapper; the code is available online. Experimental results on industrial designs show that the method can improve delay after standard cell mapping by 30% with a 2.4% increase in area, or by 41% with a 3.9% increase in area, on top of a high-effort synthesis and mapping flow. In a separate experiment, the algorithm was used as part of a complete industrial standard cell design flow, leading to improvements in area and delay after place-and-route. In yet another experiment, the algorithm was applied before FPGA mapping into 4-LUTs, resulting in a 16% reduction in logic levels at the cost of a 9% area increase on top of a high-effort mapping.
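The balancing idea can be sketched with the classic arrival-time-driven tree construction: pairing the two earliest-arriving signals Huffman-style keeps late inputs close to the output, which is the core of balancing the AND/OR trees of an SOP. The unit gate delay and the arrival times below are illustrative assumptions, not the paper's delay model.

```python
# Delay-balanced decomposition of a wide gate into a 2-input gate tree.
import heapq

def balance_tree(arrival_times, gate_delay=1.0):
    """Return the output arrival time of a delay-balanced 2-input gate tree."""
    heap = list(arrival_times)
    heapq.heapify(heap)
    while len(heap) > 1:
        a = heapq.heappop(heap)   # the two earliest-arriving signals ...
        b = heapq.heappop(heap)
        heapq.heappush(heap, max(a, b) + gate_delay)  # ... meet at one gate
    return heap[0]

print(balance_tree([0.0, 0.0, 0.0, 3.0]))  # 4.0: the late input sees one level
```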
{"title":"Delay optimization using SOP balancing","authors":"A. Mishchenko, R. Brayton, Stephen Jang, Victor N. Kravets","doi":"10.1109/ICCAD.2011.6105357","DOIUrl":"https://doi.org/10.1109/ICCAD.2011.6105357","url":null,"abstract":"Reducing delay of a digital circuit is an important topic in logic synthesis for standard cells and LUT-based FPGAs. This paper presents a simple, fast, and very efficient synthesis algorithm to improve the delay after technology mapping. The algorithm scales to large designs and is implemented in a publicly-available technology mapper. The code is available online. Experimental results on industrial designs show that the method can improve delay after standard cell mapping by 30% with the increase in area 2.4%, or by 41% with the increase in area by 3.9%, on top of a high-effort synthesis and mapping flow. In a separate experiment, the algorithm was used as part of a complete industrial standard cell design flow, leading to improvements in area and delay after place-and-route. In yet another experiment, the algorithm was applied before FPGA mapping into 4-LUTs, resulting in 16% logic level reduction at the cost of 9% area increase on top of a high-effort mapping.","PeriodicalId":6357,"journal":{"name":"2011 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"1111 1","pages":"375-382"},"PeriodicalIF":0.0,"publicationDate":"2011-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86474071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The STeTSiMS STT-RAM simulation and modeling system
Pub Date: 2011-11-07 | DOI: 10.1109/ICCAD.2011.6105348 | Pages: 318-325
Clinton Wills Smullen IV, Anurag Nigam, S. Gurumurthi, M. Stan
There is growing interest in emerging non-volatile memory technologies such as Phase-Change Memory, Memristors, and Spin-Transfer Torque RAM (STT-RAM). STT-RAM, in particular, is experiencing rapid development that can be difficult for memory systems researchers to take advantage of. What is needed are techniques that enable designers to explore the potential of recent STT-RAM designs and adjust the performance without needing a detailed understanding of the physics. In this paper, we present the STeTSiMS STT-RAM Simulation and Modeling System to assist memory systems researchers. After providing background on the operation of STT-RAM magnetic tunnel junctions (MTJs), we demonstrate how to fit three different published MTJ models to our model and normalize their characteristics with respect to common metrics. The high-speed switching behavior of the designs is evaluated using macromagnetic simulations. We have also added a first-order model for STT-RAM memory arrays to the CACTI memory modeling tool, which we then use to evaluate the performance, energy consumption, and area for: (i) a high-performance cache, (ii) a high-capacity cache, and (iii) a high-density memory.
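As background for the MTJ fitting step, the sketch below evaluates the widely used thermal-activation relation between switching current and write pulse width; the paper itself fits richer published models and covers the high-speed regime with macromagnetic simulation. The critical current, thermal stability factor, and attempt time here are illustrative assumptions, not the paper's fitted parameters.

```python
# Thermal-regime MTJ switching current vs. write pulse width:
# I_c(t) = I_c0 * (1 - ln(t / tau0) / delta), valid for t >> tau0.
import math

def switching_current(pulse_ns, ic0_ua=60.0, delta=40.0, tau0_ns=1.0):
    """Switching current in uA for a given pulse width in ns (toy parameters)."""
    return ic0_ua * (1.0 - math.log(pulse_ns / tau0_ns) / delta)

for t in (3.0, 10.0, 30.0, 100.0):
    print(f"{t:6.1f} ns -> {switching_current(t):5.1f} uA")
```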
{"title":"The STeTSiMS STT-RAM simulation and modeling system","authors":"IV ClintonWillsSmullen, Anurag Nigam, S. Gurumurthi, M. Stan","doi":"10.1109/ICCAD.2011.6105348","DOIUrl":"https://doi.org/10.1109/ICCAD.2011.6105348","url":null,"abstract":"There is growing interest in emerging non-volatile memory technologies such as Phase-Change Memory, Memristors, and Spin-Transfer Torque RAM (STT-RAM). STT-RAM, in particular, is experiencing rapid development that can be difficult for memory systems researchers to take advantage of. What is needed are techniques that enable designers to explore the potential of recent STT-RAM designs and adjust the performance without needing a detailed understanding of the physics. In this paper, we present the STeTSiMS STT-RAM Simulation and Modeling System to assist memory systems researchers. After providing background on the operation of STT-RAM magnetic tunnel junctions (MTJs), we demonstrate how to fit three different published MTJ models to our model and normalize their characteristics with respect to common metrics. The high-speed switching behavior of the designs is evaluated using macromagnetic simulations. We have also added a first-order model for STT-RAM memory arrays to the CACTI memory modeling tool, which we then use to evaluate the performance, energy consumption, and area for: (i) a high-performance cache, (ii) a high-capacity cache, and (iii) a high-density memory.","PeriodicalId":6357,"journal":{"name":"2011 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"2 1","pages":"318-325"},"PeriodicalIF":0.0,"publicationDate":"2011-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79088538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A SimPLR method for routability-driven placement
Pub Date: 2011-11-07 | DOI: 10.1109/ICCAD.2011.6105307 | Pages: 67-73
Myung-Chul Kim, Jin Hu, Dongjin Lee, I. Markov
Highly-optimized placements may lead to irreparable routing congestion due to inadequate models of modern interconnect stacks and the impact of partial routing obstacles. Additional challenges in routability-driven placement include scalability to large netlists and limiting the complexity of software integration. Addressing these challenges, we develop lookahead routing to give the placer advance, firsthand knowledge of trouble spots, not distorted by crude congestion models. We also extend global placement to (i) spread cells apart in congested areas, and (ii) move cells together in less-congested areas to ensure short, routable interconnects and moderate runtime. While previous work adds isolated steps to global placement, our SIMultaneous PLace-and-Route tool SimPLR integrates a layer- and via-aware global router into a leading-edge, force-directed placer. The complexity of integration is mitigated by careful design of simple yet effective optimizations. On the ISPD 2011 Contest Benchmark Suite, with the official evaluation protocol, SimPLR outperforms every contestant on every benchmark.
{"title":"A SimPLR method for routability-driven placement","authors":"Myung-Chul Kim, Jin Hu, Dongjin Lee, I. Markov","doi":"10.1109/ICCAD.2011.6105307","DOIUrl":"https://doi.org/10.1109/ICCAD.2011.6105307","url":null,"abstract":"Highly-optimized placements may lead to irreparable routing congestion due to inadequate models of modern interconnect stacks and the impact of partial routing obstacles. Additional challenges in routability-driven placement include scalability to large netlists and limiting the complexity of software integration. Addressing these challenges, we develop lookahead routing to give the placer advance, firsthand knowledge of trouble spots, not distorted by crude congestion models. We also extend global placement to (i) spread cells apart in congested areas, and (ii) move cells together in less-congested areas to ensure short, routable interconnects and moderate runtime. While previous work adds isolated steps to global placement, our SIMultaneous PLace-and-Route tool SimPLR integrates a layer- and via-aware global router into a leading-edge, force-directed placer. The complexity of integration is mitigated by careful design of simple yet effective optimizations. On the ISPD 2011 Contest Benchmark Suite, with the official evaluation protocol, SimPLR outperforms every contestant on every benchmark.","PeriodicalId":6357,"journal":{"name":"2011 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"129 1","pages":"67-73"},"PeriodicalIF":0.0,"publicationDate":"2011-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81823062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Congestion analysis for global routing via integer programming
Pub Date: 2011-11-07 | DOI: 10.5555/2132325.2132386 | Pages: 256-262
H. Shojaei, A. Davoodi, Jeff T. Linderoth
This work presents a fast and flexible framework for congestion analysis at the global routing stage that captures various factors contributing to congestion in modern designs. The framework is a practical realization of a proposed parameterized integer programming formulation. The formulation minimizes overflow inside a set of regions covering the layout, where the regions are defined by an input resolution parameter. A resolution lower than that of the global routing grid-graph creates regions larger than the global cells; at the maximum resolution, the formulation simplifies to minimizing the total overflow, which has traditionally been used as a metric to evaluate routability. A novel contribution of this work is to demonstrate that, for a small analysis time budget, regional minimization of overflow at a lower resolution identifies routing congestion hotspot locations more accurately than minimizing the total overflow, and thus generates a more accurate congestion heatmap. Other contributions include several new ideas for a practical realization of the formulation on industry-sized benchmark instances, some of which are also improvements to existing global routing procedures. This work also describes coalesCgrip, a simpler variation of our framework that was used to evaluate the ISPD 2011 contest.
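The resolution parameter can be illustrated without a solver: aggregate per-g-cell overflow over regions whose size is set by the resolution knob; at maximum resolution the aggregate reduces to the classical total overflow. The demand and capacity grids below are invented toy data, not benchmark instances.

```python
# Regional overflow at a chosen resolution: overflow is demand minus capacity
# clipped at zero, summed over (region x region) tiles of g-cells.
import numpy as np

def regional_overflow(demand, capacity, region):
    """Sum per-g-cell routing overflow inside (region x region) tiles."""
    over = np.clip(demand - capacity, 0, None)
    h, w = over.shape
    return over.reshape(h // region, region, w // region, region).sum(axis=(1, 3))

rng = np.random.default_rng(0)
demand = rng.integers(0, 12, size=(6, 6))
capacity = np.full((6, 6), 8)

print(regional_overflow(demand, capacity, region=1).sum())  # total overflow
print(regional_overflow(demand, capacity, region=3))        # coarse hotspot map
```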
{"title":"Congestion analysis for global routing via integer programming","authors":"H. Shojaei, A. Davoodi, Jeff T. Linderoth","doi":"10.5555/2132325.2132386","DOIUrl":"https://doi.org/10.5555/2132325.2132386","url":null,"abstract":"This work presents a fast and flexible framework for congestion analysis at the global routing stage. It captures various factors that contribute to congestion in modern designs. The framework is a practical realization of a proposed parameterized integer programming formulation. The formulation minimizes overflow inside a set of regions covering the layout which is defined by an input resolution parameter. A resolution lower than the global routing grid-graph creates regions that are larger in size than the global-cells. The maximum resolution case simplifies the formulation to minimizing the total overflow which has been traditionally used as a metric to evaluate routability. A novel contribution of this work is to demonstrate that for a small analysis time budget, regional minimization of overflow with a lower resolution allows a more accurate identification of the routing congestion hotspot locations, compared to minimizing the total overflow. It allows generating a more accurate congestion heatmap. The other contributions include several new ideas for a practical realization of the formulation for industry-sized benchmark instances some of which are also improvements to existing global routing procedures. This work also describes coalesCgrip, a simpler variation of our framework which was used to evaluate the ISPD 2011 contest.","PeriodicalId":6357,"journal":{"name":"2011 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"35 1","pages":"256-262"},"PeriodicalIF":0.0,"publicationDate":"2011-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86531622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On proving the efficiency of alternative RF tests
Pub Date: 2011-11-07 | DOI: 10.1109/ICCAD.2011.6105415 | Pages: 762-767
Nathan Kupp, H. Stratigopoulos, P. Drineas, Y. Makris
The deployment of alternative, low-cost RF test methods in industry has been, to date, rather limited. This is due to the potentially impaired ability to identify device pass/fail labels when departing from traditional specification testing. When relying on alternative tests, pass/fail labels must be derived indirectly through new test limits defined for the alternative tests, which may incur error in the form of test escapes or yield loss. Clearly, estimating these test metrics as early as possible in the test development process is key to the success of an alternative test approach. In this work, we employ a test metrics estimation technique based on non-parametric kernel density estimation to obtain such early estimates and, for the first time, demonstrate a real-world case study of test metric estimation efficiency at parts-per-million levels. To achieve this, we employ a set of more than 1 million RF devices fabricated by Texas Instruments, which have been tested with both traditional specification tests and alternative, low-cost On-chip RF Built-in Tests, or “ORBiTs”.
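A minimal sketch of the estimation idea: fit a kernel density estimate to joint (specification, alternative) measurements, resample a large synthetic population, and count test escapes and yield loss in ppm. The synthetic training data and both pass limits are assumptions for illustration, not the Texas Instruments dataset or the paper's exact estimator.

```python
# KDE-based early estimation of test escape and yield loss, in ppm.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
n = 2000
spec = rng.normal(0.0, 1.0, n)          # specification measurement
orbit = spec + rng.normal(0.0, 0.3, n)  # correlated alternative-test measurement

kde = gaussian_kde(np.vstack([spec, orbit]))  # joint density estimate
s, o = kde.resample(2_000_000)                # synthetic device population

spec_pass = np.abs(s) < 2.0  # datasheet limit (assumed)
alt_pass = np.abs(o) < 2.0   # alternative-test limit (assumed)

test_escape = np.mean(~spec_pass & alt_pass) * 1e6  # bad devices shipped, ppm
yield_loss = np.mean(spec_pass & ~alt_pass) * 1e6   # good devices rejected, ppm
print(f"test escape: {test_escape:.0f} ppm, yield loss: {yield_loss:.0f} ppm")
```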
{"title":"On proving the efficiency of alternative RF tests","authors":"Nathan Kupp, H. Stratigopoulos, P. Drineas, Y. Makris","doi":"10.1109/ICCAD.2011.6105415","DOIUrl":"https://doi.org/10.1109/ICCAD.2011.6105415","url":null,"abstract":"The deployment of alternative, low-cost RF test methods in industry has been, to date, rather limited. This is due to the potentially impaired ability to identify device pass/fail labels when departing from traditional specification test. By relying on alternative tests, pass/fail labels must be derived indirectly through new test limits defined for the alternative tests, which may incur error in the form of test escapes or yield loss. Clearly, estimating these test metrics as early as possible in the test development process is key to the success of an alternative test approach. In this work, we employ a test metrics estimation technique based on non-parametric kernel density estimation to obtain such early estimates, and, for the first time, demonstrate a real-world case study of test metric estimation efficiency at parts-per-million levels. To achieve this, we employ a set of more than 1 million RF devices fabricated by Texas Instruments, which have been tested with both traditional specification tests as well as alternative, low-cost On-chip RF Built-in Tests, or “ORBiTs”.","PeriodicalId":6357,"journal":{"name":"2011 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"14 1","pages":"762-767"},"PeriodicalIF":0.0,"publicationDate":"2011-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86908684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}