Design errors (or bugs) inadvertently escape the pre-silicon verification process. Before committing to a re-spin, it is expected that the escaped bugs have been identified during post-silicon validation. This is, however, hindered by the presence of blocking bugs: an erroneous module inhibits the search for bugs in other parts of the chip that process data received from it. In this paper we discuss how to design a novel embedded debug module that can bypass blocking bugs and aid the designer in validating the first silicon.
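To make the idea concrete, here is a minimal behavioral sketch, assuming a hypothetical debug wrapper that substitutes pre-computed known-good responses for the erroneous module's outputs so that downstream logic can still be exercised. All names and the golden-response mechanism are illustrative assumptions, not the paper's actual design.

```python
class BypassWrapper:
    """Hypothetical debug wrapper around an erroneous module (sketch only).

    When bypass is enabled, downstream modules receive pre-computed
    known-good responses instead of the module's possibly erroneous
    outputs, so the search for bugs downstream is no longer blocked.
    """

    def __init__(self, module, golden_responses):
        self.module = module            # the erroneous module-under-debug
        self.golden = golden_responses  # stimulus -> known-good response
        self.bypass_enabled = False

    def output(self, stimulus):
        if self.bypass_enabled and stimulus in self.golden:
            return self.golden[stimulus]    # mask the blocking bug
        return self.module(stimulus)        # normal (possibly buggy) path


# Usage: a buggy adder would otherwise block validation of downstream logic.
buggy_adder = lambda ab: ab[0] + ab[1] + (1 if ab == (3, 5) else 0)  # bug at (3, 5)
wrapper = BypassWrapper(buggy_adder, golden_responses={(3, 5): 8})
wrapper.bypass_enabled = True
assert wrapper.output((3, 5)) == 8  # downstream now sees correct data
```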
{"title":"On Bypassing Blocking Bugs during Post-Silicon Validation","authors":"Ehab Anis Daoud, N. Nicolici","doi":"10.1109/ETS.2008.29","DOIUrl":"https://doi.org/10.1109/ETS.2008.29","url":null,"abstract":"Design errors (or bugs) inadvertently escape the pre- silicon verification process. Before committing to a re-spin, it is expected that the escaped bugs have been identified during post-silicon validation. This is however hindered by the presence of blocking bugs in one erroneous module that inhibit the search for bugs in other parts of the chip that process data received from the erroneous module. In this paper we discuss how to design a novel embedded debug module that can bypass blocking bugs and aid the designer in validating the first silicon.","PeriodicalId":334529,"journal":{"name":"2008 13th European Test Symposium","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125677391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose an on-line testing approach for the control logic of high-performance microprocessors. Rather than adding information redundancy (in the form of error-detecting codes), we propose to look for the information redundancy (referred to as function-inherent codes) that the microprocessor control logic may inherently possess due to its required functionality. We show that this allows us to achieve on-line testing with significant savings in area and power consumption, and with lower or comparable impact on system performance and design costs, compared to alternative, traditional on-line testing approaches.
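As a toy illustration (not the paper's actual checker), suppose a decoded control bus is one-hot by construction. That property is information redundancy the logic already carries, so an on-line checker only needs to verify it, with no added error-detecting code:

```python
def one_hot_checker(ctrl_bits):
    """On-line check of a function-inherent code: a decoded control bus
    that is one-hot by construction. Any single bit-flip violates the
    property and is flagged without adding an error-detecting code."""
    return sum(ctrl_bits) == 1

assert one_hot_checker([0, 0, 1, 0])      # fault-free cycle
assert not one_hot_checker([0, 1, 1, 0])  # bit-flip detected
```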
{"title":"Function-Inherent Code Checking: A New Low Cost On-Line Testing Approach for High Performance Microprocessor Control Logic","authors":"C. Metra, Daniele Rossi, M. Omaña, A. Jas, R. Galivanche","doi":"10.1109/ETS.2008.24","DOIUrl":"https://doi.org/10.1109/ETS.2008.24","url":null,"abstract":"We propose an on-line testing approach for the control logic of high performance microprocessors. Rather than adding information redundancy (in the form of error detecting codes), we propose to look for the information redundancy (referred to as function-inherent codes) that the microprocessor control logic may inherently have, due to its required functionality. We will show that this allows to achieve on-line testing at significant savings in terms of area and power consumption, and with lower or comparable impact on system performance and design costs, compared to alternate, traditional on-line testing approaches.","PeriodicalId":334529,"journal":{"name":"2008 13th European Test Symposium","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122424323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper describes a tunable transient filter (TTF) design for soft error rate reduction in combinational logic circuits. TTFs can be inserted into combinational circuits to suppress propagated single-event upsets (SEUs) before they can be captured in latches/flip-flops. TTFs are tuned by adjusting the maximum width of the propagated SEU that can be suppressed. TTFs require only 6-14 transistors, making them an attractive, cost-effective option for reducing the soft error rate in combinational circuits. A global optimization approach based on geometric programming that integrates TTF insertion with dual-VDD assignment and gate sizing is described. Simulation results for a 70 nm process technology indicate that a 17-48X reduction in the soft error rate can be achieved with this approach.
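The filtering behavior can be captured in a simple behavioral model, assuming (for illustration only) threshold semantics in which any propagated pulse narrower than the tuned filter width is suppressed; the pulse-width population below is made up, not from the paper.

```python
import random

def survives_filter(pulse_width_ps, filter_width_ps):
    """Behavioral model of a tunable transient filter: a propagated SEU
    pulse narrower than the tuned filter width is suppressed."""
    return pulse_width_ps > filter_width_ps

def ser_reduction(pulse_widths_ps, filter_width_ps):
    """Fraction of transient pulses suppressed for a given tuning."""
    suppressed = sum(1 for w in pulse_widths_ps
                     if not survives_filter(w, filter_width_ps))
    return suppressed / len(pulse_widths_ps)

# Illustrative pulse-width population (ps).
random.seed(0)
pulses = [random.gauss(120, 40) for _ in range(10_000)]
print(ser_reduction(pulses, filter_width_ps=150))  # ~0.77 of pulses filtered
```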
{"title":"Tunable Transient Filters for Soft Error Rate Reduction in Combinational Circuits","authors":"Q. Zhou, M. Choudhury, K. Mohanram","doi":"10.1109/ETS.2008.39","DOIUrl":"https://doi.org/10.1109/ETS.2008.39","url":null,"abstract":"This paper describes a tunable transient filter (TTF) design for soft error rate reduction in combinational logic circuits. TTFs can be inserted into combinational circuits to suppress propagated single- event upsets (SEUs) before they can be captured in latches/flip- flops. TTFs are tuned by adjusting the maximum width of the propagated SEU that can be suppressed. TTFs require 6-14 transistors, making them an attractive cost-effective option to reduce the soft error rate in combinational circuits. A global optimization approach based on geometric programming that integrates TTF insertion with dual-VoD and gate sizing is described. Simulation results for the 70 nm process technology indicate that a 17-48X reduction in the soft error rate can be achieved with this approach.","PeriodicalId":334529,"journal":{"name":"2008 13th European Test Symposium","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129843174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a scan-based DFT technique that uses a limited number of enhanced scan cells to reduce the volume of delay test patterns and improve delay fault coverage. The proposed method controls a small number of enhanced scan cells with the skewed-load approach and the remaining scan cells with the broadside approach. Inserting enhanced scan cells reduces test data volume and ATPG run time while improving delay fault coverage, and the hardware overhead of the proposed method is very low. The scan inputs where enhanced scan cells are inserted are selected by gain functions, which combine controllability costs and usefulness measures. A regular ATPG can be used to generate transition delay test patterns for the proposed method. Experimental results show that test data volume is reduced by up to 65% and fault coverage is improved by up to about 6%.
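The selection step can be sketched as a greedy ranking. The paper builds gain functions from controllability costs and usefulness measures; the product form, the cell names and the numbers below are assumptions for illustration, not the paper's formula.

```python
def select_enhanced_cells(cells, budget):
    """Greedy selection of scan inputs to upgrade to enhanced scan cells.

    Sketch only: gain is assumed to be controllability cost times
    usefulness. `cells` maps cell name -> (controllability_cost,
    usefulness); `budget` is the number of enhanced cells allowed."""
    gain = {c: cost * usefulness for c, (cost, usefulness) in cells.items()}
    return sorted(gain, key=gain.get, reverse=True)[:budget]

cells = {"ff_a": (0.9, 0.8), "ff_b": (0.2, 0.9), "ff_c": (0.7, 0.7)}
print(select_enhanced_cells(cells, budget=2))  # ['ff_a', 'ff_c']
```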
{"title":"Low Overhead Partial Enhanced Scan Technique for Compact and High Fault Coverage Transition Delay Test Patterns","authors":"Seongmoon Wang, Wenlong Wei","doi":"10.1109/ETS.2008.12","DOIUrl":"https://doi.org/10.1109/ETS.2008.12","url":null,"abstract":"This paper presents a scan-based DFT technique that uses limited number of enhanced scan cells to reduce volume of delay test patterns and improve delay fault coverage. The proposed method controls a small number of enhanced scan cells by the skewed-load approach and the rest of scan cells by the broadside approach. Inserting enhanced scan cells reduces test data volume and ATPG run time and improves delay fault coverage. Hardware overhead for the proposed method is very low. The scan inputs where enhanced scan cells are inserted are selected by gain functions, which consist of controllability costs and usefulness measures. A regular ATPG can be used to generate transition delay test patterns for the proposed method. Experimental results show that test data volume is reduced by up to 65% and fault coverage is improved by up to about 6%.","PeriodicalId":334529,"journal":{"name":"2008 13th European Test Symposium","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133629692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hundreds of memory instances, each with its own frequency of operation, have traditionally ruled out sharing test structures amongst the embedded memories of a typical SoC. This paper discusses the techniques and flow for sharing an embedded memory BIST for the at-speed testing of multiple memories on such an SoC.
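A minimal sketch of one possible sharing flow, assuming memories are grouped by clock domain so a single self-programmable controller can be re-programmed per group and run each group at speed. The grouping key and the data are illustrative assumptions; a production flow would also weigh memory geometry, port counts and placement.

```python
from collections import defaultdict

def schedule_shared_bist(memories):
    """Group memories so one self-programmable BIST controller can test
    each group at speed. Grouping by clock frequency is an assumption of
    this sketch, not the paper's exact criterion."""
    groups = defaultdict(list)
    for name, clock_mhz in memories:
        groups[clock_mhz].append(name)
    # Each group corresponds to one programming step of the controller.
    return sorted(groups.items(), reverse=True)

mems = [("m0", 500), ("m1", 500), ("m2", 250), ("m3", 250), ("m4", 125)]
for clk, names in schedule_shared_bist(mems):
    print(f"program BIST @ {clk} MHz -> test {names}")
```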
{"title":"Self-Programmable Shared BIST for Testing Multiple Memories","authors":"Swapnil Bahl, Vishal Srivastava","doi":"10.1109/ETS.2008.16","DOIUrl":"https://doi.org/10.1109/ETS.2008.16","url":null,"abstract":"Hundreds of memory instances and their frequency of operation have ruled out the possibility of sharing test structures amongst the embedded memories. This paper discusses the techniques and flow for sharing an embedded memory BIST for the at- speed testing of multiple memories on a typical SoC.","PeriodicalId":334529,"journal":{"name":"2008 13th European Test Symposium","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134518697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prior effect-cause-based chain diagnosis algorithms suffer from accuracy and performance problems when multiple stuck-at faults exist on the same scan chain. In this paper, we propose new chain diagnosis algorithms based on dominant fault pairs to enhance diagnosis accuracy and efficiency. Several heuristic techniques are proposed: (1) double candidate range calculation, (2) dynamic learning, and (3) two-dimensional space linear search. Experimental results illustrate the effectiveness and efficiency of the proposed chain diagnosis algorithms.
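To see what a candidate range is, consider the classic single-fault case, which the paper's double candidate range calculation for fault pairs generalizes. A sketch, assuming a stuck-at-0 cell and the convention that every bit shifted out through the faulty cell is forced to 0:

```python
def candidate_range(expected, observed):
    """Bound the position of a single stuck-at-0 scan cell from one
    unload. Convention assumed by this sketch: cell 0 is at the scan-in
    side, so bits from cells at or before the fault shift through it and
    come out as 0, while bits from cells after it come out intact."""
    zeros = [i for i, (e, o) in enumerate(zip(expected, observed))
             if e == 1 and o == 0]
    ones = [i for i, (e, o) in enumerate(zip(expected, observed))
            if e == 1 and o == 1]
    if not zeros:
        return None                      # this pattern gives no bound
    lower = max(zeros)                   # fault is at or after this cell
    later_ones = [i for i in ones if i > lower]
    upper = min(later_ones) - 1 if later_ones else len(expected) - 1
    return (lower, upper)

# Chain of 8 cells, stuck-at-0 at cell 4: bits 0..4 come out as 0.
print(candidate_range(expected=[1] * 8, observed=[0] * 5 + [1] * 3))  # (4, 4)
```

In practice many patterns are applied and their ranges intersected; with two interacting faults on the same chain, the paper tracks a pair of such ranges at once.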
{"title":"Diagnose Multiple Stuck-at Scan Chain Faults","authors":"Yu Huang, Wu-Tung Cheng, Ruifeng Guo","doi":"10.1109/ETS.2008.20","DOIUrl":"https://doi.org/10.1109/ETS.2008.20","url":null,"abstract":"Prior effect-cause based chain diagnosis algorithms suffer from accuracy and performance problems when multiple stuck-at faults exist on the same scan chain. In this paper, we propose new chain diagnosis algorithms based on dominant fault pair to enhance diagnosis accuracy and efficiency. Several heuristic techniques are proposed, which include (1) double candidate range calculation, (2) dynamic learning and (3) two- dimensional space linear search. The experimental results illustrate the effectiveness and efficiency of the proposed chain diagnosis algorithms.","PeriodicalId":334529,"journal":{"name":"2008 13th European Test Symposium","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133006840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. Dr. Gordon E. Moore's Law - integration capacity doubles every two years - is under severe pressure, and leakage power is the major factor in the challenge to keep Moore's Law alive and well. As process technologies continue to shrink, and feature demands continue to increase, more and more capabilities are being pushed into smaller and smaller packages. But are we finally reaching the point where power density limitations make this trend no longer sustainable? The test environment has long been known to be more severe, power-wise, than the functional environment. To that end, a number of new approaches need to be taken in the test area to control not only scan-in power but also capture power. What advanced techniques are in use today, and on the horizon, to address this? Are we limited to hardware techniques, or can these power limitation issues be addressed with smarter software, such as automatic test pattern generation tools? And how do we handle verification of these complex implementations? How do these new demands affect our ability to compress test time and test data volume? This paper explores possible methods for improving the "power capacity" of power-sensitive designs from both the functional and test perspectives.
{"title":"The Future Is Low Power and Test","authors":"T. Williams","doi":"10.1109/ETS.2008.37","DOIUrl":"https://doi.org/10.1109/ETS.2008.37","url":null,"abstract":"Summary form only given. Dr. Gordon E. Moore's Law - integration's capacity doubles every two years - is under server pressure. Leakage power is the major factor in the challenge to keep Moore's Law alive and well. As process technologies continue to shrink, and feature demands continue to increase, more and more capabilities are being pushed into smaller and smaller packages. But are we finally reaching the point where power density limitations make this trend no longer sustainable? The test environment has long been known to be more severe power wise than the functional environment. To that end, a number of new approaches need to be taken in the test area to control not only scan in power but also the capture power. What advanced techniques are in use today, and on the horizon, to address this? Are we limited only to hardware techniques, or can these power limitation issues be addressed with smarter software, such as automatic test pattern generation tools? And how do we handle verification of these complex implementations? How do these new demands affect our ability to compress test time and test data volume? This paper explores possible methods for improving the \"power capacity\" of power-sensitive design from both the functional and test perspectives.","PeriodicalId":334529,"journal":{"name":"2008 13th European Test Symposium","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123455200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Several existing methodologies have leveraged the correlation between the non-RF and the RF performances of a circuit in order to predict the latter from the former and, thus, reduce test cost. While this form of specification test compaction eliminates the need for expensive RF measurements, it also comes at the cost of reduced test accuracy, since the retained non-RF measurements and pertinent correlation models do not always suffice for adequately predicting the omitted RF measurements. To alleviate this problem, we develop a methodology that estimates the confidence in the obtained test outcome. Subsequently, devices for which this confidence is insufficient are retested through the complete specification test suite. As we demonstrate on production test data from a zero-IF down-converter fabricated at IBM, the proposed method outperforms previous defect filtering and guard banding methods and enables a more efficient exploration of the tradeoff between test accuracy and number of retested devices.
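The retest decision can be illustrated with a simple stand-in model: fit a least-squares correlation model from non-RF to RF measurements, then flag a device for the full RF suite when its prediction falls within a residual-based guard band of the specification limit. The synthetic data, the linear model and the K-sigma band below are assumptions for illustration; the paper's confidence estimator is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 4 low-cost non-RF measurements per device,
# one expensive RF performance to be predicted (all values made up).
X_train = rng.normal(size=(500, 4))
y_train = X_train @ np.array([1.0, -0.5, 0.3, 0.8]) + 0.1 * rng.normal(size=500)

# Fit a linear correlation model by least squares (with intercept).
A = np.c_[X_train, np.ones(500)]
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
resid_std = np.std(y_train - A @ coef)

SPEC_LIMIT = 1.5   # pass if predicted RF performance >= limit (illustrative)
K = 3.0            # confidence multiplier (assumed)

def test_outcome(x_new):
    """Pass/fail from non-RF data only, escalating to a full RF retest
    when the prediction is within K residual-sigmas of the spec limit."""
    y_hat = np.append(x_new, 1.0) @ coef
    if abs(y_hat - SPEC_LIMIT) < K * resid_std:
        return "retest with full RF suite"   # low-confidence prediction
    return "pass" if y_hat >= SPEC_LIMIT else "fail"

print(test_outcome(rng.normal(size=4)))
```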
{"title":"Confidence Estimation in Non-RF to RF Correlation-Based Specification Test Compaction","authors":"Nathan Kupp, P. Drineas, M. Slamani, Y. Makris","doi":"10.1109/ETS.2008.31","DOIUrl":"https://doi.org/10.1109/ETS.2008.31","url":null,"abstract":"Several existing methodologies have leveraged the correlation between the non-RF and the RF performances of a circuit in order to predict the latter from the former and, thus, reduce test cost. While this form of specification test compaction eliminates the need for expensive RF measurements, it also comes at the cost of reduced test accuracy, since the retained non-RF measurements and pertinent correlation models do not always suffice for adequately predicting the omitted RF measurements. To alleviate this problem, we develop a methodology that estimates the confidence in the obtained test outcome. Subsequently, devices for which this confidence is insufficient are retested through the complete specification test suite. As we demonstrate on production test data from a zero-IF down-converter fabricated at IBM, the proposed method outperforms previous defect filtering and guard banding methods and enables a more efficient exploration of the tradeoff between test accuracy and number of retested devices.","PeriodicalId":334529,"journal":{"name":"2008 13th European Test Symposium","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133760308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The talk will reconsider the role of test in new emerging device circuits for terascale CMOS and subsequent technologies, where a high level of redundancy will be present. As we get close to ultimate CMOS and the new emerging nano-device technologies beyond it, the indication made by J. von Neumann in 1950, that errors had to be viewed not as an extraneous accident but as an essential part of the process under consideration, caused by natural phenomena, is becoming a real fact. It is well accepted that as electronic technology moves into the deep nanoscale, device reliability decreases rapidly. In such future technologies, internal electromagnetic coupling or simply thermal noise, as well as permanent manufacturing defects, will cause a loss of reliability and introduce an inherent error probability into every component of the system. These deviations motivate new design paradigms. Many of the deviations will be transient in nature, while at the same time current computer architecture approaches are reaching their practical limits. In order to build reliable electronics it will be necessary to include fault- and defect-tolerant schemes through the introduction of massive redundancy. In this changed scenario, and in contrast to conventional deterministic logic circuits, these emerging technologies have to face new design and test strategies that support this probabilistic logic behavior.
{"title":"The Role of Test in Circuits Built with Unreliable Components","authors":"A. Rubio","doi":"10.1109/ETS.2008.36","DOIUrl":"https://doi.org/10.1109/ETS.2008.36","url":null,"abstract":"The talk will reconsider the role of the test in new emerging device circuits for CMOS Terascale and further technologies where a high level of redundancy will be present. As far as we are getting close to ultimate CMOS and ulterior new emerging nano-devices technologies the indication made by J. von Neumann in 1950 that errors had to be viewed not as an extraneous accident but as an essential part of the process under consideration caused by natural phenomena is becoming a real fact. It is well accepted that at the same time electronic technology is going into the deep nanoscale the device reliability decreases rapidly. For such future technologies internal electromagnetic coupling or just thermal noise as well as permanent manufacturing defects will cause a loss of reliability and introduce an inherent error probabilistic factor to every component of the system. These deviations motivate new design paradigms. Many of these deviations will be transient in nature, at the same time current computer architecture approaches are reaching their practical limits. In order to build reliable electronics it will be necessary to include fault and defect tolerant schemes through the introduction of massive redundancy. Within this change of scenario, in comparison to conventional deterministic logic circuits these emerging technologies have to face new design and test strategies in order to give support to this probabilistic behavior logic.","PeriodicalId":334529,"journal":{"name":"2008 13th European Test Symposium","volume":"131 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114522069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Test data travels through a System-on-Chip (SOC) from the chip pins to the module-under-test, and vice versa, via a Test Access Mechanism (TAM). Conventionally, a TAM is implemented with dedicated wires. However, existing functional interconnect, such as a bus or Network-on-Chip (NOC), can also be reused as a TAM, which reduces the overall design effort and the silicon area. Given a module, its test set, and the maximal bandwidth that the functional interconnect can offer between the ATE and the module-under-test, our approach designs a test wrapper for the module-under-test such that the test length is minimized. Unfortunately, it is unavoidable that unused (idle) bits are transported along with the test data. This paper presents a TAM bandwidth utilization analysis and techniques for idle-bit reduction to minimize the test length. We classify the idle bits into four types, which explain the reasons for bandwidth under-utilization and pinpoint design improvement opportunities. Experimental results show an average bandwidth utilization of 80%, while the remaining 20% is consumed by idle bits.
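One of the idle-bit mechanisms, padding caused by unequal wrapper scan-chain lengths, is easy to quantify. The sketch below computes utilization under that single mechanism; the chain lengths and pattern count are illustrative, not the paper's case study, and the paper's four-type classification covers further sources of idle bits.

```python
def tam_utilization(chain_lengths, n_patterns):
    """Bandwidth utilization of a TAM whose wrapper chains have the given
    scan lengths. Sketch: per-pattern shift time is set by the longest
    chain, so every shorter chain carries idle (padding) bits each cycle."""
    width = len(chain_lengths)           # TAM width in bits
    shift = max(chain_lengths)           # shift cycles per pattern
    useful = sum(chain_lengths) * n_patterns
    total = width * shift * n_patterns
    return useful / total

# Three wrapper chains of unequal length: utilization < 1 due to idle bits.
print(tam_utilization([100, 80, 60], n_patterns=1000))  # 0.8
```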
{"title":"Bandwidth Analysis for Reusing Functional Interconnect as Test Access Mechanism","authors":"Ardy van den Berg, P. Ren, E. Marinissen, G. Gaydadjiev, K. Goossens","doi":"10.1109/ETS.2008.34","DOIUrl":"https://doi.org/10.1109/ETS.2008.34","url":null,"abstract":"Test data travels through a System-on-Chip (SOC) from the chip pins to the module-under-test and vice versa via a Test Access Mechanism (TAM). Conventionally, a TAM is implemented with dedicated wires. However, also existing functional interconnect, such as a bus or Network-on-Chip (NOC), can be reused as TAM. This will reduce the overall design effort and the silicon area. For a given module, its test set, and maximal bandwidth that the functional interconnect can offer between ATE and module-under-test, our approach designs a test wrapper for the module-under-test such that the test length is minimized. Unfortunately, it is unavoidable that with the test data also unused (idle) bits are transported. This paper presents a TAM bandwidth utilization analysis and techniques for idle bits reduction, to minimize the test length. We classify the idle bits into four types which explain the reason for bandwidth under-utilization and pinpoint design improvement opportunities. Experimental results show an average bandwidth utilization of 80%, while the remaining 20% is consumed by the idle bits.","PeriodicalId":334529,"journal":{"name":"2008 13th European Test Symposium","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114670264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}