Employing On-Chip Jitter Test Circuit for Phase Locked Loop Self-Calibration. T. Xia, S. Wyatt, Rupert Ho. DFT 2006, DOI: 10.1109/DFT.2006.26
In this paper, a new adaptive PLL is implemented. This PLL employs a simple yet effective jitter test circuit to monitor the PLL jitter performance. Additionally, it uses a digital control unit to dynamically adjust the switched loop filter to suppress the jitter. By using this structure, the trade-off between the PLL locking speed and jitter performance can be balanced.
Implicit Critical PDF Test Generation with Maximal Test Efficiency. Kyriakos Christou, M. Michael, S. Tragoudas. DFT 2006, DOI: 10.1109/DFT.2006.34
A new framework for generating test sets with high test efficiency (TE) for critical path delay faults (PDFs) is presented. TE is defined as the number of new critical PDFs detected by a generated test. The proposed method accepts as input a set of potentially critical PDFs and generates a compact test set for only the critical PDFs (i.e., non-sensitizable PDFs are effectively dropped from consideration), whilst avoiding any path or segment enumeration. This is done by exploiting the properties of the ISOPs/ZBDD data structure, which is shown to efficiently represent a set of critical paths along with all their associated sensitization test cubes. The experimental results demonstrate that the proposed method is scalable in terms of test efficiency and can generate very compact test sets for critical PDFs.
Timing Failure Analysis of Commercial CPUs Under Operating Stress. Sanghoan Chang, G. Choi. DFT 2006, DOI: 10.1109/DFT.2006.66
The timing margin of an operating physical device suffers from crosstalk, power supply voltage fluctuation, and temperature variation, among other factors. This problem is increasingly pronounced with deep-submicron technology. A conservative testing, binning and marketing policy alleviates reliability concerns, but at a loss of realizable device performance. This paper presents a methodology for a more practical estimation of the timing margin through analytical and empirical analysis of noise sources. First, the sources of noise are modeled. Then physical experiments are conducted to measure the time-to-failure of the target CPUs under stress. The accelerated test results are used to parameterize the models and empirically determine the device timing margin under realistic operating conditions. The results indicate that the actual safe-operating region for a set of tested microprocessors is significantly wider than that reported in the manufacturers' specifications for new devices.
On-Line Mapping of In-Field Defects in Image Sensor Arrays. J. Dudas, C. Jung, Linda Wu, G. Chapman, I. Koren, Z. Koren. DFT 2006, DOI: 10.1109/DFT.2006.48
The continued increase in the complexity of digital image sensors means that defects are more likely to develop in the field, yet little concrete information is available on in-field defect growth. This paper presents an algorithm to help quantify the problem by identifying defects and potentially tracking defect growth. Building on previous research, the technique is extended to use a more realistic defect model suitable for analyzing real-world camera systems. Monte Carlo simulations show that abnormal-sensitivity defects are successfully detected by analyzing only 40 typical photographs. Experimentation also indicates that the technique can be applied to imagers with up to 4% defect density, and that noisy images can be diagnosed successfully with only a small reduction in accuracy. Extension to colour imagers is accomplished through independent analysis of the image colour planes.
The Filter Checker: An Active Verification Management Approach. Joonhyuk Yoo, M. Franklin. DFT 2006, DOI: 10.1109/DFT.2006.64
Dynamic verification architectures provide fault detection by employing a simple checker processor that dynamically checks the computations of a complex processor. For dynamic verification to be viable, the checker processor must keep up with the retirement throughput of the core processor. However, the overall throughput is limited if the checker processor is neither fast nor wide enough to keep up with the core processor. The authors investigate the impact of checker bandwidth on performance. As a solution to checker congestion, the authors propose an active verification management (AVM) approach with a filter checker. The goal of AVM is to reduce overloaded verification in the checker with a congestion avoidance policy and to minimize the performance degradation caused by congestion. Before the verification process starts at the checker processor, a filter checker marks a correctness non-criticality indicator (CNI) bit in advance to indicate how likely the pre-computed results are to be unimportant for reliability. AVM then decides how to deal with the marked instructions using a congestion avoidance policy. Both reactive and proactive congestion avoidance policies are proposed to skip the verification process at the checker. Results show that the proposed AVM has the potential to solve the verification congestion problem when perfect fault coverage is not needed. With no AVM, congestion at the checker degrades performance by 57% compared to a non-fault-tolerant processor. With good marking by AVM, the performance of a reliable processor approaches 95% of that of a non-fault-tolerant processor. Although instructions can be skipped on a random basis, such an approach reduces the fault coverage. A filter checker with a marking policy correlated with the correctness non-criticality metric, on the other hand, significantly reduces the soft error rate. Finally, the authors also present results showing the trade-off between performance and reliability.
Modified Triple Modular Redundancy Structure based on Asynchronous Circuit Technique. Gong Rui, Chen Wei, Liu Fang, Dai Kui, W. Zhiying. DFT 2006, DOI: 10.1109/DFT.2006.44
Two modified triple modular redundancy (TMR) structures based on asynchronous circuit techniques are proposed in this paper. The double modular redundancy (DMR) structure uses an asynchronous C element to output and hold the correct value of two redundant storage cells. The temporal-spatial triple modular redundancy structure with DCTREG (TSTMR-D) uses the explicitly separated master and slave latch structure of a de-synchronized pipeline. Three soft-error-tolerant 8051 cores with DMR, TMR and TSTMR-D, respectively, are implemented in a SMIC 0.35 µm process. Fault injection experiments are also included. The experimental results indicate that the DMR structure has lower area and latency overhead than TMR while tolerating SEUs in sequential logic. The TSTMR-D structure can tolerate soft errors in both sequential and combinational logic with reasonable area and latency overhead.
Improving Yield and Defect Tolerance in Multifunction Subthreshold CMOS Gates. K. Granhaug, S. Aunet. DFT 2006, DOI: 10.1109/DFT.2006.35
This paper presents simulations of three different implementations of the minority-3 function, with special focus on mismatch analysis through statistical Monte Carlo simulations. The simulations clearly favor the minority-3 mirrored gate, and a gate-level redundancy scheme, where identical circuits with the same inputs drive the same output node, is further explored as a means of increasing fault and defect tolerance. Important trade-offs between supply voltage, redundancy and yield are revealed, and VDD = 175 mV is suggested as a minimum useful operating voltage, combined with a redundancy factor of 2.
Multi-Site and Multi-Probe Substrate Testing on an ATE. Xiaojun Ma, F. Lombardi. DFT 2006, DOI: 10.1109/DFT.2006.45
This paper presents a novel method that utilizes the multi-site and multi-probe facilities of an ATE for substrate testing. The test time for a batch can be considerably reduced by efficiently utilizing an ATE with a number of flying probes and multiple substrates under test (SUTs). An analytical model that predicts the batch test time very accurately is proposed. This model establishes the optimal multi-site configuration as corresponding to the batch size that allows multiple SUTs to be tested simultaneously on an ATE. Simulation results for an ATE with 12 flying probes, as an example of a commercially available tester, are provided; for this ATE the proposed method achieves a reduction of 54.66% in test time over a single-site method (at complete coverage of the modeled faults).
Scan-Based Delay Fault Tests for Diagnosis of Transition Faults. I. Pomeranz, S. Reddy. DFT 2006, DOI: 10.1109/DFT.2006.56
This paper studies the effect of the type of scan-based delay fault tests used for a circuit on the ability to diagnose delay defects by studying its effect on diagnosis of transition faults. The authors consider enhanced scan tests, skewed-load tests, broadside tests, functional broadside tests, and a combination of skewed-load and broadside tests. The results indicate that while functional broadside tests should be used for fault detection to avoid overtesting, the test set should be extended for fault diagnosis by adding other types of tests. Adding a small number of skewed-load tests is especially useful for diagnosis if enhanced scan is not available.
A Reconfiguration-based Defect Tolerance Method for Nanoscale Devices. R. Rad, M. Tehranipoor. DFT 2006, DOI: 10.1109/DFT.2006.10
In this paper, a novel defect tolerance and test method is proposed for highly defect-prone reconfigurable nanoscale devices. The method is based on searching for a fault-free implementation of functions in each configurable nanoblock. The proposed method has the advantage of not relying on defect location information (a defect map). It also removes the requirement of per-chip placement and routing. A simulation tool is developed and several experiments are performed on MCNC benchmarks to evaluate the defect tolerance and yield achievable by the proposed method. A greedy search algorithm is also developed in this simulation program that finds a fault-free configuration of each function of an application on a nanoblock of the device. The experiments are performed for different defect rates and under different values of redundancy provided for the device model. The results show that the proposed method can achieve high yields in an acceptable amount of test and reconfiguration time under very high defect densities and with a minimal amount of redundancy provided in the device.