Bridge defect detection in nanometer CMOS circuits using Low VDD and body bias
Pub Date: 2013-04-03. DOI: 10.1109/LATW.2013.6562671
Hector Villacorta, J. L. Garcia-Gervacio, V. Champac, S. Bota, J. Martínez-Castillo, J. Segura
Bridge defects are an important class of manufacturing defects that may escape test. Moreover, it has been shown that in the nanometer regime, process variations pose significant challenges for traditional delay-test methods, so advances in test methodologies that address nanometer issues are required. In this work, the feasibility of using low VDD and body bias in a delay-based test to detect resistive bridge defects in nanometer CMOS circuits is analyzed. The detection of bridge defects using a delay-based test in nanometer circuits is strongly influenced by: (1) spatial correlation of process parameters such as transistor length, width, and oxide thickness, (2) random dopant placement, and (3) signal correlation due to reconvergent paths. For this reason, a Statistical Timing Analysis Framework (STAF) is used to analyze the detectability of bridge defects with a delay-based test. The STAF considers different values of VDD and body bias. Detection of a circuit's bridge defects is quantified by the Statistical Fault Coverage, which gives a more realistic measure of the degree of defect detection. The methodology is applied to several ISCAS benchmark circuits implemented in a 65 nm CMOS technology, and the obtained results show its feasibility.
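The Statistical Fault Coverage idea lends itself to a Monte Carlo illustration. The sketch below is not the authors' STAF (which uses statistical timing analysis with spatially correlated parameters); it simply treats each path delay as an independent Gaussian and estimates, per defect, the probability that the bridge-induced extra delay is observable at a given test period. All delay numbers and the distribution model are assumptions.

```python
import random

def sample_path_delay(nominal_ps, sigma_ps):
    """Sample a path delay under process variation (independent Gaussian,
    an illustrative stand-in for the paper's correlated-parameter model)."""
    return random.gauss(nominal_ps, sigma_ps)

def detection_probability(nominal_ps, sigma_ps, extra_bridge_delay_ps,
                          test_period_ps, n_samples=10_000):
    """Fraction of process-variation samples in which the bridge-induced
    extra delay pushes the path past the test period (i.e. is detected)
    while the fault-free circuit still meets timing."""
    detected = 0
    for _ in range(n_samples):
        fault_free = sample_path_delay(nominal_ps, sigma_ps)
        faulty = fault_free + extra_bridge_delay_ps
        if faulty > test_period_ps and fault_free <= test_period_ps:
            detected += 1
    return detected / n_samples

# Statistical fault coverage: average detection probability over a defect list.
defects = [  # (nominal delay, sigma, extra delay caused by the bridge), in ps
    (800, 40, 120), (650, 35, 60), (900, 45, 30),
]
TEST_PERIOD_PS = 1000
sfc = sum(detection_probability(n, s, d, TEST_PERIOD_PS)
          for n, s, d in defects) / len(defects)
print(f"Statistical fault coverage: {sfc:.2%}")
```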
{"title":"Bridge defect detection in nanometer CMOS circuits using Low VDD and body bias","authors":"Hector Villacorta, J. L. Garcia-Gervacio, V. Champac, S. Bota, J. Martínez-Castillo, J. Segura","doi":"10.1109/LATW.2013.6562671","DOIUrl":"https://doi.org/10.1109/LATW.2013.6562671","url":null,"abstract":"Bridge defects are an important manufacturing defect that may escape test. Even more, it has been shown that in nanometer regime, process variations pose important challenges for traditional delay test methods. Therefore, advances in test methodologies to deal with nanometer issues are required. In this work the feasibility of using Low VDD and body bias in a delay based test to detect resistive bridge defects in CMOS nanometer circuits is analyzed. The detection of bridge defects using a delay based test in nanometer circuits is strongly influenced by: (1) spatial correlation of the process parameters such as length, width and oxide thickness of the transistor, (2) random placement of dopants, and (3) the signal correlation due to reconvergent paths. Because of this, in this work a Statistical Timing Analysis Framework (STAF) is used to analyze the possibilities of detection of bridge defect using a delay based test. The STAF considers different values of VDD and body bias. The detection of the bridge defects of a circuit is computed by the Statistical Fault Coverage that gives a more realistic measure of the degree of detection of the defect. This methodology is applied to some ISCAS benchmark circuits implemented in a 65nm CMOS technology. The obtained results show the feasibility of the proposed methodology.","PeriodicalId":186736,"journal":{"name":"2013 14th Latin American Test Workshop - LATW","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129451702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effect of aging on power integrity of digital integrated circuits
Pub Date: 2013-04-03. DOI: 10.1109/LATW.2013.6562681
A. Boyer, S. Bendhia
Recent studies have shown that integrated-circuit aging significantly modifies electromagnetic emission. This paper aims at evaluating the impact of aging on the power integrity of digital integrated circuits and at clarifying its origin. On-chip measurements of power-supply voltage bounces on a 90 nm CMOS test chip are combined with electrical stress to characterize the influence of aging on power integrity. A simulation based on an ICEM model, modified by an empirical coefficient to account for circuit aging, is proposed to model the evolution of power integrity induced by device aging.
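The modeling idea, an ICEM-style activity current scaled by a single empirical aging coefficient and driving a passive power-delivery network, can be sketched in a few lines. This is a hypothetical illustration, not the paper's calibrated model; R, L, the pulse shape, and the aging coefficient of 0.8 are all assumptions.

```python
import numpy as np

# Simplified ICEM-style model: the die's switching activity is a current
# source i(t) driving a series R-L power delivery network. Aging is folded
# in as one empirical scaling coefficient on the activity current.
R, L = 0.1, 1e-9           # ohms, henries (hypothetical PDN values)
dt = 1e-11                 # 10 ps time step
t = np.arange(0, 5e-9, dt)

def switching_current(t, peak=0.5, t_rise=0.2e-9, t_fall=0.8e-9):
    """Triangular current pulse approximating one clock edge's activity."""
    return np.where(t < t_rise, peak * t / t_rise,
                    np.maximum(0.0, peak * (1 - (t - t_rise) / t_fall)))

def supply_bounce(aging_coeff=1.0):
    """Voltage bounce v = R*i + L*di/dt, with i scaled by the aging coefficient."""
    i = aging_coeff * switching_current(t)
    return R * i + L * np.gradient(i, dt)

fresh = supply_bounce(1.0)
aged = supply_bounce(0.8)   # hypothetical: aged drivers switch less current
print(f"peak bounce fresh: {fresh.max()*1e3:.1f} mV, aged: {aged.max()*1e3:.1f} mV")
```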
{"title":"Effect of aging on power integrity of digital integrated circuits","authors":"A. Boyer, S. Bendhia","doi":"10.1109/LATW.2013.6562681","DOIUrl":"https://doi.org/10.1109/LATW.2013.6562681","url":null,"abstract":"Recent studies have shown that integrated circuit aging modifies electromagnetic emission significantly. The proposed paper aims at evaluating the impact of aging on the power integrity of digital integrated circuits and clarifying its origin. On-chip measurements of power supply voltage bounces in a CMOS 90 nm technology test chip are combined with electric stress to characterize the influence of aging on power integrity. Simulation based on ICEM modeling modified by an empirical coefficient in order to take into account the circuit aging is proposed to model the evolution of the power integrity induced by device aging.","PeriodicalId":186736,"journal":{"name":"2013 14th Latin American Test Workshop - LATW","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129610650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast fault injection techniques using FPGAs
Pub Date: 2013-04-03. DOI: 10.1109/LATW.2013.6562680
L. Entrena
Summary form only given. As manufacturing technology progresses by reducing feature size, providing more integration density, and increasing device functionality at lower voltages and more aggressive clock frequencies, the susceptibility to soft errors has grown to an unacceptable level in several application domains. Designers therefore need to assess the need for soft-error mitigation during the design cycle in order to adopt appropriate mitigation strategies. Fault injection is a widely used method to evaluate fault effects and fault tolerance. It is intended to provide information about fault effects covering several main goals: validating the design under test against reliability requirements, detecting weak areas that require error mitigation, and forecasting the expected circuit behaviour in the presence of faults. A typical fault-injection approach consists of using a simulation tool to inject and propagate faults in a design model. However, simulation-based fault injection is quite slow. While it can be used to obtain statistical estimates of the soft-error susceptibility of a circuit, identifying the critical components of a design is a much more complex task that generally requires huge fault-injection campaigns in order to assess every component in the circuit individually. Similarly, huge fault-injection campaigns are required to validate highly protected designs and ensure high fault coverage. To accelerate the fault-injection process, emulation-based fault-injection methods have been developed in recent years. These methods use FPGAs to prototype the circuit under test and to support the fault-injection mechanisms. This talk will describe recent advances in emulation-based fault injection with FPGAs that can provide unprecedented levels of performance, on the order of millions of faults per second, and support the analysis of Single-Event Upset (SEU) and Single-Event Transient (SET) effects on complex circuits. Thanks to this dramatic boost in performance, detailed and accurate evaluations of soft-error effects can be obtained to support the adoption of optimal error-mitigation strategies. As an illustrative example, emulation-based fault injection enables full characterization of a microprocessor against soft errors on a gate/flip-flop basis for a given workload. Multiple faults, such as Single-Event Multiple Upsets (SEMU) or Single-Event Multiple Transients (SEMT), can also be covered efficiently with these methods.
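The structure of the fault-injection campaign described above can be illustrated with a toy simulation-based version; FPGA emulation accelerates exactly this loop by running the injections in hardware. The three-register "pipeline" and the fault model (one bit flip per run) are illustrative assumptions, not any specific tool's flow.

```python
import random

def run_model(inputs, flip=None):
    """Tiny stand-in for the circuit under test: a 3-stage pipeline of
    8-bit registers. `flip` = (cycle, reg_index, bit) injects one SEU."""
    regs = [0, 0, 0]
    outputs = []
    for cycle, x in enumerate(inputs):
        regs = [x, regs[0], regs[1]]          # shift data down the pipeline
        if flip and flip[0] == cycle:
            _, r, b = flip
            regs[r] ^= (1 << b)               # single-event upset: bit flip
        outputs.append(regs[2])
    return outputs

stimuli = [random.randrange(256) for _ in range(200)]
golden = run_model(stimuli)                   # fault-free reference run

# Campaign: one random SEU per run; classify silent vs. observable faults.
observable = 0
RUNS = 2000
for _ in range(RUNS):
    fault = (random.randrange(len(stimuli)),  # injection cycle
             random.randrange(3),             # target register
             random.randrange(8))             # target bit
    if run_model(stimuli, flip=fault) != golden:
        observable += 1
print(f"{observable}/{RUNS} injected SEUs propagated to an output")
```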
{"title":"Fast fault injection techniques using FPGAs","authors":"L. Entrena","doi":"10.1109/LATW.2013.6562680","DOIUrl":"https://doi.org/10.1109/LATW.2013.6562680","url":null,"abstract":"Summary form only given. As manufacturing technology progresses by reducing feature size, providing more integration density and increasing device functionality with lower voltages and more aggressive clock frequencies, the susceptibility to soft errors has grown to an unacceptable level in several application domains. Thus, designers need to assess the needs for soft error mitigation during the design cycle in order to adopt appropriate mitigation strategies. Fault injection is a widely used method to evaluate fault effects and fault tolerance. Fault injection is intended to provide information about fault effects covering several main goals: validate the design under test with respect to reliability requirements; detect weak areas that require error mitigation; and forecast the expected circuit behaviour in the occurrence of faults. In the first case, a typical fault injection approach consists in using a simulation tool to inject and propagate faults in a design model. However, simulation-based fault injection is quite slow. While it can be used to obtain statistical estimations of the soft error susceptibility of a circuit, identifying the critical components of a design is a much more complex task that generally requires huge fault injection campaigns in order to individually assess every component in the circuit. Similarly, huge fault injection campaigns are also required to validate highly protected designs in order to ensure a high fault coverage. In order to accelerate the fault injection process, emulation-based fault injection methods have been developed in recent years. These methods use FPGAs to prototype the circuit under test and support the fault injection mechanisms. This talk will describe recent advances in emulation-based fault injection with FPGAs that can provide unprecedented levels of performance, in the order of millions of faults per second, and support the analysis of Single Event Upset (SEU) and Single-Event Transient (SET) effects on complex circuits. Thanks to this dramatic boost in performance, detailed and accurate evaluations of soft error effects can be obtained to support the adoption of optimal error mitigation strategies. As an illustrative example, emulation-based fault injection enables full characterization of a microprocessor against soft errors on a gate/FF basis for a given workload. Multiple faults, such as Single Event Multiple Upset (SEMU) or Single Event Multiple Transient (SEMT), can also be successfully covered with these methods in an efficient manner.","PeriodicalId":186736,"journal":{"name":"2013 14th Latin American Test Workshop - LATW","volume":"166 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114476469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PrOCov: Probabilistic output coverage model
Pub Date: 2013-04-03. DOI: 10.1109/LATW.2013.6562664
Joel Ivan Munoz Quispe, M. Strum, J. Wang
In order to guarantee a high level of reliability in today's complex digital systems, a robust functional verification process is mandatory. Constrained-random functional verification is a common industrial technique, but sound coverage models are needed to monitor and limit the amount of random testing. Item coverage refers to quantitative metrics based on occurrences of system parameters or variables, generally specified according to the verification engineer's expertise; this is particularly true of output coverage modeling. In most cases, the actual output value distribution does not conform to the established coverage-model profile, leading to testbench execution-time overhead. This work presents a methodology for fast computation of a profile that approximates the real output value distribution, to assist the engineer in selecting appropriate check points or output ranges of interest. At the core of this methodology is the Probabilistic Output Coverage (PrOCov) tool, which was developed with these goals.
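A minimal sketch of the profile-estimation idea, assuming a simple executable reference model of the design's output function: sample the constrained-random input space, histogram the outputs, and report the resulting bin occupancies as a guide for placing coverage points. The model, constraints, and bin count below are hypothetical, not PrOCov's actual algorithm.

```python
import random
from collections import Counter

def reference_model(a, b):
    """Hypothetical stand-in for the DUT's reference model: the output
    whose value distribution we want to profile before simulation."""
    return (a * b) >> 4

def estimate_output_profile(n_samples=100_000, n_bins=8):
    """Monte Carlo estimate of the output value distribution under the
    testbench's input constraints, as a guide for coverage-bin selection."""
    samples = [reference_model(random.randrange(256), random.randrange(256))
               for _ in range(n_samples)]
    lo, hi = min(samples), max(samples)
    width = max(1, (hi - lo + 1) // n_bins)
    hist = Counter(min(n_bins - 1, (s - lo) // width) for s in samples)
    for b in range(n_bins):
        frac = hist.get(b, 0) / n_samples
        print(f"bin [{lo + b*width:5d}, {lo + (b+1)*width - 1:5d}]: {frac:6.2%}")

estimate_output_profile()
```

A heavily skewed profile like this one (small products dominate) is exactly the situation the abstract describes: uniform coverage bins over the output range would leave the testbench grinding to hit the sparse upper bins.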
{"title":"PrOCov: Probabilistic output coverage model","authors":"Joel Ivan Munoz Quispe, M. Strum, J. Wang","doi":"10.1109/LATW.2013.6562664","DOIUrl":"https://doi.org/10.1109/LATW.2013.6562664","url":null,"abstract":"In order to guarantee high level of reliability of current complex digital systems, a robust functional verification process is mandatory. Random constrained functional verification has been a common technique used in the industry, but sound coverage models are needed in order to monitor and limit the amount of random testing. Item coverage refers to quantitative metrics based on occurrences of system parameters or variables, in general, specified under verification engineers expertise, particularly the output coverage modeling. In most cases, the actual output value distribution does not conform the established coverage model profile, leading to testbench execution time overhead. This work presents a methodology for a fast computation of profile similar to the real output value distribution, to assist the engineer in the selection of the proper check points or output ranges of interest. At the core of this methodology is the Probabilistic Output Coverage (PrOCov) tool, which was developed with the above goals.","PeriodicalId":186736,"journal":{"name":"2013 14th Latin American Test Workshop - LATW","volume":"40 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122638544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Local data fusion algorithm for fire detection through mobile robot
Pub Date: 2013-04-03. DOI: 10.1109/LATW.2013.6562667
G. F. Roberto, K. Branco, J. M. Machado, A. R. Pinto
Multisensor data fusion is a technique that combines the readings of multiple sensors to detect some phenomenon. Data fusion applications are numerous, spanning smart buildings, environment monitoring, industry, and defense. The main goal of multisensor data fusion is to minimize false alarms and maximize the probability of detection based on the readings of multiple sensors. In this paper, a local data fusion algorithm for fire detection based on luminosity, temperature, and flame sensing is presented. The data fusion approach was embedded in a low-cost mobile robot. Validation tests on the prototype indicate that the approach can detect fire occurrences. Moreover, the low cost of the design allows the development of robots that can be treated as expendable in their fire-detection missions.
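A minimal local-fusion sketch in the spirit of the paper: each sensor reading is normalized to a fire-likelihood score, and the scores are combined with a weighted sum against a single threshold. The thresholds, weights, and normalization ranges are illustrative assumptions, not the paper's calibrated values.

```python
def fuse_fire_sensors(lux, temp_c, flame_ir, weights=(0.2, 0.35, 0.45)):
    """Weighted-sum fusion of three readings into one fire decision.
    All constants are hypothetical; a real deployment would calibrate
    them against the robot's actual sensors."""
    lux_score = min(1.0, max(0.0, (lux - 500.0) / 1500.0))    # abnormal brightness
    temp_score = min(1.0, max(0.0, (temp_c - 40.0) / 40.0))   # ramp over 40-80 C
    flame_score = 1.0 if flame_ir else 0.0                    # binary IR flame sensor
    score = (weights[0] * lux_score + weights[1] * temp_score
             + weights[2] * flame_score)
    return score, score >= 0.5

score, alarm = fuse_fire_sensors(lux=1800, temp_c=65, flame_ir=True)
print(f"fused score {score:.2f} -> {'FIRE' if alarm else 'no fire'}")
```

The design choice worth noting: no single sensor can raise the alarm alone with these weights, which is how fusion suppresses false positives from, say, direct sunlight hitting the luminosity sensor.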
{"title":"Local data fusion algorithm for fire detection through mobile robot","authors":"G. F. Roberto, K. Branco, J. M. Machado, A. R. Pinto","doi":"10.1109/LATW.2013.6562667","DOIUrl":"https://doi.org/10.1109/LATW.2013.6562667","url":null,"abstract":"Multisensor data fusion is a technique that combines the readings of multiple sensors to detect some phenomenon. Data fusion applications are numerous and they can be used in smart buildings, environment monitoring, industry and defense applications. The main goal of multisensor data fusion is to minimize false alarms and maximize the probability of detection based on the detection of multiple sensors. In this paper a local data fusion algorithm based on luminosity, temperature and flame for fire detection is presented. The data fusion approach was embedded in a low cost mobile robot. The prototype test validation has indicated that our approach can detect fire occurrence. Moreover, the low cost project allow the development of robots that could be discarded in their fire detection missions.","PeriodicalId":186736,"journal":{"name":"2013 14th Latin American Test Workshop - LATW","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115099470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic property generation for formal verification applied to HDL-based design of an on-board computer for space applications
Pub Date: 2013-04-03. DOI: 10.1109/LATW.2013.6562663
Wesley Silva, E. Bezerra, M. Winterholer, D. Lettnin
The flexibility of Commercial Off-The-Shelf (COTS) SRAM-based FPGAs makes them an attractive option for the design of artificial satellites; however, functional verification of the HDL-based designs is required and is of fundamental importance. Formal verification using model checking represents a system as a formal model that is automatically generated by synthesis tools. The properties, on the other hand, are expressed as temporal-logic formulas and are traditionally written by hand, which is susceptible to human error and increases the cost and time of verification. This work presents a new method for automatic property generation for the formal verification of Hardware Description Language (HDL) based systems. The industrial case study is a communication subsystem of an artificial satellite, developed in cooperation with the Brazilian National Institute for Space Research (INPE).
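One way to picture automatic property generation, assuming the specification is available as a machine-readable table of cause/delay/effect rules: each rule is mechanically turned into an SVA assertion for a model checker. The signal names and rule format below are hypothetical; the paper's actual generation flow may differ.

```python
# Sketch of spec-to-property generation: each row of a (hypothetical)
# machine-readable spec table becomes an SVA assertion string. This
# mirrors the idea of replacing hand-written temporal properties,
# which the abstract identifies as error-prone, with generated ones.
spec_rules = [
    # (antecedent, delay in cycles, consequent) -- hypothetical signals
    ("tx_start && !busy", 1, "busy"),
    ("busy && bit_cnt == 7", 1, "tx_done"),
    ("tx_done", 1, "!busy"),
]

def to_sva(name, antecedent, delay, consequent, clk="clk", rst="rst_n"):
    """Render one rule as a named SVA property plus its assert directive."""
    return (f"property p_{name};\n"
            f"  @(posedge {clk}) disable iff (!{rst})\n"
            f"  ({antecedent}) |-> ##{delay} ({consequent});\n"
            f"endproperty\n"
            f"assert_{name}: assert property (p_{name});")

for n, (a, d, c) in enumerate(spec_rules):
    print(to_sva(n, a, d, c))
    print()
```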
{"title":"Automatic property generation for formal verification applied to HDL-based design of an on-board computer for space applications","authors":"Wesley Silva, E. Bezerra, M. Winterholer, D. Lettnin","doi":"10.1109/LATW.2013.6562663","DOIUrl":"https://doi.org/10.1109/LATW.2013.6562663","url":null,"abstract":"The flexibility of Commercial-Off-The-Shelf (COTS) SRAM based FPGAs is an attractive option for the design of artificial satellites, however, the functional verification of HDL-based designs is required and is of fundamental importance. Formal verification using model checking represents a system as formal model that are automatically generated by synthesis tools. On the other hand, the properties are represented by temporal logic expressions and are traditionally manually elaborated, which is susceptible to human errors increasing the costs and time of the verification. This work presents a new method for automatic property generation for formal verification of Hardware Description Language (HDL) based systems. The industrial case study is a communication subsystem of an artificial satellite, which was developed in cooperation with the Brazilian Institute of Space Research (INPE).","PeriodicalId":186736,"journal":{"name":"2013 14th Latin American Test Workshop - LATW","volume":"180 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132793760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Formal equivalence checking between high-level and RTL hardware designs
Pub Date: 2013-04-03. DOI: 10.1109/LATW.2013.6562666
Carlos Ivan Castro Marquez, M. Strum, J. Wang
The complexity of digital applications makes it increasingly hard to discover and debug behavioral inconsistencies at the register transfer level (RTL). Aiming to address this, several techniques have emerged as alternatives for verifying that a circuit description meets the requirements of its functional specification. Simulation is widely applied because of its convenience for uncovering early design bugs, but it is far from providing the exhaustiveness of formal methods, for which improved and new tools continue to appear. Formal verification, on the other hand, can suffer from problems such as state-space explosion or modeling inaccuracy. It is therefore vital to develop new ways to check a design for consistency quickly and comprehensively. In this paper, we propose a sequential equivalence checking (SEC) formalism and algorithm for use between a specification written at the electronic system level (ESL) and an implementation written at RTL. Because equivalence is checked between different levels of abstraction, it is no longer valid to perform SEC on single states; we therefore present a scheme to extract and compare complete sequences of states in order to determine whether the design intent described in the ESL specification is contained in and respected by the RTL implementation. The results obtained suggest that our methodology can be applied efficiently to real designs.
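The sequence-based comparison can be sketched with two executable toy models: an untimed ESL accumulator and a two-cycle RTL counterpart whose observable state stutters. Equivalence is decided over complete projected state sequences rather than individual states. Both models and the valid-cycle projection are illustrative assumptions, not the paper's formalism.

```python
def esl_model(inputs):
    """Untimed ESL reference: one accumulator value per transaction."""
    acc, trace = 0, []
    for x in inputs:
        acc = (acc + x) & 0xFF
        trace.append(acc)
    return trace

def rtl_model(inputs):
    """Cycle-accurate stand-in: each add takes two cycles, so the
    observable state stutters. `valid` marks transaction boundaries."""
    acc, trace = 0, []
    for x in inputs:
        trace.append((acc, False))            # cycle 1: still computing
        acc = (acc + x) & 0xFF
        trace.append((acc, True))             # cycle 2: result valid
    return trace

def sequences_equivalent(inputs):
    """Compare complete state sequences rather than single states:
    project the RTL trace onto valid cycles, then match the ESL trace."""
    rtl_seq = [state for state, valid in rtl_model(inputs) if valid]
    return rtl_seq == esl_model(inputs)

print(sequences_equivalent([3, 250, 7, 9]))   # True: RTL respects the ESL intent
```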
{"title":"Formal equivalence checking between high-level and RTL hardware designs","authors":"Carlos Ivan Castro Marquez, M. Strum, J. Wang","doi":"10.1109/LATW.2013.6562666","DOIUrl":"https://doi.org/10.1109/LATW.2013.6562666","url":null,"abstract":"Digital applications complexity makes it harder every day to discover and debug behavioral inconsistencies at register transfer level (RTL). Aiming to bring a solution, several techniques have appeared as alternatives to verify that a circuit description meets the requirements of its corresponding functional specification. Simulation is widely applied due to its convenience to uncover early design bugs, but is far from providing the exhaustiveness acquired through formal methods, for which improved and new tools continue to appear. On the other hand, formal verification can suffer from problems such as state-space explosion or modeling inaccuracy. Then, it is vital to develop new ways to check a design for consistency fast and comprehensively. In this paper, we propose a sequential equivalence checking (SEC) formalism and an algorithm, for use between a specification, written at electronic system level (ESL), and an implementation, written at RTL. Given that equivalence is checked between different levels of abstraction, it is no longer valid to perform SEC on single states, thus, we show a scheme to extract and compare complete sequences of states in order to determine if the design intention, which is described in the ESL specification, is contained and respected by the RTL implementation. The results obtained suggest that our methodology can be applied efficiently on real designs.","PeriodicalId":186736,"journal":{"name":"2013 14th Latin American Test Workshop - LATW","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128976814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Embedded tutorial: Regaining hardware security and trust
Pub Date: 2013-04-03. DOI: 10.1109/LATW.2013.6562679
O. Sinanoglu
Today's Systems-on-Chip (SoCs) incorporate digital, analog, radio-frequency, photonic, and other devices [1]. More recently, sensors, actuators, and biochips have also been integrated into these already powerful SoCs. On one hand, SoC integration has been enabled by advances in mixed-system integration and by the increase in wafer sizes (currently about 300 mm and projected to reach 450 mm by 2018 [1]); consequently, the per-chip cost of such SoCs has fallen. On the other hand, support for multiple capabilities and mixed technologies has increased the cost of owning an advanced foundry: for instance, owning a foundry will cost $5 billion in 2015 [2]. As a result, only large commercial foundries now manufacture such high-performance, mixed-system SoCs, especially at advanced technology nodes [3].
{"title":"Embedded tutorial: Regaining hardware security and trust","authors":"O. Sinanoglu","doi":"10.1109/LATW.2013.6562679","DOIUrl":"https://doi.org/10.1109/LATW.2013.6562679","url":null,"abstract":"Today's System on Chip (SoC) is being incorporated with digital, analog, radio frequency, photonic and other devices [1]. More recently, sensors, actuators, and biochips are also being integrated into these already powerful SoCs. On one hand, SoC integration has been enabled by advances in mixed system integration and the increase in the wafer sizes (currently about 300 mm and projected to be 450mm by 2018 [1]). Consequently, the cost per chip of such SOCs has reduced. On the other hand, support for multiple capabilities and mixed technologies has increased the cost of ownership of advanced foundries. For instance, the cost of owning a foundry will be $5 billion in 2015 [2]. Consequently, only large commercial foundries now manufacture such high performance, mixed system SoCs especially at the advanced technology nodes [3].","PeriodicalId":186736,"journal":{"name":"2013 14th Latin American Test Workshop - LATW","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130018118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards an automatic generation of diagnostic in-field SBST for processor components
Pub Date: 2013-04-03. DOI: 10.1109/LATW.2013.6562676
Mario Schölzel, T. Koal, Stephanie Roder, H. Vierhaus
This paper deals with a diagnostic software-based self-test (SBST) program for multiplexer-based components in a processor, in particular the read ports of a multi-ported register file and the bypass structures of an instruction pipeline. Based on a detailed analysis of both multiplexer structures, a manually coded diagnostic test program is first presented. This test program can detect all single and multiple stuck-at data and address faults in a multiplexer structure, but it does not fully cover the control logic of the bypass. Through further refinement, 100% fault coverage for single stuck-at faults, including the control logic, is finally obtained. Based on these results, an ATPG-assisted method for generating such a diagnostic test program is described for arbitrary processor components. The method is finally applied to the multiplexer structures for which the manually coded test program is available, and the test length and fault coverage of the generated and hand-coded test programs are compared.
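The data-background idea behind such a mux read-port test can be sketched as a small generator, assuming a generic three-operand ISA (the mnemonics below are placeholders, not a real instruction set, and this is not the paper's actual program): give every register a unique value so that any stuck-at address fault on a read-port multiplexer selects observably wrong data, then exercise every port/register combination and compact the results into a signature.

```python
N_REGS = 8

def background(r):
    """Data background for register r: low nibble = r, high nibble = ~r.
    Every register value is unique, so a stuck-at address fault on a
    read-port mux always delivers a value that differs from the expected
    one and is therefore observable."""
    return (r & 0xF) | ((~r & 0xF) << 4)

def gen_sbst():
    """Emit a placeholder test program: load the background, then read
    every register through both read ports of a two-source instruction
    and fold the results into a signature register (r0 here)."""
    prog = [f"li   r{r}, 0x{background(r):02X}" for r in range(1, N_REGS)]
    prog.append("li   r0, 0x00           ; signature accumulator")
    for a in range(1, N_REGS):
        for b in range(1, N_REGS):
            prog.append(f"xor  rt, r{a}, r{b}      ; read ports A=r{a}, B=r{b}")
            prog.append("add  r0, r0, rt         ; compact into signature")
    return prog

for line in gen_sbst()[:10]:
    print(line)
```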
{"title":"Towards an automatic generation of diagnostic in-field SBST for processor components","authors":"Mario Schölzel, T. Koal, Stephanie Roder, H. Vierhaus","doi":"10.1109/LATW.2013.6562676","DOIUrl":"https://doi.org/10.1109/LATW.2013.6562676","url":null,"abstract":"This paper deals with a diagnostic software-based self-test program for multiplexer based components in a processor. These are in particular the read ports of a multi-ported register file and the bypass structures of an instruction pipeline. Based on the detailed analysis of both multiplexer structures, first a manually coded diagnostic test program is presented. This test program can detect all single and multiple stuck-at data- and address faults in a multiplexer structure. But it does not fully cover the control-logic of the bypass. By further refinements a 100% fault coverage for single stuck-at faults, including the control logic, is finally obtained. Based on these results, an ATPG-assisted method for the generation of such a diagnostic test program is described for arbitrary processor components. This method is finally applied to the multiplexer structures for which the manually coded test program is available. The test length and test coverage of the generated test program and of the hand-coded test program are compared.","PeriodicalId":186736,"journal":{"name":"2013 14th Latin American Test Workshop - LATW","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131052157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analyzing and quantifying fault tolerance properties
Pub Date: 2013-04-03. DOI: 10.1109/LATW.2013.6562662
S. Hellebrand
Summary form only given. Nanoscale circuit and system design must cope with increasing parameter variations and a growing susceptibility to external noise. To avoid an overly pessimistic design and to fully exploit the potential of new technologies, various strategies for “robust” design have been developed in recent years, ranging from classical fault-tolerant architectures to innovative self-calibrating solutions. However, a robust design style makes validation and test particularly challenging. For design validation, it is no longer sufficient to analyze the functionality; robustness properties must be verified as well. Even the analysis of traditional fault-tolerance properties such as fault secureness can become very complex, and because fault tolerance can vary with the circuit parameters, the analysis becomes extremely difficult. Similarly, manufacturing test has to provide information about the robustness remaining in the presence of manufacturing defects (“quality binning”), and yield estimation should be refined to distinguish different quality levels. In this talk we discuss these problems in more detail for some typical architectures and present first solutions.
{"title":"Analyzing and quantifying fault tolerance properties","authors":"S. Hellebrand","doi":"10.1109/LATW.2013.6562662","DOIUrl":"https://doi.org/10.1109/LATW.2013.6562662","url":null,"abstract":"Summary form only given. Nanoscale circuit and system design must cope with increasing parameter variations and a growing susceptibility to external noise. To avoid an overly pessimistic design and fully exploit the potential of new technologies, various strategies for “robust” design have been developed in the past few years. Examples range from classical fault tolerant architectures to innovative self-calibrating solutions. However, a robust design style makes validation and test particularly challenging. For design validation, it is no longer sufficient to analyze the functionality, but also robustness properties must be verified. Already the analysis of traditional fault tolerance properties like fault secureness can get very complex. In addition to that, the fault tolerance can vary with the circuit parameters, which makes the analysis extremely difficult. Similarly, manufacturing test has to provide information about the remaining robustness in the presence of manufacturing defects (“quality binning”), and yield estimation should be refined to different quality levels. In this talk we discuss the mentioned problems in more detail for some typical architectures and show first solutions.","PeriodicalId":186736,"journal":{"name":"2013 14th Latin American Test Workshop - LATW","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127710690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}