An Approach to Minimizing Functional Constraints
A. Jas, Yi-Shing Chang, S. Chakravarty (doi: 10.1109/DFT.2006.13)
Functional constraints are an integral part of the VLSI design methodology. Pseudo-functional scan ATPG and untestable fault identification are two areas in test where functional constraints are widely used. For large designs, the number and complexity of these constraints become a limiting factor in their successful usage. In this paper the authors define a constraint minimization problem and present a powerful framework for simplifying such constraints. The feasibility and effectiveness of the approach are demonstrated using untestability analysis of large industrial benchmarks as a case study.
Selecting High-Quality Delay Tests for Manufacturing Test and Debug
Hangkyu Lee, S. Natarajan, S. Patil, I. Pomeranz (doi: 10.1109/DFT.2006.57)
Debugging timing failures requires the selection of a small set of high-quality tests that can excite critical paths and cause a circuit to fail at as low a frequency as possible. Since the primary source of such vectors is functional vectors, which can run into millions of cycles, a cost-effective methodology for selecting high-quality delay tests should not require excessive computational effort and should guarantee reasonable accuracy. We propose two metrics for estimating the delay under a given test to aid in ranking tests by their ability to excite critical delays. The first metric is path-based: it estimates the delays of excited paths and associates the worst-case delay over all the excited paths with the test. The second metric is cone-based: it estimates the worst-case delay for the logic cone of every output without considering paths explicitly and associates the largest delay over all the cones with the test. For each of the two metrics, we evaluate the correlation between the metric and the delay computed by circuit simulation. Results on combinational benchmark circuits demonstrate that the metrics achieve reasonable accuracy in test selection at significantly lower computation time than circuit simulation.
Design of Low power & Reliable Networks on Chip through joint crosstalk avoidance and forward error correction coding
P. Pande, A. Ganguly, B. Feero, B. Belzer, C. Grecu (doi: 10.1109/DFT.2006.22)
With ever-increasing degrees of integration, the design of communication architectures for large systems-on-chip (SoCs) is a challenge. The communication requirements of these large multiprocessor SoCs (MP-SoCs) are addressed by the emerging network-on-chip (NoC) paradigm. To become a viable alternative IC design methodology, the NoC paradigm must address system-level reliability issues, which are among the dominant concerns in SoC design. The basic operation of NoCs is governed by on-chip packet-switched networks. At the same time, the incorporation of different coding schemes in SoC design is being investigated as a means to increase system reliability. Because NoCs are built on packet switching, it is natural to augment the data packets with extra bits of coded information to protect against transient malfunctions. By incorporating joint crosstalk avoidance coding (CAC) and forward error correction (FEC) schemes in the NoC data stream, we are able to enhance system reliability and at the same time reduce communication energy.
A Novel Methodology for Functional Test Data Compression
H. Hashempour, F. Lombardi (doi: 10.1109/DFT.2006.9)
This paper presents a novel approach for compressing functional test data in automatic test equipment (ATE). A practical technique is presented for two-dimensional (2D) reordering of test data in which, in addition to test vector reordering, column reordering is also applied. An ATE-based approach for extracting the original test vectors from the 2D-ordered data is presented. The advantage of the approach is substantiated using entropy as a figure of merit for the 2D-ordered test data of ISCAS benchmark circuits.
Self Testing SoC with Reduced Memory Requirements and Minimized Hardware Overhead
O. Novák, Z. Plíva, Jiri Jenícek, Zbynek Mader, Michal Jarkovský (doi: 10.1109/DFT.2006.58)
This paper describes a methodology for creating a built-in test system for a system-on-chip, together with experimental results of applying it to the AT94K FPSLIC with cores designed according to the IEEE 1500 standard. The system conserves memory while keeping test access mechanism requirements acceptable. It uses the built-in processor for test control and the embedded RAM for storing both the compressed test vectors and the partial-reconfiguration bitstreams. The highly compressed test vectors are transferred from the memory to chosen cores that are reconfigured into embedded tester cores. The patterns are decompressed within the internal scan chains of the embedded tester cores and simultaneously fed into the parallel scan chains of the cores under test through the test access mechanism (TAM) and standard wrappers. After the first cores under test have been tested, the TAM of the SoC is partially reconfigured with the help of the partial-reconfiguration bitstreams, and the as-yet-untested cores are tested by the cores that now serve as embedded testers. Through this traveling reconfiguration and testing, the whole circuit can be tested. For test data compression we use a test pattern compaction and compression algorithm called COMPAS. It reorders and compresses test patterns previously generated by an ATPG in such a way that they are well suited for decompression by the scan chains in the embedded tester cores. The algorithm compresses the test patterns by overlapping the patterns originally generated by the ATPG. The volume of test data stored in the embedded RAM is substantially lower than that of compacted ATPG test data compressed by other compression methods. The COMPAS algorithm keeps CPU time and memory requirements low; both grow linearly with the complexity of the tested core.
Reliability Evaluation of Repairable/Reconfigurable FPGAs
S. Pontarelli, M. Ottavi, V. Vankamamidi, A. Salsano, F. Lombardi (doi: 10.1109/DFT.2006.55)
Many techniques have been proposed in the technical literature for repairing FPGAs affected by permanent faults. Almost all of these works exploit the dynamic reconfiguration capabilities of an FPGA: a subset of the available resources is used as spares for replacing the faulty ones. This paper first presents a survey of these techniques; it then proposes a framework in which they can be fairly compared and evaluated with respect to reliability. A reliability evaluation is provided for different repair strategies under the assumption that the area overhead is constant. Considerations about time to repair and the feasibility of these techniques are also provided. The ultimate goal of the paper is therefore to present the state-of-the-art repair techniques applicable to FPGAs and to establish their reliability performance.
Off-Chip Control Flow Checking of On-Chip Processor-Cache Instruction Stream
F. Rota, S. Dutt, Siddharth Krishna (doi: 10.1109/dft.2006.47)
Control flow checking (CFC) is a well-known concurrent checking technique for ensuring that a program's instruction execution sequence follows permissible paths. Almost all CFC techniques require direct access to the CPU-cache bus, meaning that the checking hardware (generally called a watchdog processor, or WP) has to be on-chip. However, an on-chip WP directly accessing the CPU-cache bus has a few disadvantages, chief among them that it uses up appreciable chip real estate in a commodity processor yet may be unnecessary in most environments that do not have significant transient error rates. On the other hand, if an off-chip CFC technique can be developed that imposes only minor hardware overheads on the processor chip, then such a WP can be plugged onto the external system bus when needed for concurrent checking, with very few of the disadvantages of on-chip WPs. Such an off-chip WP, however, is generally not able to monitor all instructions due to the bandwidth difference between the CPU bus and the system or memory bus. The authors present techniques that allow effective off-chip CFC using partial access to the instruction execution stream, respecting the CPU/system bus bandwidth factor (ratio) K, and still achieve reasonable block-level instruction error coverage, ranging from 70-80% for K = 5 to about 94% for K = 2. Furthermore, the experimental results show that the program-level error coverage is almost 100% even for K = 5 (i.e., the presence of an instruction error in a program is almost always detected before the program completes execution, which is useful for fail-safe operation), underscoring the efficacy of the proposed methods.
A Software-Based Error Detection Technique Using Encoded Signatures
Yasser Sedaghat, S. Miremadi, M. Fazeli (doi: 10.1109/DFT.2006.11)
In this paper, a software-based control flow checking technique called SWTES (software-based error detection technique using encoded signatures) is presented and evaluated. The technique is processor-independent and can be applied to any kind of processor or microcontroller. To implement it, the program is partitioned into a set of blocks, and encoded signatures are assigned at compile time. At run time, the signatures are compared with the expected ones by a monitoring routine. The proposed technique is experimentally evaluated on an ATMEL MCS51 microcontroller using software-implemented fault injection (SWIFI). The results show that the technique detects about 90% of the injected errors. The memory overhead is about 135% on average, and the performance overhead varies between 11% and 191% depending on the workload used.
Synthesis of Efficient Linear Test Pattern Generators
Avijit Dutta, N. Touba (doi: 10.1109/DFT.2006.61)
This paper presents SLING, a procedure for the synthesis of linear test pattern generators. SLING can synthesize linear test pattern generators that satisfy constraints on area, speed, internal fan-out, and randomness properties, and that outperform existing linear test pattern generator designs, including linear feedback shift registers (LFSRs) and cellular automata (CAs). SLING is a constraint-driven synthesis procedure that takes a set of constraints as input and synthesizes a test pattern generator that satisfies them. It applies a set of linear transformations iteratively to evolve a linear test pattern generator. Because of the way the transformations are chosen and the constraints are set, a high degree of phase shift is maintained between every pair of linear sequences generated at different bit positions of the generator, and cross- and auto-correlations are strongly minimized. Hardware overhead in terms of XOR gates is also minimized. Comparative analysis and experimental results show the effectiveness of the proposed synthesis scheme.
Real Time Fault Injection Using Enhanced OCD -- A Performance Analysis
A. Fidalgo, G. Alves, J. Ferreira (doi: 10.1109/DFT.2006.51)
Fault injection is frequently used for the verification and validation of dependable systems. When targeting real-time, microprocessor-based systems, the process becomes significantly more complex. This paper proposes two complementary solutions to improve the execution of real-time fault injection campaigns, in terms of both performance and capabilities. The methodology is based on the on-chip debug (OCD) mechanisms present in modern electronic devices. The main objective is the injection of faults into microprocessor memory elements with minimum delay and intrusiveness. Different configurations were implemented and compared in terms of performance gain and logic overhead.