Pub Date: 2015-11-09 | DOI: 10.1109/DFT.2015.7315139
Jyothish Soman, Negar Miralaei, A. Mycroft, Timothy M. Jones
Processor reliability at upcoming technology nodes presents significant challenges to designers, with increased manufacturing variability, parametric variation, and transistor wear-out leading to permanent faults. We present a design that tolerates this impact at the microarchitectural level: a chip with n cores together with one or more shared instruction re-execution units (IRUs). Instructions using a faulty component are identified and re-executed on an IRU. This design incurs no slowdown in the absence of errors and allows continued operation of all n cores after multiple hard errors, on one or all cores, in the structures protected by our scheme. Experiments show that a single-core chip experiences only a 23% slowdown with one error, rising to 43% in the presence of five errors. In a 4-core scenario with 4 errors on every core and a shared IRU, REPAIR delivers 0.68× the performance of a fully functioning system.
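The re-execution idea can be illustrated with a toy timing model. This is only a sketch under assumptions: the unit names, the instruction stream, and the 5-cycle IRU penalty are invented for illustration and do not come from the paper.

```python
# Toy model of hard-error tolerance via a shared instruction re-execution
# unit (IRU). Unit names, latencies, and the fault map are hypothetical.

IRU_PENALTY = 5  # assumed extra cycles to ship an instruction to the IRU

def run_core(instructions, faulty_units, iru_penalty=IRU_PENALTY):
    """Total cycles: one per instruction, plus a penalty for every
    instruction that touches a faulty unit and must re-execute on the IRU."""
    cycles = 0
    for unit in instructions:      # each instruction names the unit it uses
        cycles += 1
        if unit in faulty_units:   # faulty component detected -> re-execute
            cycles += iru_penalty
    return cycles

stream = ["alu", "mul", "alu", "load", "mul", "alu"]
healthy = run_core(stream, faulty_units=set())     # no errors -> no slowdown
degraded = run_core(stream, faulty_units={"mul"})  # hard error in multiplier
print(healthy, degraded)  # 6 16
```

Note how the fault-free run pays nothing, matching the paper's claim of zero slowdown in the absence of errors; only the instructions that need the broken unit pay the re-execution cost.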
Title: REPAIR: Hard-error recovery via re-execution
Published in: 2015 IEEE International Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFTS)
Pub Date: 2015-11-09 | DOI: 10.1109/DFT.2015.7315164
F. Rosa, F. Kastensmidt, R. Reis, Luciano Ost
Increasing chip power densities, allied to continuous technology shrinking, are making emerging multiprocessor embedded systems more vulnerable to soft errors. Due to the high cost and design time inherent in board-based fault injection approaches, more appropriate and efficient simulation-based fault injection frameworks become crucial to guarantee adequate design-exploration support at early design phases. In this scenario, this paper proposes a fast and flexible fault injection framework, called OVPSim-FIM, which supports parallel simulation to speed up the fault injection process. To validate OVPSim-FIM, several fault injection campaigns were performed on ARM processors, considering a market-leading RTOS and benchmarks with up to 10 billion object-code instructions. Results show that OVPSim-FIM can inject faults at speeds of up to 10,000 MIPS, depending on the processor and the benchmark profile, and can identify errors and exceptions according to different criteria and classifications.
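As a rough, self-contained illustration of what a simulation-based fault injection campaign does, the sketch below flips single bits in the register file of a toy program and classifies each run against a golden result. Everything here (the register file, the program, the single-bit-flip model, the outcome labels) is an assumption for illustration; it is not OVPSim-FIM's API.

```python
# Hedged sketch of a fault injection campaign: single bit flips in a toy
# 8-register file, classified as masked or silent data corruption (SDC).
import random

WIDTH = 32

def inject_bit_flip(regs, rng):
    """Flip one random bit in one random register (single-bit-flip model)."""
    r = rng.randrange(len(regs))
    regs[r] ^= 1 << rng.randrange(WIDTH)

def run(data, rng=None):
    """Tiny 'program': accumulate data into reg 0; regs 1..7 stay dead.
    If an rng is supplied, one fault is injected mid-run."""
    regs = [0] * 8
    for i, x in enumerate(data):
        regs[0] = (regs[0] + x) & (2**WIDTH - 1)
        if rng is not None and i == len(data) // 2:
            inject_bit_flip(regs, rng)
    return regs[0]

data = list(range(100))
golden = run(data)                       # fault-free reference result
rng = random.Random(1)
results = [run(data, rng) for _ in range(2000)]
masked = sum(r == golden for r in results)   # flip landed in a dead register
sdc = len(results) - masked                  # flip corrupted the live result
print(f"masked={masked} sdc={sdc}")
```

Flips into dead registers are architecturally masked, while flips into the live accumulator surface as SDC; real frameworks like OVPSim-FIM additionally catch exceptions, hangs, and timeouts.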
Title: A fast and scalable fault injection framework to evaluate multi/many-core soft error reliability
Pub Date: 2015-11-09 | DOI: 10.1109/DFT.2015.7315126
Salin Junsangsri, F. Lombardi, Jie Han
This paper presents a simulation-based analysis of spike and flicker noise in a Phase Change Memory (PCM); the investigation is based on HSPICE simulation, taking into account cell-level (with neighboring cells) and array-level considerations. State-switching phenomena in binary PCM memories are dealt with in detail to assess the impact of these two types of noise. It is shown that a lower feature size is of concern for flicker noise in terms of value and percentage variation (while not substantially affecting array-level performance). This paper also shows that spike noise behaves radically differently: it depends more on the PCM resistance than on the state of the PCM. It increases substantially when the amorphous resistance increases and has a nearly constant value when the memory cell is changing to an amorphous state.
Title: Evaluating the impact of spike and flicker noise in phase change memories
Pub Date: 2015-11-09 | DOI: 10.1109/DFT.2015.7315143
Muhammad Yasin, Bodhisatwa Mazumdar, Subidh Ali, O. Sinanoglu
Logic encryption has recently gained interest as a countermeasure against IP piracy and reverse engineering attacks. A secret key is used to lock/encrypt an IC such that the IC will not be functional unless activated with the correct key. Existing attacks against logic encryption are of a theoretical and/or algorithmic nature. In this paper, we evaluate for the first time the security of logic encryption against side-channel attacks. We present a differential power analysis (DPA) attack against random and strong logic encryption techniques. The proposed attack is highly effective against random logic encryption, revealing more than 70% of the key bits correctly in 50% of the circuits. However, against strong logic encryption, which exhibits an inherent DPA resistance, the attack reveals more than 50% of the key bits in only 25% of the circuits.
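The core of DPA is a difference-of-means test over power traces. The toy below, which is not the paper's attack, recovers an 8-bit key from a simulated device whose leakage is the Hamming weight of a plaintext XOR key, plus Gaussian noise; the leakage model, key width, and noise level are assumptions for illustration.

```python
# Minimal difference-of-means DPA sketch on a toy Hamming-weight leakage model.
import random

def hw(x):
    """Hamming weight of an integer."""
    return bin(x).count("1")

def trace(pt, key, rng):
    """Simulated power sample: HW of the XOR result plus Gaussian noise."""
    return hw(pt ^ key) + rng.gauss(0, 1)

def dpa_attack(traces, bits=8):
    """Recover each key bit with a difference-of-means test on plaintext bit b.
    If key bit b is 1, the leakage is anti-correlated with plaintext bit b."""
    key = 0
    for b in range(bits):
        ones = [t for pt, t in traces if (pt >> b) & 1]
        zeros = [t for pt, t in traces if not (pt >> b) & 1]
        diff = sum(ones) / len(ones) - sum(zeros) / len(zeros)
        if diff < 0:          # inverted correlation -> key bit is 1
            key |= 1 << b
    return key

rng = random.Random(7)
secret = 0xA5
pts = [rng.randrange(256) for _ in range(4000)]
traces = [(pt, trace(pt, secret, rng)) for pt in pts]
print(hex(dpa_attack(traces)))  # recovers 0xa5
```

With 4,000 traces the per-bit mean difference (±1 HW unit) dwarfs the noise, so the sign test is reliable; countermeasures such as the paper's "strong" encryption aim to flatten exactly this statistical separation.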
Title: Security analysis of logic encryption against the most effective side-channel attack: DPA
Pub Date: 2015-11-09 | DOI: 10.1109/DFT.2015.7315136
A. Schöll, Claus Braun, M. Kochte, H. Wunderlich
Linear system solvers are an integral part of many compute-intensive applications, and they benefit from the compute power of heterogeneous computer architectures. However, the growing spectrum of reliability threats for nano-scaled CMOS devices makes the integration of fault tolerance mandatory. The preconditioned conjugate gradient (PCG) method is a widely used solver, as it typically finds solutions faster than direct methods. Although this iterative approach is able to tolerate certain errors, recent research shows that the PCG solver is still vulnerable to transient effects. Even single errors, caused for instance by marginal hardware, harsh environments, or particle radiation, can considerably affect execution times or lead to silent data corruption. In this work, a novel fault-tolerant PCG solver with extremely low runtime overhead is proposed. Since the error detection method does not involve expensive operations, it scales very well with increasing problem sizes. In case of an error, the method selects among three different correction methods according to the identified error. Experimental results show a runtime overhead for error detection ranging from only 0.04% to 1.70%.
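The abstract does not describe the paper's actual detection mechanism, so the sketch below illustrates just the general idea of cheap online checking in CG: periodically compare the recurrence residual r against the recomputed true residual b - A x, and fall back to the true residual if they have drifted apart (one plausible correction, not the paper's). The matrix, tolerances, and check interval are assumptions.

```python
# CG with identity preconditioner and a periodic residual-consistency check.
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def axpy(a, x, y):           # a*x + y, elementwise
    return [a * xi + yi for xi, yi in zip(x, y)]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def cg(A, b, tol=1e-10, check_every=5, max_iter=100):
    x = [0.0] * len(b)
    r = b[:]                 # recurrence residual
    p = r[:]
    rs = dot(r, r)
    for k in range(1, max_iter + 1):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = axpy(alpha, p, x)
        r = axpy(-alpha, Ap, r)
        rs_new = dot(r, r)
        if k % check_every == 0:             # cheap error-detection step
            true_r = [bi - yi for bi, yi in zip(b, matvec(A, x))]
            drift = max(abs(a, ) if False else abs(a - c) for a, c in zip(r, true_r))
            if drift > 1e-6:                 # transient error corrupted state
                r, rs_new = true_r, dot(true_r, true_r)   # restart residual
        if rs_new < tol:
            return x, k
        p = axpy(rs_new / rs, p, r)          # p = r + beta * p
        rs = rs_new
    return x, max_iter

A = [[4.0, 1.0], [1.0, 3.0]]                 # small SPD example
b = [1.0, 2.0]
x, iters = cg(A, b)
print(x)                                     # close to [1/11, 7/11]
```

The check costs one extra matrix-vector product every `check_every` iterations, which is how such schemes keep detection overhead in the sub-percent range on large problems.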
Title: Low-overhead fault-tolerance for the preconditioned conjugate gradient solver
Pub Date: 2015-11-09 | DOI: 10.1109/DFT.2015.7315135
S. H. Mozafari, B. Meyer
Adding redundant components is a well-known technique for replacing defective components either before shipment or in the field, resulting in yield improvement and, consequently, cost reduction. However, most yield-improvement strategies utilize redundant components only when another component fails (i.e., cold spares). In this paper, we investigate the cost and performance implications of employing hot spares in multi-core single-instruction, multiple-thread (SIMT) processors. Hot spares are available to increase yield (and reduce costs) when components are defective; otherwise, they can be used to improve performance in the field. Starting with a baseline architecture of six cores with 32 lanes each, we add three hot-spare cores with two lanes each. When we make the lanes of the hot spares available to replace defective lanes in the baseline cores, we observe that expected performance per cost improves by more than 2.5× and 1.7× relative to systems integrating no redundancy and cold spares, respectively.
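The yield side of the spares argument can be made concrete with a simple binomial model: a die ships if at least six cores are defect-free. The per-core yield of 0.9 below is an assumed number for illustration; the paper's cost model is considerably richer (lane-level sparing, area overhead, performance in the fault-free case).

```python
# Binomial yield model: probability that enough cores survive manufacturing.
from math import comb

def yield_with_spares(n_needed, n_total, p_core_good):
    """P(at least n_needed of n_total independent cores are defect-free)."""
    return sum(comb(n_total, k)
               * p_core_good**k * (1 - p_core_good)**(n_total - k)
               for k in range(n_needed, n_total + 1))

p = 0.9                                  # assumed per-core yield
base = yield_with_spares(6, 6, p)        # six cores, no spares: 0.9^6
spared = yield_with_spares(6, 9, p)      # six cores plus three spares
print(f"no spares: {base:.3f}, with spares: {spared:.3f}")
```

Even three spares lift die yield from roughly 53% to over 99% in this model; the paper's contribution is that making those spares *hot* lets the same silicon also buy performance whenever it is not needed for repair.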
Title: Hot spare components for performance-cost improvement in multi-core SIMT
Pub Date: 2015-11-09 | DOI: 10.1109/DFT.2015.7315149
M. Venkatasubramanian, V. Agrawal, James J. Janaher
It is well known that searching for test vectors for the last few hard-to-detect stuck-at faults is computationally the most expensive part of test generation, and that the underlying problem is NP-complete. Due to the complex nature of this problem, attempts to test a digital circuit for all faults in linear time become exponential as circuit size and complexity increase. Various algorithms have been proposed in which new vectors are generated from previous successful vectors with similar properties. However, this approach hits a bottleneck for hard-to-detect stuck-at faults, which have only one or two unique tests whose properties may not match other previously successful tests. We propose a new algorithm that attempts to vastly improve the test search time for these few hard-to-detect faults by classifying all test vectors in the vector space into three categories: Category I vectors, which activate the desired stuck-at fault but may not propagate it to the primary outputs (POs); Category II vectors, which propagate the fault-site value to the POs; and Category III vectors, which neither activate nor propagate the fault. By bounding our search to vectors in Categories I and II, and avoiding Category III vectors, we arrive at a solution faster than other algorithmic implementations. The final solution lies in the intersection of the Category I and II vectors, and it is easier to search for a test vector in this smaller subset of the large vector space. We demonstrate the proof of concept and detailed working of our algorithm by comparing it with a random test generator.
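The category structure is easy to see on a minimal circuit. For y = (a AND b) OR c with the AND output stuck-at-0, activation requires a·b = 1 and propagation requires c = 0; the single test vector is exactly the intersection of the two sets. The circuit is invented for illustration, not taken from the paper.

```python
# Categories I/II and their intersection for one stuck-at fault in a toy circuit.
from itertools import product

def fault_free(a, b, c):
    return (a & b) | c            # y = (a AND b) OR c

def with_fault(a, b, c):
    return 0 | c                  # AND-gate output stuck-at-0

space = list(product((0, 1), repeat=3))
activate  = {v for v in space if v[0] & v[1]}   # Cat I: fault site driven to 1
propagate = {v for v in space if v[2] == 0}     # Cat II: OR gate transparent
tests     = {v for v in space if fault_free(*v) != with_fault(*v)}

print(sorted(tests))              # [(1, 1, 0)]
assert tests == activate & propagate
```

In the full algorithm the circuit is large and the categories are not enumerable, but the principle is the same: restricting the search to Category I and II vectors shrinks the space in which the (possibly unique) detecting vector must lie.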
Title: Quest for a quantum search algorithm for testing stuck-at faults in digital circuits
Pub Date: 2015-11-09 | DOI: 10.1109/DFT.2015.7315145
Shuai Chen, Junlin Chen, Domenic Forte, J. Di, M. Tehranipoor, Lei Wang
Cloning of integrated circuit (IC) chips has emerged as a significant threat to the semiconductor industry. Unauthorized extraction of design information from IC chips can be carried out in numerous ways. Invasive methods physically disassemble the chip package and gain access to the different layers of a die through low-cost delayering processes. This paper presents a new countermeasure exploiting transformable IC technologies. Transformable ICs are fabricated using materials that not only are electronically active but also change their electrical properties and physical composition when subjected to invasive attacks. Simulation results demonstrate that the proposed approach increases the difficulty of chip reverse engineering without introducing large performance overhead.
Title: Chip-level anti-reverse engineering using transformable interconnects
Pub Date: 2015-11-09 | DOI: 10.1109/DFT.2015.7315157
Junlin Chen, Lei Wang
This paper presents a low-power LDPC decoder design that exploits the inherent memory error statistics arising from voltage scaling. By analyzing the sensitivity of decoding performance to errors at different memory bits and memory locations in the LDPC decoder, a scaled supply voltage is applied to memory bits with high algorithmic error-tolerance, reducing memory power consumption while mitigating the impact on decoding performance. We also discuss how to improve the tolerance to memory errors by increasing the number of iterations in LDPC decoders, and investigate the energy overheads and decoding-throughput loss due to extra iterations. Simulation results demonstrate that, by deliberately adjusting the scaled supply voltage applied to memory bits in different locations, the memory power consumption, as well as the overall energy consumption of the LDPC decoder, can be significantly reduced with negligible performance loss.
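The "errors can be decoded away with extra iterations" intuition can be shown with a tiny iterative decoder. The sketch below runs Gallager-style bit flipping on the (7,4) Hamming code as a stand-in for a real LDPC code (the paper's code and decoder are different); each iteration flips the bit involved in the most unsatisfied parity checks, so a bit error, whether from the channel or from a low-voltage memory, is repaired as long as iterations remain.

```python
# Bit-flipping decoding on a small parity-check matrix (illustrative stand-in).
H = [
    [1, 0, 1, 0, 1, 0, 1],   # parity-check matrix of the (7,4) Hamming code
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def bit_flip_decode(word, max_iter=10):
    """Repeatedly flip the bit that participates in the most unsatisfied
    parity checks until all checks pass or iterations run out."""
    w = word[:]
    for _ in range(max_iter):
        syndrome = [sum(h * x for h, x in zip(row, w)) % 2 for row in H]
        if not any(syndrome):
            return w                                  # valid codeword
        votes = [sum(s for s, row in zip(syndrome, H) if row[i])
                 for i in range(len(w))]
        w[max(range(len(w)), key=votes.__getitem__)] ^= 1
    return w

received = [0, 0, 1, 0, 0, 0, 0]   # all-zero codeword with bit 2 flipped
print(bit_flip_decode(received))   # decodes back to the all-zero codeword
```

Allowing more iterations lets the decoder absorb additional injected errors, at the cost of the extra energy and throughput loss the paper quantifies.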
Title: Low-power LDPC decoder design exploiting memory error statistics
Pub Date: 2015-11-09 | DOI: 10.1109/DFT.2015.7315150
I. Pomeranz
A characterization of broadside tests as p-way piecewise-functional broadside tests, for p ≥ 1, partitions the scan-in state of a broadside test into substates of p reachable states. This provides an indication of the proximity to functional operating conditions during the functional clock cycles of the tests, which is important for avoiding excessive power dissipation and overtesting of delay faults. This paper makes the new observations that the intersections of subsets of p reachable states can be used to guide the generation of p-way piecewise-functional broadside tests, and that subsets of p reachable states with larger intersections allow more tests to be generated. The paper describes a logic-simulation-based procedure for computing subsets of reachable states with large intersections, and a test generation procedure based on these observations.
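The intersection-guided selection step can be sketched as plain set arithmetic: given candidate subsets of reachable states (the tiny 4-bit states below are invented for illustration; real designs have far larger state spaces), rank pairs by intersection size and prefer the largest, since larger intersections admit more piecewise-functional tests.

```python
# Ranking pairs of reachable-state subsets by intersection size.
from itertools import combinations

subsets = {                       # hypothetical reachable-state subsets
    "S0": {"0000", "0001", "0011"},
    "S1": {"0001", "0011", "0111"},
    "S2": {"1000", "1100"},
}

def best_pairs(subsets):
    """Pairs of subsets sorted by descending intersection size; larger
    intersections leave more freedom when composing scan-in substates."""
    pairs = [((a, b), len(subsets[a] & subsets[b]))
             for a, b in combinations(sorted(subsets), 2)]
    return sorted(pairs, key=lambda p: -p[1])

for (a, b), n in best_pairs(subsets):
    print(a, b, n)                # S0/S1 share two states; S2 shares none
```

The paper computes such subsets via logic simulation rather than enumeration, but the guiding quantity, the size of the intersection, is the same.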
Title: Piecewise-functional broadside tests based on intersections of reachable states