Title: A defect-tolerant molecular-based memory architecture
Authors: Yoon-Hwa Choi, Myeong-Hyeon Lee
DOI: 10.1109/DFT.2007.22 (https://doi.org/10.1109/DFT.2007.22)
Venue: 22nd IEEE International Symposium on Defect and Fault-Tolerance in VLSI Systems (DFT 2007)
Abstract: This paper presents a defect-tolerant architecture for molecular-based memories. The memory is designed from multiple modules that share the same address space, each constructed as a molecular-based crossbar array. Redundant rows and columns within each crossbar array, together with redundant modules and a proper assignment of control variables, are used to tolerate defects introduced during fabrication as well as faults occurring during normal operation. The crossbar area required for the molecular memory can be made smaller than that of existing schemes while achieving higher memory configurability. Extensive simulation demonstrates that the proposed memory architecture outperforms existing molecular-based redundant memory architectures over a wide range of defect rates.
Title: Fault Secure Encoder and Decoder for Memory Applications
Authors: Helia Naeimi, A. DeHon
DOI: 10.1109/DFT.2007.54 (https://doi.org/10.1109/DFT.2007.54)
Venue: 22nd IEEE International Symposium on Defect and Fault-Tolerance in VLSI Systems (DFT 2007)
Abstract: We introduce a reliable memory system that can tolerate multiple transient errors in the memory words as well as transient errors in the encoder and decoder (corrector) circuitry. The key novel development is the fault-secure detector (FSD) error-correcting code (ECC) definition and associated circuitry that can detect errors in the received encoded vector despite experiencing multiple transient faults in its own circuitry. The structure of the detector is general enough to be used with any ECC that satisfies our FSD-ECC definition. We prove that two known classes of Low-Density Parity-Check codes have the FSD-ECC property: Euclidean Geometry and Projective Geometry codes. We identify a specific FSD-LDPC code that can tolerate up to 33 errors in each memory word or its supporting logic, while requiring only 30% area overhead for memory blocks of 10 Kbits or larger. Larger codes can achieve even higher reliability and lower area overhead. We quantify the importance of protecting the encoder and decoder (corrector) circuitry and illustrate a scenario in which the system failure rate (FIT) is dominated by the failure rate of the encoder and decoder.
Title: Reduction of Fault Latency in Sequential Circuits by using Decomposition
Authors: I. Levin, B. Abramov, V. Ostrovsky
DOI: 10.1109/DFT.2007.24 (https://doi.org/10.1109/DFT.2007.24)
Venue: 22nd IEEE International Symposium on Defect and Fault-Tolerance in VLSI Systems (DFT 2007)
Abstract: The paper presents a novel approach for reducing fault detection latency in self-checking sequential circuits. The authors propose decomposing the finite state machine (FSM) that describes the sequential circuit of interest into a number of component FSMs, each describing a component circuit. Once decomposed into component circuits, the circuit detects faults much faster because, at any given moment, one component circuit (FSM) performs the functional operation while all the others are being tested. The paper addresses the following aspects: a) the decomposition procedure; b) evaluation of the proposed approach through fault injection simulation; and c) estimation of the trade-off between latency reduction and the required hardware overhead. The results are validated on a number of standard benchmarks.
Title: Sensitivity evaluation of TMR-hardened circuits to multiple SEUs induced by alpha particles in commercial SRAM-based FPGAs
Authors: A. Manuzzato, P. Rech, S. Gerardin, A. Paccagnella, L. Sterpone, M. Violante
DOI: 10.1109/DFT.2007.57 (https://doi.org/10.1109/DFT.2007.57)
Venue: 22nd IEEE International Symposium on Defect and Fault-Tolerance in VLSI Systems (DFT 2007)
Abstract: We present an experimental analysis of the sensitivity of SRAM-based FPGAs to alpha particles. We study how the different resources inside the FPGA (LUTs, MUXs, PIPs, etc.) are affected by alpha-induced SEUs, assessing the cross section for the configuration memory cells controlling each of them. We then show two case studies, a chain of FIR filters and a series of soft microcontrollers implemented in the FPGA, measuring the rate of functional interruptions during exposure to a constant flux of alpha particles. The designs are then hardened using triplication with a single final voter, with intermediate voters, and finally including also feedback voters. The robustness of each hardening solution is discussed, analyzing the trade-off between area and fault-tolerance as a function of the number of SEUs in the configuration memory. An analytical model to predict the cross section of a given design with and without hardening solutions is finally proposed, starting from the experimental data.
Title: Testing Reversible One-Dimensional QCA Arrays for Multiple Faults
Authors: Jing Huang, Xiaojun Ma, C. Metra, F. Lombardi
DOI: 10.1109/DFT.2007.17 (https://doi.org/10.1109/DFT.2007.17)
Venue: 22nd IEEE International Symposium on Defect and Fault-Tolerance in VLSI Systems (DFT 2007)
Abstract: Reversible logic design is a well-known paradigm in digital computation. In this paper, quantum-dot cellular automata (QCA) is investigated for testable implementations of reversible logic in array systems. C-testability of a 1D array is investigated for multiple cell faults. It has been shown that fault masking is possible in the presence of multiple faults [9]. A technique for achieving C-testability of a 1D array is introduced by adding lines for controllability and observability. Rules for choosing the lines for controllability and observability are proposed. Examples using the QCA reversible logic gates proposed in [9] are presented.
Title: Power Attacks Resistance of Cryptographic S-boxes with added Error Detection Circuits
Authors: F. Regazzoni, T. Eisenbarth, J. Großschädl, L. Breveglieri, P. Ienne, Israel Koren, C. Paar
DOI: 10.1109/DFT.2007.61 (https://doi.org/10.1109/DFT.2007.61)
Venue: 22nd IEEE International Symposium on Defect and Fault-Tolerance in VLSI Systems (DFT 2007)
Abstract: Many side-channel attacks on implementations of cryptographic algorithms have been developed in recent years demonstrating the ease of extracting the secret key. In response, various schemes to protect cryptographic devices against such attacks have been devised and some implemented in practice. Almost all of these protection schemes target an individual side-channel attack and consequently, it is not obvious whether a scheme for protecting the device against one type of side-channel attacks may make the device more vulnerable to another type of side-channel attacks. We examine in this paper the possibility of such a negative impact for the case where fault detection circuitry is added to a device (to protect it against fault injection attacks) and analyze the resistance of the modified device to power attacks. To simplify the analysis we focus on only one component in the cryptographic device (namely, the S-box in the AES and Kasumi ciphers), and perform power attacks on the original implementation and on a modified implementation with an added parity check circuit. Our results show that the presence of the parity check circuitry has a negative impact on the resistance of the device to power analysis attacks.
Title: Delay Fault Detection Problems in Circuits Featuring a Low Combinational Depth
Authors: M. Favalli
DOI: 10.1109/DFT.2007.18 (https://doi.org/10.1109/DFT.2007.18)
Venue: 22nd IEEE International Symposium on Defect and Fault-Tolerance in VLSI Systems (DFT 2007)
Abstract: The growing bandwidth of digital ICs is often achieved using high-speed pipelines that feature a low combinational depth. In this context, the combinational fraction of path delays becomes comparable to the timing parameters ensuring the correct logic behavior of memory elements (flip-flops and pulsed latches). In the presence of delay defects, the probability that faulty signal transitions give rise to a non-logic behavior of memory elements is no longer negligible with respect to the probability of sampling a valid (correct or wrong) logic value, as traditionally assumed by the delay fault model. This phenomenon is analyzed here at the electrical level, showing that it cannot be fully accounted for by the path delay fault model. Hence, we propose a new fault model that accounts for memory elements possibly behaving in a non-logic way. This model has been validated at the electrical level in the presence of distributed defects and resistive opens.
Title: Empirical Analysis of the Dependence of Test Power, Delay, Energy and Fault Coverage on the Architecture of LFSR-Based TPGs
Authors: M. Kamal, S. Koohi, S. Hessabi
DOI: 10.1109/DFT.2007.27 (https://doi.org/10.1109/DFT.2007.27)
Venue: 22nd IEEE International Symposium on Defect and Fault-Tolerance in VLSI Systems (DFT 2007)
Abstract: Power dissipation, energy consumption of the CUT, and the number of test vectors required to reach a predetermined fault coverage are the most important criteria for evaluating the quality of a test pattern generator (TPG). In this paper, we analyze the flexibility of LFSRs in improving these evaluation criteria. Usually, these criteria must be considered simultaneously when looking for the best configuration. For this purpose, we use a genetic algorithm as the optimization algorithm, define several new optimization functions, and analyze the capability of the LFSR to reduce power, energy, and test delay under these functions. Our experimental results on ISCAS'89 and ITC'99 benchmarks show that the power dissipation of the CUT is approximately independent of the optimization function, so energy consumption depends only on the number of test vectors. Test delay, on the other hand, does depend on the optimization function, which forces us to include a delay term in the function to avoid an increase in the number of test vectors. Finally, by analyzing energy-efficiency values and the behavior of energy consumption as a function of fault coverage under the different optimization functions, we show that delay is a suitable optimization function for LFSRs with respect to the number of test vectors, power dissipation, energy consumption, and energy efficiency.
Title: Hierarchical Fault Compatibility Identification for Test Generation with a Small Number of Specified Bits
Authors: Stelios N. Neophytou, M. Michael
DOI: 10.1109/DFT.2007.46 (https://doi.org/10.1109/DFT.2007.46)
Venue: 22nd IEEE International Symposium on Defect and Fault-Tolerance in VLSI Systems (DFT 2007)
Abstract: Identification of bits that do not necessarily have to be specified in a test set can be beneficial to a number of applications, including low power test, test set encoding and embedding, and test set enriching with n-detect or other fault types properties. This work presents a new method for generating tests containing only a small number of specified bits, while keeping the number of total tests small. The method relies on finding a large number of faults that can be detected by a single test (compatible faults) with a small number of specified bits. Both the total number of specified bits in the test set as well as the number of specified bits per test are minimized. The obtained experimental results show that the proposed methodology can generate compact test sets with an average of 60% of unspecified bits, outperforming existing methods that consider this problem.
Title: On the Error Effects of Random Clock Shifts in Quantum-dot Cellular Automata Circuits
Authors: M. Ottavi, H. Hashempour, V. Vankamamidi, F. Karim, K. Waluś, A. Ivanov
DOI: 10.1109/DFT.2007.58 (https://doi.org/10.1109/DFT.2007.58)
Venue: 22nd IEEE International Symposium on Defect and Fault-Tolerance in VLSI Systems (DFT 2007)
Abstract: This paper analyzes the effect of random phase shifts in the underlying clock signals on the operation of several basic quantum-dot cellular automata (QCA) building blocks. Such phase shifts can result from manufacturing variations or from uneven path lengths in the clocking network. While previous literature has proposed various clock distribution architectures and also provided analysis of manufacturing variations on QCA layouts, so far no literature is available on the characterization of effects resulting from the lack of phase synchronization in the QCA clocks. We perform numerical simulations of these basic building blocks using two different simulation engines available in the QCADesigner tool. We assume that the phase shifts are characterized by a Gaussian distribution with a mean value of i·π/2, where i is the clock number. Our results indicate that the sensitivity of building blocks to phase shifts depends primarily on the layout of the building block, and that most building blocks were able to operate properly under random phase shifts characterized by σ = 5% of π/2.