Online monitoring of the maximum angle error in AMR sensors
Andreina Zambrano, H. Kerkhoff
Pub Date: 2016-07-04 | DOI: 10.1109/IOLTS.2016.7604703
2016 IEEE 22nd International Symposium on On-Line Testing and Robust System Design (IOLTS), pp. 211-212
Anisotropic magnetoresistance (AMR) sensors are often used for angle measurement. The sensor outputs consist of two sinusoidal signals that exhibit undesired characteristics such as offset voltage, amplitude imbalance, and harmonics, all of which affect the measured angle. These parameters drift with aging, but until now such variations have been assumed not to affect sensor accuracy: the largest sources of angle error are compensated at the start of the sensor's life and are not monitored afterwards. However, accuracy requirements are increasing, and in the future it will be necessary to verify that a sensor still meets its accuracy specification despite aging. This research proposes equations for online monitoring of the maximum angle error due to the different error sources. Based on this information, corrective action can be taken to guarantee accuracy over the entire sensor lifetime.
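The angle recovery described in the abstract — two sinusoidal bridge outputs combined via an arctangent — can be sketched to show how an offset voltage translates into angle error. A minimal sketch, assuming the standard AMR signal model (outputs proportional to sin 2α and cos 2α) and an illustrative 2% offset; none of the numbers are from the paper:

```python
import numpy as np

def recovered_angle(alpha, amp=1.0, offset_sin=0.0, offset_cos=0.0):
    """Angle recovered from the two sensor bridges, in radians."""
    v_sin = amp * np.sin(2 * alpha) + offset_sin
    v_cos = amp * np.cos(2 * alpha) + offset_cos
    return 0.5 * np.arctan2(v_sin, v_cos)

# Sweep the mechanical angle and find the worst-case error caused by
# a 2% offset on the sine bridge.
alphas = np.linspace(-np.pi / 4, np.pi / 4, 10001)
errors = recovered_angle(alphas, offset_sin=0.02) - alphas
max_err_deg = np.degrees(np.abs(errors).max())
print(f"max angle error: {max_err_deg:.3f} deg")
```

For a small offset o relative to amplitude A, the worst-case error is roughly o/(2A) radians, which is why even a percent-level aging drift matters against tight accuracy specifications.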
Revisiting software-based soft error mitigation techniques via accurate error generation and propagation models
Mojtaba Ebrahimi, Maryam Rashvand, Firas Kaddachi, M. Tahoori, G. D. Natale
Pub Date: 2016-07-04 | DOI: 10.1109/IOLTS.2016.7604674
IOLTS 2016, pp. 66-71
Radiation-induced soft errors are a growing reliability concern, especially in mission- and safety-critical systems. A variety of software-based fault-tolerance techniques have been proposed and widely used to mitigate soft errors at the application level. Such techniques are typically evaluated using statistical fault injection at the software-visible variables of the system, since fault injection at higher levels of abstraction is much faster than at the logic or register-transfer level (RTL). Recent studies revealed that software-based fault injection is not accurate for analyzing soft errors originating in flip-flops; however, the effectiveness of such techniques for evaluating the entire processor, including register files and cache arrays, has not been studied yet. In this paper, we comprehensively study the soft error rate of several workloads and their versions protected by software-based fault tolerance, performing detailed error-generation and propagation analysis at the hardware level. Our experimental analysis shows no significant correlation between hardware- and software-based fault-injection results for the effectiveness of software-based fault tolerance. Furthermore, software-based fault injection cannot accurately model the relative improvement provided by a fault-tolerant software implementation, and hence its results can be misleading.
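The software-level evaluation flow the abstract critiques — statistical fault injection at software-visible variables — follows a simple pattern: flip a random bit in a random variable, rerun, and classify the outcome against a golden run. A hedged sketch with an illustrative toy workload (names and data are assumptions, not from the paper):

```python
import random

def flip_random_bit(value, rng, bits=32):
    """Return value with one uniformly chosen bit inverted."""
    return value ^ (1 << rng.randrange(bits))

def workload(state):
    # Toy computation standing in for the real application; it only
    # observes the low nibble, so flips in the upper bits are masked.
    return sum(x & 0xF for x in state)

def inject_once(state, rng):
    faulty = list(state)
    idx = rng.randrange(len(faulty))
    faulty[idx] = flip_random_bit(faulty[idx], rng)
    return workload(faulty)

rng = random.Random(0)
state = [3, 1, 4, 1, 5, 9, 2, 6]
golden = workload(state)
outcomes = [inject_once(state, rng) for _ in range(1000)]
masked = sum(o == golden for o in outcomes)
sdc = len(outcomes) - masked  # silent data corruptions
print(f"masked: {masked}  SDC: {sdc}")
```

The paper's point is that masking rates measured this way need not correlate with what hardware-level error generation and propagation actually produce.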
Cache-aware reliability evaluation through LLVM-based analysis and fault injection
Maha Kooli, G. D. Natale, A. Bosio
Pub Date: 2016-07-04 | DOI: 10.1109/IOLTS.2016.7604663
IOLTS 2016, pp. 19-22
Reliability evaluation is a highly costly process that is mainly carried out through fault injection or analytical techniques. Analytical techniques are fast but inaccurate, while fault injection is more accurate but extremely time-consuming. This paper presents a hybrid approach combining analytical and fault-injection techniques to evaluate the reliability of a computing system, considering errors that affect both the data and the instruction cache. In contrast to existing techniques, instead of targeting a hardware model of the cache (e.g., a VHDL description), we only consider the running application (i.e., the software layer). The proposed approach is based on the Low-Level Virtual Machine (LLVM) framework coupled with a cache emulator. As input, the tool requires the application source code, the cache size and policy, and the target microprocessor's instruction set. The main advantage of the proposed approach is a speedup of orders of magnitude compared to existing fault-injection techniques. For validation, we compare the simulation results to those obtained with an FPGA-based fault injector; the similarity of the results confirms the accuracy of the approach.
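The cache emulator at the core of this flow only needs to reproduce hit/miss behavior: a word is vulnerable to a soft error from the cycle its line is filled until it is evicted or read, so the hit/miss sequence yields the exposure windows an analytical model needs. A minimal sketch of such an emulator (direct-mapped, structure assumed — the paper's tool is configurable in size and policy):

```python
class DirectMappedCache:
    def __init__(self, num_lines, line_bytes):
        self.num_lines = num_lines
        self.line_bytes = line_bytes
        self.tags = [None] * num_lines  # one tag per cache line

    def access(self, addr):
        """Return True on hit, False on miss (and fill the line)."""
        line_addr = addr // self.line_bytes
        index = line_addr % self.num_lines
        tag = line_addr // self.num_lines
        if self.tags[index] == tag:
            return True
        self.tags[index] = tag  # fill on miss
        return False

cache = DirectMappedCache(num_lines=4, line_bytes=16)
trace = [0x00, 0x04, 0x40, 0x00, 0x100, 0x04]
hits = [cache.access(a) for a in trace]
print(hits)
```

In the actual approach, the address trace would come from the LLVM-instrumented application rather than a hand-written list.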
Leakage mitigation for low power microcontroller design in 40nm for Internet-of-Things (IoT)
A. Kapoor, N. Engin, J. Verdaasdonk
Pub Date: 2016-07-04 | DOI: 10.1109/IOLTS.2016.7604684
IOLTS 2016, pp. 126-129
Modern systems for ubiquitous computing domains such as the Internet of Things (IoT) and wearable computing are characterized by low duty cycles and low operating and standby power requirements. The design of such systems is further constrained by increasing leakage due to technology scaling and/or increased data-retention requirements. These conflicting requirements make leakage reduction in digital logic and SRAM a primary objective for efficient system realization. In this work, we discuss the effectiveness of advanced leakage-reduction techniques in 40nm (HYT technology) for SRAM and digital logic. For SRAM, adding error correction coding (ECC) to the memory subsystem provides new trade-offs that are advantageous for these low-duty-cycle systems: we show that decreasing the data-retention voltage while preventing errors with ECC reduces the leakage current by 45% (and SRAM leakage power by 70%). For digital logic, test and simulation data show that reverse body biasing (RBB) can reduce logic leakage current by ~3x under worst-case process and temperature conditions. However, RBB must be implemented carefully, as it increases leakage current at nominal temperatures due to higher junction currents; asymmetric biasing, with the PMOS biased by 0.7V and the NMOS by 0.3V, provides the best results. RBB also helps reduce switching energy at low frequencies, because leakage contributes a larger share of total energy than in conventional technologies. We further show that increasing the gate length by 20% reduces leakage current by 2x with minimal penalty in dynamic power and speed. Combining asymmetric RBB with the increased gate length yields a ~6x leakage reduction.
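The combined ~6x figure follows from treating the two techniques as multiplicative, which is plausible here because RBB and longer gates suppress largely independent leakage mechanisms — but that multiplicativity is an assumption of this sketch, not a statement from the paper:

```python
# Factors quoted in the abstract; combination model is an assumption.
rbb_reduction = 3.0          # ~3x from asymmetric reverse body biasing
gate_length_reduction = 2.0  # ~2x from a 20% longer gate
combined = rbb_reduction * gate_length_reduction
print(f"combined leakage reduction: ~{combined:.0f}x")
```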
Analytic models for crossbar read operation
Adedotun Adeyemo, Xiaohan Yang, Anu Bala, J. Mathew, A. Jabir
Pub Date: 2016-07-04 | DOI: 10.1109/IOLTS.2016.7604657
IOLTS 2016, pp. 3-4
Resistive memories have simple structures and can produce highly dense memory through the crossbar architecture without the use of access devices. Reliability, however, remains a problem for resistive memories, especially during the basic read operation. This paper presents a comprehensive model for resistive devices in a crossbar array, as well as models for four crossbar read schemes. These models are non-restrictive and are suitable for accurate analytical study of crossbar arrays and the evaluation of their read-operation performance.
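Why reads without access devices are unreliable can be seen from a first-order sneak-path model — this is the standard textbook worst case, not necessarily the paper's own model: with every unselected cell in the low-resistance state, the sneak current flows through three groups of parallel devices in series, and the path shunts the selected cell.

```python
def sneak_resistance(r_lrs, rows, cols):
    """Worst-case sneak-path resistance of a rows x cols crossbar."""
    on_row = r_lrs / (cols - 1)                # cells on the selected row
    inner = r_lrs / ((rows - 1) * (cols - 1))  # unselected sub-array
    on_col = r_lrs / (rows - 1)                # cells on the selected column
    return on_row + inner + on_col

r_hrs, r_lrs = 100e3, 1e3   # illustrative 100x resistance window
rows = cols = 64
r_sneak = sneak_resistance(r_lrs, rows, cols)
parallel = lambda a, b: a * b / (a + b)
# Effective resistance seen at the selected cross-point for each state:
margin = parallel(r_hrs, r_sneak) / parallel(r_lrs, r_sneak)
print(f"read margin without access devices: {margin:.2f}x (ideal: 100x)")
```

In a 64x64 array the nominal 100x window collapses to a few percent, which is the regime the paper's read-scheme models are built to analyze.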
An odd-even scheme to prevent a packet from being corrupted and dropped in fault tolerant NoCs
B. Bhowmik, S. Biswas, J. Deka
Pub Date: 2016-07-04 | DOI: 10.1109/IOLTS.2016.7604698
IOLTS 2016, pp. 195-198
Packet corruption, misrouting, and dropping place an extra burden on network performance due to stuck-at and open faults on network-on-chip (NoC) interconnects. Existing work on testing interconnect faults has addressed shorts and/or stuck-at faults under the assumption that open faults do not occur on interconnects. We propose a new distributed test scheme that addresses coexisting stuck-at and open faults on NoC interconnects. The scheme is governed by a set of odd/even routers and cores and tests subsets of the interconnects in turn. The results achieve 100% fault coverage in terms of packets received and dropped, and full test coverage in terms of link wires tested. The results also evaluate various performance metrics affected by faulty links in a NoC.
Evaluation of machine learning algorithms for image quality assessment
Ghislain Takam Tchendjou, Rshdee Alhakim, E. Simeu, F. Lebowsky
Pub Date: 2016-07-04 | DOI: 10.1109/IOLTS.2016.7604697
IOLTS 2016, pp. 193-194
In this article, we apply different machine learning (ML) techniques to build objective models that automatically assess image quality in agreement with human visual perception. The six ML methods considered are discriminant analysis, k-nearest neighbors, artificial neural networks, non-linear regression, decision trees, and fuzzy logic. Both the stability and the robustness of the designed models are evaluated using Monte-Carlo cross-validation (MCCV). The simulation results demonstrate that the fuzzy-logic model provides the best prediction accuracy.
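Monte-Carlo cross-validation repeatedly draws a fresh random train/test split and averages the test error, which is what makes it a stability and robustness measure rather than a single score. A minimal sketch with a plain least-squares regressor standing in for the six models (data, split ratio, and repetition count are illustrative assumptions):

```python
import numpy as np

def mccv_score(X, y, n_splits=100, test_frac=0.3, seed=0):
    """Mean absolute test error over random train/test splits."""
    rng = np.random.default_rng(seed)
    n = len(y)
    n_test = int(n * test_frac)
    errors = []
    for _ in range(n_splits):
        perm = rng.permutation(n)  # fresh random split each round
        test, train = perm[:n_test], perm[n_test:]
        w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        errors.append(np.abs(X[test] @ w - y[test]).mean())
    return float(np.mean(errors))

# Synthetic "objective features vs subjective score" data.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X @ np.array([0.5, -0.2, 0.1, 0.0, 0.3]) + 0.05 * rng.normal(size=200)
print(f"MCCV mean absolute error: {mccv_score(X, y):.3f}")
```

The spread of the per-split errors (not shown) is what distinguishes a stable model from one that merely averages well.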
Activity profiling: Review of different solutions to develop reliable and performant design
F. Cacho, A. Benhassain, S. Mhira, A. Sivadasan, V. Huard, P. Cathelin, V. Knopik, A. Jain, C. Parthasarathy, L. Anghel
Pub Date: 2016-07-04 | DOI: 10.1109/IOLTS.2016.7604670
IOLTS 2016, pp. 47-50
Reliability for advanced CMOS nodes is becoming very challenging. The trade-off between high performance and reliability requirements can no longer be addressed by rough extra margins; doing so would result in overdesign and a strong penalty in performance and area. A fine-grained analysis of the mission profile is the path toward an accurate assessment of aging. A wide review of methodologies and results is presented, applied to digital, analog, and RF/mmW circuits. An extensive set of experimental results is shown and compared to simulation. This paper highlights the correlation between activity profiling, or workload, and the performance degradation induced by aging.
Recovery of performance degradation in defective branch target buffers
F. Filippou, G. Keramidas, Michail Mavropoulos, D. Nikolos
Pub Date: 2016-07-04 | DOI: 10.1109/IOLTS.2016.7604679
IOLTS 2016, pp. 96-102
Dynamic voltage and frequency scaling (DVFS) is a commonly used power-management technique. Unfortunately, voltage scaling increases the impact of process variations on memory-cell reliability, resulting in an exponential increase in the number of malfunctioning memory cells. In this work, we systematically investigate the behavior of branch target buffers (BTBs) with faulty memory cells. Although the BTB is an intrinsically fault-tolerant unit (it does not affect the correctness of the system), we show for several fault probabilities and core configurations that disabling the faulty parts of the BTB can hurt the performance of executing applications. To remedy the negative impact of malfunctioning BTB memory cells in contemporary BTB organizations, we present an ultra-lightweight performance-recovery mechanism that introduces minimal hardware overhead and practically zero delay. Using cycle-accurate simulations, the SPEC2006 benchmark suite, a plethora of memory fault maps, and two fault probabilities corresponding to low supply voltages, we show the effectiveness of the proposed recovery mechanism.
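The baseline the paper improves on — disabling faulty BTB entries so branches that map to them always miss — can be sketched behaviorally. This is an illustrative model of the baseline, not the paper's recovery mechanism; the BTB is fault-tolerant in the sense that a missing entry only costs a misprediction, never correctness:

```python
class BTB:
    def __init__(self, entries, fault_map):
        self.targets = [None] * entries
        self.fault_map = fault_map  # True = cell unusable at low Vdd

    def lookup(self, pc):
        idx = pc % len(self.targets)
        if self.fault_map[idx]:
            return None  # disabled faulty entry: always a BTB miss
        return self.targets[idx]

    def update(self, pc, target):
        idx = pc % len(self.targets)
        if not self.fault_map[idx]:
            self.targets[idx] = target

fault_map = [False, True, False, False]  # entry 1 is faulty
btb = BTB(entries=4, fault_map=fault_map)
btb.update(pc=5, target=100)  # maps to faulty entry 1: dropped
btb.update(pc=6, target=200)  # maps to healthy entry 2: stored
print(btb.lookup(5), btb.lookup(6))
```

Hot branches that happen to map onto faulty entries miss on every execution, which is how a handful of dead cells turns into a measurable performance loss.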
Conditional soft-edge flip-flop for SET mitigation
Panagiotis Sismanoglou, D. Nikolos
Pub Date: 2016-07-04 | DOI: 10.1109/IOLTS.2016.7604708
IOLTS 2016, pp. 227-232
Single-event transient (SET) pulses are a significant cause of soft errors in a circuit. To cope with SET pulses, we propose a new storage cell that can operate either as a hard-edge or as a soft-edge flip-flop, depending on whether a transition appears within a time window. The efficiency of the proposed design in reducing soft errors caused by SET pulses is shown through extensive simulations.
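The hard-edge vs soft-edge distinction can be illustrated behaviorally: a soft-edge flip-flop keeps sampling for a short window after the clock edge, so a transient pulse that dies out within the window is not captured. This is an interpretation of the soft-edge mode, not the paper's circuit; the pulse timing and window width are invented for the example:

```python
def hard_edge_sample(d_of_t, t_edge):
    """A hard-edge flip-flop samples D exactly at the clock edge."""
    return d_of_t(t_edge)

def soft_edge_sample(d_of_t, t_edge, window):
    """Soft-edge behavior: the value at the end of the window wins."""
    return d_of_t(t_edge + window)

# Data is logically 0; an SET pulse covers [9.8, 10.1] ns around the
# clock edge at t = 10 ns (all times in ns).
set_pulse = lambda t: 1 if 9.8 <= t <= 10.1 else 0
print(hard_edge_sample(set_pulse, 10.0))       # captures the glitch
print(soft_edge_sample(set_pulse, 10.0, 0.3))  # pulse gone: correct 0
```

The conditional cell in the paper switches between these two behaviors based on whether a transition is observed within the window.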