Signature oriented model pruning to facilitate multi-threaded processors debugging
F. Refan, B. Alizadeh, Z. Navabi
2015 IEEE 33rd VLSI Test Symposium (VTS) | Pub Date: 2015-04-27 | DOI: 10.1109/VTS.2015.7116271
In this paper, we propose a signature-based pruning technique to facilitate the debugging of multi-threaded processors. To accomplish this, a pipelined implementation of the multi-threaded processor model is checked for correspondence against the specification model based on a flushing proof. Then, a two-stage signature-oriented pruning method is proposed to avoid the space-explosion problem caused by inserting debugging facilities into the model. The results show average improvements of 47% and 71% in the size of the decision formula and the CPU time, respectively, for the DLX processor.
UPF-based formal verification of low power techniques in modern processors
Reza Sharafinejad, B. Alizadeh, M. Fujita
2015 IEEE 33rd VLSI Test Symposium (VTS) | Pub Date: 2015-04-27 | DOI: 10.1109/VTS.2015.7116288
Ensuring the correctness of system-on-chip (SoC) designs after the insertion of high-level power management strategies that are disconnected from low-level control signals is a serious challenge. This paper proposes a methodology for formally verifying dynamic power management strategies in modern processor implementations. The proposed methodology is based on correspondence checking between a golden model without power features, serving as the specification, and a pipelined implementation with various power management strategies. Our main contributions are: 1) extracting a Power Management Unit (PMU) from the Unified Power Format (UPF) and Global Power Management (GPM), 2) automatically integrating the PMU into the implementation, and 3) checking the correspondence between the two models with efficient symbolic simulation. The experimental results show that our method enables designers to verify designs with different power management strategies, up to several thousand lines of Register Transfer Level (RTL) code, in minutes. In comparison with existing methods such as [7], our method reduces the number of state variables, the number of clauses, the number of symbolic simulation steps, and the CPU time by 11.04×, 17.57×, 2.08×, and 13.71×, respectively.
Integral impact of BTI and voltage temperature variation on SRAM sense amplifier
I. Agbo, M. Taouil, S. Hamdioui, H. Kukner, P. Weckx, P. Raghavan, F. Catthoor
2015 IEEE 33rd VLSI Test Symposium (VTS) | Pub Date: 2015-04-27 | DOI: 10.1109/VTS.2015.7116291
With the continuous downscaling of CMOS technologies, ICs become more vulnerable to transistor aging, mainly due to Bias Temperature Instability (BTI). Much work has been published on the impact of BTI in SRAMs; however, most of it focuses on the memory cell array. An SRAM also consists of peripheral circuitry such as address decoders, sense amplifiers, etc. This paper characterizes the combined impact of BTI and voltage and temperature fluctuations on the memory sense amplifier for different technology nodes (45nm down to 16nm). The evaluation metric, the sensing delay (SD), is analyzed for various workloads. In contrast to earlier work, this paper thoroughly quantifies the increased impact of BTI in such sense amplifiers across the relevant technology scaling parameters. The results show that the BTI impact at nominal voltage and temperature is 6.7% for 45nm and 12.0% for 16nm when applying the worst-case workload, while it is 1.8% for 45nm and 3.6% for 16nm when applying the best-case workload. In addition, the results show that an increase in supply voltage significantly reduces the BTI degradation; e.g., the degradation at -10% Vdd is 9.0%, while it does not exceed 5.3% at +10% Vdd at room temperature. Moreover, the results show that an increase in temperature can double the degradation; for instance, the degradation at room temperature and nominal Vdd is 6.7%, while it goes up to 18.5% at 398K.
No Fault Found: The root cause
E. Larsson, B. Eklow, Scott Davidsson, R. Aitken, A. Jutman, Christophe Lotz
2015 IEEE 33rd VLSI Test Symposium (VTS) | Pub Date: 2015-04-27 | DOI: 10.1109/VTS.2015.7116284
No Trouble Found (NTF) has been discussed for several years [1]. An NTF occurs when a device fails at the board/system level and that failure cannot be confirmed by the component supplier. There are several explanations for why NTFs occur, including: device complexity; the inability to create system-level hardware/software transactions that uncover hard-to-find defects; and different environments during testing (power, thermal, noise). More recently a new concept, No Fault Found (NFF), has emerged. An NFF represents a defect that cannot be detected by any known means so far. The premise is that at some point the defect will be exposed - most likely at a customer site when the device is in a system. Given that we are looking for defects that we know nothing about and that are theoretically undetectable, it will be interesting to see what the panel has to say about the nature of these defects and how we intend to find them.
Improving the accuracy of defect diagnosis by considering reduced diagnostic information
I. Pomeranz
2015 IEEE 33rd VLSI Test Symposium (VTS) | Pub Date: 2015-04-27 | DOI: 10.1109/VTS.2015.7116270
It was noted earlier that the accuracy of defect diagnosis may be improved if certain tests are removed from consideration by the defect diagnosis procedure. This paper observes that the effects that support the removal of tests also support the removal of observable outputs from consideration during defect diagnosis. Specifically, a test may create an output response that a defect diagnosis procedure will not be able to interpret correctly, and this may affect some observable outputs more strongly than others. Therefore, the removal of observable outputs from consideration can improve the accuracy of diagnosis. This paper describes a generalized augmented defect diagnosis procedure that removes both tests and observable outputs from consideration. It presents experimental results to demonstrate the effects of removing observable outputs on the accuracy of diagnosis.
Panel: When will the cost of dependability end innovation in computer design?
V. Bertacco
2015 IEEE 33rd VLSI Test Symposium (VTS) | Pub Date: 2015-04-27 | DOI: 10.1109/VTS.2015.7116264
As silicon feature sizes approach atomic scales, device reliability is waning and the cost of dependability is on the rise. Post-silicon devices, such as CNTs or TFETs, promise better performance, but at the cost of even worse reliability. Will we reach the point where the cost of reliability for future silicon substrates is too expensive to justify their existence? Or will we discover new ways to contain the cost of dependability? If we do discover low-cost reliability mechanisms, how much time do we have before we must deploy them? If not, how much life does silicon have left?
Testing of 3D-stacked ICs with hard- and soft-dies - a Particle Swarm Optimization based approach
R. Karmakar, Aditya Agarwal, S. Chattopadhyay
2015 IEEE 33rd VLSI Test Symposium (VTS) | Pub Date: 2015-04-27 | DOI: 10.1109/VTS.2015.7116268
This paper presents a test architecture optimization and test scheduling strategy for TSV-based 3D-Stacked ICs (SICs). A test scheduling heuristic that fits both session-based and session-less test environments is used to select the test concurrency between the dies of the stack. The proposed method minimizes the overall test time of the stack without violating the system-level resource and TSV limits. A Particle Swarm Optimization (PSO) based meta-search technique is used to select the resource allocation of the individual dies as well as their internal test schedules. Incorporating PSO in two stages of the optimization produces a notable reduction in the overall test time of the SIC. Experimental results show that up to a 51% reduction in test time can be achieved using our strategy over existing techniques.
A multi-layered methodology for defect-tolerance of datapath modules in processors
Hsunwei Hsiung, S. Gupta
2015 IEEE 33rd VLSI Test Symposium (VTS) | Pub Date: 2015-04-27 | DOI: 10.1109/VTS.2015.7116252
Technology scaling increases circuits' susceptibility to manufacturing imperfections and dramatically decreases processor yields. Traditional defect-tolerance approaches add explicit redundant circuitry to improve yield and are hence very expensive for datapath modules in processors. We propose a multi-layered methodology to develop new and efficient defect-tolerance approaches for processors. Specifically, we develop a microarchitecture-layer approach for arithmetic logic units (ALUs), a circuit-layer approach for multipliers, and an ISA-layer approach for floating-point units (FPUs). We demonstrate that our three approaches improve the performance-per-fabricated-die-area of a modern processor core by 3.5%, 2.4%, and at least 9%, respectively, and hence collectively provide significant gains.
Robust counterfeit PCB detection exploiting intrinsic trace impedance variations
Fengchao Zhang, Andrew Hennessy, S. Bhunia
2015 IEEE 33rd VLSI Test Symposium (VTS) | Pub Date: 2015-04-27 | DOI: 10.1109/VTS.2015.7116294
The long and distributed supply chain of printed circuit boards (PCBs) makes them vulnerable to different forms of counterfeiting attacks. Existing chip-level integrity validation approaches cannot be readily extended to PCBs. In this paper, we address this issue with a novel PCB authentication approach that creates robust, unique signatures from a PCB based on process-induced variations in its trace impedances. The approach incurs virtually zero design and hardware overhead and can be applied to legacy PCBs. Experiments with two sets of commercial PCBs as well as a set of custom-designed PCBs show that the proposed approach can obtain unique authentication signatures with an inter-PCB Hamming distance of 47.94% or higher.
Improving accuracy of on-chip diagnosis via incremental learning
Xuanle Ren, Mitchell Martin, R. D. Blanton
2015 IEEE 33rd VLSI Test Symposium (VTS) | Pub Date: 2015-04-27 | DOI: 10.1109/VTS.2015.7116280
On-chip test/diagnosis has been proposed as an effective method to ensure the lifetime reliability of integrated systems. To manage the complexity of such an approach, an integrated system is partitioned into multiple modules, where each module can be periodically tested, diagnosed, and repaired if necessary. Limited on-chip memory and computing capability, coupled with the inherent uncertainty in diagnosis, leads to misdiagnoses. To address this challenge, a novel incremental-learning algorithm, namely dynamic k-nearest-neighbor (DKNN), is developed to improve the accuracy of on-chip diagnosis. Unlike conventional KNN, DKNN employs online diagnosis data to update the learned classifier so that the classifier keeps evolving as new diagnosis data becomes available. Incorporating online diagnosis data enables tracking of the fault distribution and thus improves diagnostic accuracy. Experiments using various benchmark circuits (e.g., the cache controller from the OpenSPARC T2 processor design) demonstrate that diagnostic accuracy can be more than doubled.