Vertically-stacked silicon nanowire transistors with controllable polarity: A robustness study
P. Gaillardon, H. Ghasemzadeh, G. Micheli
Pub Date: 2013-04-03 | DOI: 10.1109/LATW.2013.6562673
Vertically-stacked Silicon NanoWire FETs (SiNWFETs) with gate-all-around control are the natural and most advanced extension of FinFETs. At advanced technology nodes, Schottky contacts at the channel interfaces give the devices an ambipolar behavior, i.e., a device exhibits both n- and p-type characteristics. When controlled by an independent Double-Gate (DG) structure, this property can be exploited for logic computation, as it provides an intrinsic XOR operation. Electrostatic doping of the transistor removes the need for dopant implantation in the source and drain regions, which potentially gives the devices greater immunity to process variations. In this paper, we propose a novel method based on Technology Computer-Aided Design (TCAD) simulations that enables the prediction of variability in emerging devices. The method is used within our DG-SiNWFET framework and shows that devices whose polarity is controlled electrostatically are more immune to variations in some of their parameters, e.g., the off-current, whose standard deviation is 16× lower.
PASSAT 2.0: A multi-functional SAT-based testing framework
R. Drechsler, Melanie Diepenbeck, Stephan Eggersglüß, R. Wille
Pub Date: 2013-04-03 | DOI: 10.1109/LATW.2013.6562675
An important step in the manufacturing process is the post-production test, in which a test set is applied to each manufactured chip in order to detect defective devices. The test set is typically generated by Automatic Test Pattern Generation (ATPG) algorithms. Classical ATPG algorithms work on a gate-level netlist and use structural knowledge and heuristics to guide the search for a test set. In addition, ATPG is usually coupled with, or accompanied by, other test techniques that increase the quality and compaction of the test set. For example, timing-aware ATPG integrates timing information into the search process to steer the heuristic towards the longest paths, and n-detection test generation is used to increase the detection quality for unmodeled defects. Fault simulation is applied as a post-processing technique to remove detected faults from the fault list and thereby decrease both the pattern count and the overall ATPG run time. Static and dynamic test compaction techniques are further used for test set compaction. All of these techniques are well developed; however, solving them separately limits the quality of the results.
On the functional test of the BTB logic in pipelined and superscalar processors
D. Changdao, M. Graziano, E. Sánchez, M. Reorda, M. Zamboni, N. Zhifan
Pub Date: 2013-04-03 | DOI: 10.1109/LATW.2013.6562677
Electronic systems are increasingly used in safety-critical applications, where the effects of faults must be kept under control and, ideally, avoided. For this purpose, testing the manufactured devices is particularly important, both at the end of the production line and during the operational phase. This paper describes a method to test the logic implementing the Branch Prediction Unit in pipelined and superscalar processors when it follows the Branch Target Buffer (BTB) architecture; the proposed approach is functional, i.e., it is based on forcing the processor to execute a suitably devised test program and observing the produced results. Experimental results are provided for the DLX processor, showing that the method achieves high stuck-at fault coverage while also testing the memory in the BTB.
ISA configurability of an FPGA test-processor used for board-level interconnection testing
Jorge Hernán Meza Escobar, Jörg Sachße, Steffen Ostendorff, H. Wuttke
Pub Date: 2013-04-03 | DOI: 10.1109/LATW.2013.6562678
This paper presents a study of the instruction set architecture (ISA) configurability of an FPGA test-processor used for board-level interconnection testing. The ISA configurability serves as an adaptation mechanism to the test requirements, the FPGA properties, and the devices under test (DUTs). The aim is to show the advantages and limitations of processor configurability at this level and to demonstrate them in the FPGA-based test system (FBTS) developed for board-level interconnection testing. The paper presents the test-processor's concept, adaptation aspects, and architecture, followed by experimental results obtained with different processor configurations. The results show the advantages of a configurable test-processor in terms of performance and FPGA resource utilization.
Neutron sensitivity of integer and floating point operations executed in GPUs
P. Rech, C. Aguiar, C. Frost, L. Carro
Pub Date: 2013-04-03 | DOI: 10.1109/LATW.2013.6562683
Graphics Processing Units are highly susceptible to corruption by neutrons. Experimental results obtained by irradiating a GPU with high-energy neutrons show that the input data type has a strong influence on the neutron-induced error rate of the executed algorithms. Moreover, when operations are performed on floating point data, the probabilities of the mantissa, the exponent, or the sign being corrupted are very different. We investigate the occurrence of errors in the different bit positions and evaluate the related effects on result precision. The reported results and the architectural analysis demonstrate that, under radiation, one should favor floating point arithmetic whenever possible, as it is both more reliable and potentially easier to protect than integer arithmetic.
A RTN variation tolerant guard band design for a deeper nanometer scaled SRAM screening test: Based on EM Gaussians mixtures approximations model of long-tail distributions
Worawit Somha, H. Yamauchi
Pub Date: 2013-04-03 | DOI: 10.1109/LATW.2013.6562687
This paper discusses, for the first time, how guard band (GB) designs for screening tests must change when the shift of the voltage-margin variations after screening becomes larger than that before screening. Since time-dependent (TD) random telegraph noise (RTN) grows at a pace 1.4× faster than the non-TD variations caused by random dopant fluctuation (RDF), the effect of TD variations on the GB shift will exceed that of non-TD variations in coming process generations such as 15 nm and beyond. Three amplitude ratios of RTN to RDF (RTN/RDF = 0.25, 1, 4) are assumed in this discussion. The impacts on screening yield loss caused by 1) a larger RTN/RDF ratio and 2) the approximation error of the longer-tailed RTN distribution are discussed. It is shown that yield loss (chip discarding) due to the screening test may become a crucial issue if RTN cannot be reduced, because at RTN/RDF = 1 the yield loss can become five orders of magnitude larger than at the 40 nm node. It is also found that the required accuracy of the statistical model approximating the RTN tail distribution increases significantly as RTN/RDF approaches 1; GB design errors can increase the yield loss intolerably, by six orders of magnitude. A fitting method is proposed that approximates the longer-tailed RTN Gamma distribution by a simple Gaussian mixture model (GMM). The proposed concepts are 1) adaptive segmentation of the long-tailed distribution such that the log-likelihood of the GMM in each partition is maximized, and 2) a copy-and-paste scheme with adaptive weighting in each partition. It is verified that the proposed method reduces the error of the fail-bit prediction by two orders of magnitude while cutting the iterations needed for EM convergence to 1/16, at the fail probability of interest of 10^-12, which corresponds to the design point for realizing a 99.9% yield of 1-Gbit chips.
Parametric model calibration and measurement extraction for LFN using virtual instrumentation
Luis Francisco, M. Jimenez
Pub Date: 2013-04-03 | DOI: 10.1109/LATW.2013.6562669
This paper presents a replicable and systematic procedure for extracting the parameters used in models that estimate low-frequency noise (LFN) in metal-oxide-semiconductor (MOS) transistors. The procedure does not neglect the effect of any noise source manifesting in the device under test (DUT), and it includes the design and implementation of an automated process for performing noise measurements on a virtual instrumentation platform. Noise parameters were extracted from different DUTs and validated by comparing simulation data with experimental measurements. All experimental data were extracted with the proposed automation procedure.
Predicting die-level process variations from wafer test data for analog devices: A feasibility study
S. Devarakond, J. McCoy, A. Nahar, J. Carulli, S. Bhattacharya, A. Chatterjee
Pub Date: 2013-04-03 | DOI: 10.1109/LATW.2013.6562658
We develop a methodology to predict the process e-test parameters corresponding to each die of an analog/RF system (even in regions of the die where e-test structures are not available) from die test measurements. The methodology diagnoses process variations at a higher spatial resolution in volume manufacturing than other techniques, because manufacturing test data are available at every die site on the wafer, whereas e-test parameters are measured only at specific wafer locations. The manufacturing test data for each die are mapped to spatially interpolated e-test data using regression analysis. The resulting mapping function can then predict the implicit e-test parameter values for each die from its manufacturing test measurements. In addition, the proposed methodology indicates which e-test parameters need to be controlled more tightly than others to maintain high device yield (i.e., the critical e-test parameters). Data collected from 4 lots and 108 wafers of an analog device currently in production are used to demonstrate the concept, and the feasibility of identifying the critical e-test parameters with the proposed methodology is presented.
Markov chains hierarchical dependability models: Worst-case computations
Martin Kohlík, H. Kubátová
Pub Date: 2013-04-03 | DOI: 10.1109/LATW.2013.6562660
Dependability models allow calculating the rate of an event that leads to a hazard state - a situation in which the safety of the modeled dependable system (e.g., railway station signaling and interlocking equipment, automotive systems, etc.) is violated, so that the system may cause material loss, serious injuries, or casualties. A hierarchical dependability model can express multiple redundancies introduced at multiple levels of a system decomposed into cooperating blocks. When the hierarchical model is based on Markov chains, each block and the relations between blocks are expressed by independent Markov chains, which decomposes a complex dependability model into multiple small models. The decomposed model is easier to read, understand, and modify, and the hazard rate is calculated significantly faster, because the decomposition avoids an exponential explosion of the calculation time. The paper shows how to reduce Markov chains and use them to build hierarchical dependability models. An example study demonstrates the advantages of hierarchical dependability models: the decomposition of a complex model into multiple simple models and the speedup of the hazard-rate calculation.
Assessment of diagnostic test for automated bug localization
Valentin Tihhomirov, A. Tsepurov, M. Jenihhin, J. Raik, R. Ubar
Pub Date: 2013-04-03 | DOI: 10.1109/LATW.2013.6562665
Statistical, simulation-based design-error debug approaches rely strongly on the quality of the diagnostic test. At the same time, no dedicated technique exists for assessing this quality, and engineers are forced to rely on subjective figures such as verification test quality metrics or simply the size of the diagnostic test. This paper proposes two new approaches for assessing the diagnostic capability of diagnostic tests for automated bug localization. The first approach relies on probabilistic simulation of diagnostic experiments. The second is based on calculating the Hamming distances between the individual sub-tests in the diagnostic test set. The methods are computationally cheap; they provide a measure of confidence in the localization results and allow the impact of enhancing the diagnostic test to be estimated. The approach is implemented as part of the open-source hardware design and debugging framework zamiaCAD. Experimental results with an industrial processor design and a set of documented bugs demonstrate the feasibility and effectiveness of the proposed approach.