Innovative practices session 5C: Cloud atlas — Unreliability through massive connectivity
Pub Date: 2013-04-29, DOI: 10.1109/VTS.2013.6548907
Helia Naeimi, S. Natarajan, Kushagra Vaid, P. Kudva, Mahesh Natu
The rapid pace of integration, the emergence of low-power, low-cost computing elements, and ubiquitous, ever-increasing connectivity bandwidth have given rise to data center and cloud infrastructures. These infrastructures are beginning to be used on a massive scale across vast geographic boundaries to provide commercial services to businesses such as banking, enterprise computing, online sales, and data mining and processing for targeted marketing, to name a few. Such an infrastructure comprises thousands of compute and storage nodes interconnected by massive network fabrics, each with its own hardware and firmware stacks, topped by layers of software for operating systems, network protocols, schedulers, and application programs. The scale of such an infrastructure has made possible services that would have been unimaginable only a few years ago, but it has the downside of severe losses in case of failure. A system of such scale and risk necessitates methods to (a) proactively anticipate and protect against impending failures, (b) efficiently, transparently, and quickly detect, diagnose, and correct failures in any software or hardware layer, and (c) automatically adapt itself based on prior failures to prevent future occurrences. Addressing these reliability challenges differs inherently from traditional reliability techniques. First, the cloud offers a great amount of redundant resources, from networking to computing and storage nodes, which opens up many reliability approaches that harvest this available redundancy. Second, due to the large scale of the system, techniques with high overheads, especially in power, are not acceptable. Consequently, cross-layer approaches that jointly optimize availability and power have recently gained traction. This session will address these challenges in maintaining reliable service with solutions across the hardware/software stacks. The currently available commercial data-center and cloud infrastructures will be reviewed, and the relative occurrence of different causes of failures, the level to which they are anticipated and diagnosed in practice, and their impact on quality of service and infrastructure design will be discussed. A study on real-time analytics to proactively address failures in a private, secure cloud engaged in domain-specific computations, with streaming inputs received from embedded computing platforms (such as airborne image sources, data streams, or sensors), will be presented next. The session concludes with a discussion on the increased relevance of resiliency features built inside individual systems and components (private cloud) and how the macro public cloud absorbs innovations from this realm.
{"title":"Innovative practices session 5C: Cloud atlas — Unreliability through massive connectivity","authors":"Helia Naeimi, S. Natarajan, Kushagra Vaid, P. Kudva, Mahesh Natu","doi":"10.1109/VTS.2013.6548907","DOIUrl":"https://doi.org/10.1109/VTS.2013.6548907","url":null,"abstract":"The rapid pace of integration, emergence of low power, low cost computing elements, and ubiquitous and ever-increasing bandwidth of connectivity have given rise to data center and cloud infrastructures. These infrastructures are beginning to be used on a massive scale across vast geographic boundaries to provide commercial services to businesses such as banking, enterprise computing, online sales, and data mining and processing for targeted marketing to name a few. Such an infrastructure comprises of thousands of compute and storage nodes that are interconnected by massive network fabrics, each of them having their own hardware and firmware stacks, with layers of software stacks for operating systems, network protocols, schedulers and application programs. The scale of such an infrastructure has made possible service that has been unimaginable only a few years ago, but has the downside of severe losses in case of failure. A system of such scale and risk necessitates methods to (a) proactively anticipate and protect against impending failures, (b) efficiently, transparently and quickly detect, diagnose and correct failures in any software or hardware layer, and (c) be able to automatically adapt itself based on prior failures to prevent future occurrences. Addressing the above reliability challenges is inherently different from the traditional reliability techniques. First, there is a great amount of redundant resources available in the cloud from networking to computing and storage nodes, which opens up many reliability approaches by harvesting these available redundancies. Second, due to the large scale of the system, techniques with high overheads, especially in power, are not acceptable. Consequently, cross layer approaches to optimize the availability and power have gained traction recently. This session will address these challenges in maintaining reliable service with solutions across the hardware/software stacks. The currently available commercial data-center and cloud infrastructures will be reviewed and the relative occurrences of different causalities of failures, the level to which they are anticipated and diagnosed in practice, and their impact on the quality of service and infrastructure design will be discussed. A study on real-time analytics to proactively address failures in a private, secure cloud engaged in domain-specific computations, with streaming inputs received from embedded computing platforms (such as airborne image sources, data streams, or sensors) will be presented next. 
The session concludes with a discussion on the increased relevance of resiliency features built inside individual systems and components (private cloud) and how the macro public cloud absorbs innovations from this realm.","PeriodicalId":138435,"journal":{"name":"2013 IEEE 31st VLSI Test Symposium (VTS)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126355593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Tracing the best test mix through multi-variate quality tracking
Pub Date: 2013-04-29, DOI: 10.1109/VTS.2013.6548886
B. Arslan, A. Orailoglu
The increasing multiplicity of defect types forces the inclusion of tests from a variety of fault models. The quest for test quality is checkmated, though, by the considerable and frequently unnecessary cost of the large number of tests, driven by the lack of a clear correspondence between defects and fault models. While the static derivation of an appropriate test mix from a variety of fault models to deliver high test quality at low cost is a desirable goal, it is challenged by frequent changes in defect characteristics. The consequent necessity for adaptivity is addressed in this paper through a test framework that utilizes the continuous stream of failing test data during production testing to track the varying test quality as defect characteristics evolve, and thus to dynamically adjust the production test set to deliver a target defect-escape level at minimal test cost.
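To illustrate the kind of feedback loop such a framework implies, the toy sketch below prunes tests that contribute no recent unique detections while an escape-rate estimate stays within target. It is only a loose illustration of the adaptive idea; the policy, the inputs, and all names are assumptions, not the paper's multi-variate quality-tracking method.

```python
from collections import Counter

def update_test_mix(active_tests, recent_fail_signatures, escape_estimate, escape_target):
    """Toy sketch of adapting a production test mix to shifting defects.

    recent_fail_signatures: one set per recently failing device, holding the
    names of the active tests that device failed. This is an illustrative
    policy only, not the paper's method.
    """
    unique_detections = Counter()
    for failed in recent_fail_signatures:
        if len(failed) == 1:                      # only one test caught this defect
            unique_detections[next(iter(failed))] += 1

    if escape_estimate > escape_target:
        # Quality is slipping: keep the current mix rather than prune it.
        return list(active_tests)

    # Quality has margin: drop tests with no unique detections in the window.
    kept = [t for t in active_tests if unique_detections[t] > 0]
    return kept if kept else list(active_tests)
```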
{"title":"Tracing the best test mix through multi-variate quality tracking","authors":"B. Arslan, A. Orailoglu","doi":"10.1109/VTS.2013.6548886","DOIUrl":"https://doi.org/10.1109/VTS.2013.6548886","url":null,"abstract":"The increasing multiplicity of defect types forces the inclusion of tests from a variety of fault models. The quest for test quality is checkmated though by the considerable and frequently unnecessary cost of the large number of tests, driven by the lack of a clear correspondence between defects and fault models. While the static derivation of the appropriate test mixes from a variety of fault models to deliver high test quality at low cost is a desirable goal, it is challenged by the frequent changes in defect characteristics. The consequent necessity for adaptivity is addressed in this paper through a test framework that utilizes the continuous stream of failing test data during production testing to track the varying test quality based on evolving defect characteristics and thus dynamically adjust the production test set to deliver a target defect escape level at minimal test cost.","PeriodicalId":138435,"journal":{"name":"2013 IEEE 31st VLSI Test Symposium (VTS)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132626151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Reduced code linearity testing of pipeline ADCs in the presence of noise
Pub Date: 2013-04-29, DOI: 10.1109/VTS.2013.6548913
Asma Laraba, H. Stratigopoulos, S. Mir, Hervé Naudet, G. Bret
Reduced code testing of a pipeline analog-to-digital converter (ADC) consists of inferring the complete static transfer function by measuring the widths of a small subset of codes. The technique exploits the redundancy present in the way the ADC processes the analog input signal. The main challenge is to select the initial subset of codes such that the widths of the remaining codes can be estimated correctly. By applying the state-of-the-art technique to a real 11-bit, 2.5-bit/stage, 55nm pipeline ADC, we observed that the presence of noise affected the accuracy of the estimated static performances (e.g., differential non-linearity (DNL) and integral non-linearity (INL)). In this paper, we exploit another feature of the redundancy to cancel out the effect of noise. Experimental measurements demonstrate that the proposed reduced code testing technique estimates the static performances with an accuracy equivalent to the standard histogram technique, while only 6% of the codes need to be considered, which represents a very significant test time reduction.
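For context, the static performances mentioned above follow from per-code widths via the standard histogram relations. The sketch below is a generic illustration of that final computation (not the authors' reduced-code estimator), applied once the unmeasured code widths have been inferred.

```python
import numpy as np

def dnl_inl_from_code_widths(code_widths, lsb=None):
    """Compute DNL and INL (in LSB) from per-code bin widths.

    code_widths: measured or inferred width of each ADC code (in volts, or
    in hit counts from a ramp/histogram test). The ideal width (1 LSB)
    defaults to the mean width over the codes considered.
    """
    w = np.asarray(code_widths, dtype=float)
    if lsb is None:
        lsb = w.mean()          # estimate of the ideal code width
    dnl = w / lsb - 1.0         # per-code deviation from 1 LSB
    inl = np.cumsum(dnl)        # running sum of DNL gives INL
    return dnl, inl
```

In a reduced-code flow like the one described, only the selected ~6% of widths would be measured and the rest estimated before applying these relations.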
{"title":"Reduced code linearity testing of pipeline adcs in the presence of noise","authors":"Asma Laraba, H. Stratigopoulos, S. Mir, Hervé Naudet, G. Bret","doi":"10.1109/VTS.2013.6548913","DOIUrl":"https://doi.org/10.1109/VTS.2013.6548913","url":null,"abstract":"Reduced code testing of a pipeline analog-to-digital converter (ADC) consists of inferring the complete static transfer function by measuring the width of a small subset of codes. This technique exploits the redundancy that is present in the way the ADC processes the analog input signal. The main challenge is to select the initial subset of codes such that the widths of the rest of the codes can be estimated correctly. By applying the state-of-the-art technique to a real 11-bit 2.5-bit/stage, 55nm pipeline ADC, we observed that the presence of noise affected the accuracy of the estimation of the static performances (e.g, differential nonlinearity and integral non-linearity). In this paper, we exploit another feature of the redundancy to cancel out the effect of noise. Experimental measurements demonstrate that this reduced code testing technique estimates the static performances with an accuracy equivalent to the standard histogram technique. Only 6 % of the codes need to be considered which represents a very significant test time reduction.","PeriodicalId":138435,"journal":{"name":"2013 IEEE 31st VLSI Test Symposium (VTS)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134380148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Distributed dynamic partitioning based diagnosis of scan chain
Pub Date: 2013-04-29, DOI: 10.1109/VTS.2013.6548916
Yu Huang, Xiaoxin Fan, Huaxing Tang, Manish Sharma, Wu-Tung Cheng, B. Benware, S. Reddy
The diagnosis memory footprint for large designs grows with design size, to the point that diagnosis throughput for a given set of computational resources becomes a bottleneck in volume diagnosis. In this paper, we propose a scan chain diagnosis flow based on dynamic design partitioning and a distributed diagnosis architecture that improves diagnosis throughput by more than an order of magnitude.
{"title":"Distributed dynamic partitioning based diagnosis of scan chain","authors":"Yu Huang, Xiaoxin Fan, Huaxing Tang, Manish Sharma, Wu-Tung Cheng, B. Benware, S. Reddy","doi":"10.1109/VTS.2013.6548916","DOIUrl":"https://doi.org/10.1109/VTS.2013.6548916","url":null,"abstract":"Diagnosis memory footprint for large designs is growing as design sizes grow such that the diagnosis throughput for given computational resources becomes a bottleneck in volume diagnosis. In this paper, we propose a scan chain diagnosis flow based on dynamic design partitioning and distributed diagnosis architecture that can improve the diagnosis throughput over one order of magnitude.","PeriodicalId":138435,"journal":{"name":"2013 IEEE 31st VLSI Test Symposium (VTS)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116971446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Improving test generation by use of majority gates
Pub Date: 2013-04-29, DOI: 10.1109/VTS.2013.6548883
P. Wohl, J. Waicukauski
Scan testing and scan compression have become key components for reducing test cost. We present a novel technique to increase automatic test pattern generation (ATPG) effectiveness by identifying and exploiting instances of increasingly common “majority gates”. Test generation is modified so that better decisions are made and care bits can be reduced. Consequently, test coverage, pattern count, and CPU time can all be improved. The new method requires no hardware support and can be applied to any ATPG system, although scan compression methods benefit the most.
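As background for why majority gates help test generation, here is a minimal sketch (not the authors' algorithm) of the implication property they enable: once two inputs of a three-input majority gate agree, the output is fixed and the remaining input becomes a don't-care, so no care bit needs to be spent on it.

```python
def majority3(a, b, c):
    """Three-input majority gate: output is 1 iff at least two inputs are 1."""
    return int(a + b + c >= 2)

def imply_majority3(inputs):
    """Forward implication under a partial assignment (values 0, 1, or None).

    Returns the gate output if it is already determined, else None. The
    property an ATPG can exploit: as soon as two inputs agree, the third
    is a don't-care, so no care bit or backtrackable decision is needed.
    """
    ones = sum(1 for v in inputs if v == 1)
    zeros = sum(1 for v in inputs if v == 0)
    if ones >= 2:
        return 1
    if zeros >= 2:
        return 0
    return None

assert imply_majority3([1, 1, None]) == 1     # third input left unspecified
assert imply_majority3([0, None, 0]) == 0
assert imply_majority3([1, 0, None]) is None  # still needs a decision
```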
{"title":"Improving test generation by use of majority gates","authors":"P. Wohl, J. Waicukauski","doi":"10.1109/VTS.2013.6548883","DOIUrl":"https://doi.org/10.1109/VTS.2013.6548883","url":null,"abstract":"Scan testing and scan compression have become key components for reducing test cost. We present a novel technique to increase automatic test pattern generation (ATPG) effectiveness by identifying and exploiting instances of increasingly common “majority gates”. Test generation is modified so that better decision are made and care bits can be reduced. Consequently, test coverage, pattern count and CPU time can be improved. The new method requires no hardware support, and can be applied to any ATPG system, although scan compression methods can benefit the most.","PeriodicalId":138435,"journal":{"name":"2013 IEEE 31st VLSI Test Symposium (VTS)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115607975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

A framework for low overhead hardware based runtime control flow error detection and recovery
Pub Date: 2013-04-29, DOI: 10.1109/VTS.2013.6548908
A. Chaudhari, Junyoung Park, J. Abraham
Transient errors during the execution of a process running on a processor can lead to serious system failures or security lapses. It is necessary to detect, and if possible correct, these errors before any damage is caused to the system. Among the many approaches, monitoring the control flow of an application at runtime is one technique used for transient error detection during application execution. Although promising, the cost of implementing control flow checks in software has been prohibitively high, and hence they are not widely used in practice. In this paper we describe a hardware-based control flow monitoring technique that can detect errors in both the control flow and the instruction stream being executed on a processor. Our technique achieves a high coverage of control flow error detection (99.98%) and can quickly recover from an error, making it resilient to transient control flow errors. It imposes an extremely low performance overhead (~1%) and a reasonable area cost (< 6%) on the host processor. The framework for runtime monitoring of control flow described in this paper can be extended to efficiently monitor and detect any transient errors in the execution of instructions on a processor.
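To make the general idea concrete, below is a minimal software model of the class of technique such a hardware monitor implements: checking each control transfer against the legal successors recorded for the previous basic block. The block names, the example control-flow graph, and the traces are invented for illustration and are not taken from the paper.

```python
# Hypothetical control-flow graph: for each basic block, the set of blocks
# that a legal control transfer may reach next.
LEGAL_SUCCESSORS = {
    "B0": {"B1", "B2"},   # a branch out of B0 may reach B1 or B2
    "B1": {"B3"},
    "B2": {"B3"},
    "B3": set(),          # exit block
}

def check_trace(trace):
    """Return the index of the first illegal transition in a block trace,
    or None if every observed transfer matches the CFG."""
    for i in range(1, len(trace)):
        prev, cur = trace[i - 1], trace[i]
        if cur not in LEGAL_SUCCESSORS.get(prev, set()):
            return i      # transient error: control reached an illegal block
    return None

# A legal run passes; a corrupted branch target (B0 jumping straight to B3)
# is flagged at the offending transition.
assert check_trace(["B0", "B1", "B3"]) is None
assert check_trace(["B0", "B3"]) == 1
```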
{"title":"A framework for low overhead hardware based runtime control flow error detection and recovery","authors":"A. Chaudhari, Junyoung Park, J. Abraham","doi":"10.1109/VTS.2013.6548908","DOIUrl":"https://doi.org/10.1109/VTS.2013.6548908","url":null,"abstract":"Transient errors during execution of a process running on a processor can lead to serious system failures or security lapses. It is necessary to detect, and if possible, correct these errors before any damage is caused to the system. Of the many approaches, monitoring the control flow of an application during runtime is one of the techniques used for transient error detection during an application execution. Although promising, the cost of implementing the control flow checks in software has been prohibitively high and hence is not widely used in practice. In this paper we describe a hardware based control flow monitoring technique which has the capability to detect errors in control flow and the instruction stream being executed on a processor. Our technique achieves a high coverage of control flow error detection (99.98 %) and has the capability to quickly recover from the error, making it resilient to transient control flow errors. It poses an extremely low performance overhead (~ 1 %) and reasonable area cost (<; 6 %) to the host processor. The framework for runtime monitoring of control flow described in this paper can be extended to efficiently monitor and detect any transient errors in the execution of instructions on a processor.","PeriodicalId":138435,"journal":{"name":"2013 IEEE 31st VLSI Test Symposium (VTS)","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115811188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Innovative practices session 10C: Delay test
Pub Date: 2013-04-29, DOI: 10.1109/VTS.2013.6548938
P. Pant, M. Amodeo, S. Vora, J. E. Colburn
The importance of testing for timing-related defects continues to increase as devices are manufactured at ever smaller geometries and I/O frequencies rise to the point that production testers can no longer provide stored response vectors at speed. As a result, it is increasingly important to have high quality tests for delay defects to bring down the product's DPPM levels (defective parts per million) shipped to end customers. Moreover, during the design characterization phase, these same tests are also used for isolating systematic slow paths in the design (speedpaths). With the inexorable march toward lower-power SKUs, there remains a critical need to find and fix these limiting speedpaths prior to revenue shipments. Over the years, testing for delay defects has morphed from pure functional vectors that try to exercise a device as it would be exercised in an end-user system, to intermediate methods that load assembly code into on-chip caches and execute it at speed, to completely structural methods that utilize scan DFT and check delays at the signal and gate level without resorting to any functional methods at all. This innovative practices session includes three presentations that cover a wide range of topics related to delay testing. The first presentation, from Cadence, outlines an approach to at-speed coverage that utilizes synergies between clock generation logic, DFT logic, and ATPG tools. The solution leverages On-Product Clock Generation (OPCG) logic for high-speed testing and is compatible with existing test compression DFT. The additional DFT proposed enables simultaneous test of multiple clock domains and the inter-domain interfaces, while accounting for timing constraints between them. The ATPG clocking sequences are automatically generated by analyzing the clock domains and interfaces, and this information is used to optimize the DFT structures and in the ATPG process. The second presentation discusses the transformation in Intel's microprocessor speedpath characterization over the last few generations, going from pure functional content to scan-based structural content. It presents a new “trend based approach” for efficient speedpath isolation, and also delves into a comparison of the effectiveness and correlation of functional vs. structural test patterns for speedpath debug. The third presentation covers the differences between the various delay defect models, namely transition delay, path delay, and small delay, and the pros and cons of each. It goes on to describe new small delay defect ATPG flows implemented at Nvidia that are designed to balance the test generation simplicity of transition delay test patterns with the defect coverage provided by path delay test patterns. These flows enable the small delay defect test patterns to meet the test quality, delivery schedule, and ATPG efficiency requirements set by a product's test cost goals.
{"title":"Innovative practices session 10C: Delay test","authors":"P. Pant, M. Amodeo, S. Vora, J. E. Colburn","doi":"10.1109/VTS.2013.6548938","DOIUrl":"https://doi.org/10.1109/VTS.2013.6548938","url":null,"abstract":"The importance of testing for timing related defects continues to increase as devices are manufactured at ever smaller geometries and IO frequencies have increased to the point that production testers can no longer provide stored response vectors at-speed. As a result, it is increasingly important to have high quality tests for delay defects to bring down the product's DPPM levels (defective parts per million) shipped to end customers. Moreover, during the design characterization phase, these same tests are also used for isolating systematic slow paths in the design (speedpaths). With the inexorable march toward lower power SKUs, there remains a critical need to find and fix these limiting speedpaths prior to revenue shipments. Over the years, testing for delay defect has morphed from pure functional vectors that try to exercise a device like it would be in an end-user system, to intermediate methods that load assembly code into on-chip caches and execute them at speed, to completely structural methods that utilize scan DFT and check delays at the signal and gate level without resorting to any functional methods at all. This innovative practices session includes three presentations that cover a wide range of topics related to delay testing. The first presentation from Cadence outlines an approach to at-speed coverage that utilizes synergies between clock generation logic, DFT logic and ATPG tools. The solution leverages On-Product Clock Generation logic (OPCG) for high-speed testing and is compatible with existing test compression DFT. The additional DFT proposed enables simultaneous test of multiple clock domains and the inter-domain interfaces, while accounting for timing constraints between them. The ATPG clocking sequences are automatically generated by analyzing the clock domains and interfaces, and this information is used to optimize the DFT structures and for use in the ATPG process. The second presentation discusses the transformation in Intel's microprocessor speedpath characterization world over the last few generations, going from pure functional content to scan based structural content. It presents a new “trend based approach” for efficient speedpath isolation, and also delves into a comparison of the effectiveness and correlation of functional vs. structural test patterns for speedpath debug. The third presentation presents the differences between the various delay defect models, namely transition delay, path delay and small-delay, and the pros and cons of each. It goes on to describe new small delay defect ATPG flows implemented at Nvidia that are designed to balance the test generation simplicity of transition delay test patterns and the defect coverage provided by path delay test patterns. 
These flows enable the small delay defect test patterns to meet the test quality, delivery schedules and ATPG efficiency requirements set by a product's test cost goals.","PeriodicalId":138435,"journal":{"name":"2013 IEEE 31st VLSI Test Symposium (VTS)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125567635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Finding best voltage and frequency to shorten power-constrained test time
Pub Date: 2013-04-29, DOI: 10.1109/VTS.2013.6548882
P. Venkataramani, S. Sindia, V. Agrawal
In a digital test, supply voltage (V_DD), clock frequency (f_test), peak power (P_MAX), and test time (TT) are related parameters. For a given limit P_MAX = P_MAX,func, normally set by the functional specification, we find the optimum V_DD = V_DDopt and f_test = f_opt that minimize TT. A solution is derived analytically from technology-dependent characterization of semiconductor devices. It is shown that at V_DDopt the peak power consumed by any test cycle just equals P_MAX,func, while f_test is the fastest clock that the critical path at V_DDopt will allow. The paper demonstrates how the test parameters can be obtained numerically with MATLAB, or experimentally with bench test equipment such as National Instruments' ELVIS. This optimization can cut the test time of ISCAS'89 benchmarks in 180nm CMOS in half.
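As a rough illustration of the trade-off the paper solves analytically, the sketch below numerically searches for the (V_DD, f_test) point that minimizes test time under a peak-power cap, using a generic alpha-power-law delay model and a C·V²·f power model. All constants are invented placeholders, not the paper's 180nm device characterization.

```python
import numpy as np

N_CYCLES = 1.0e6      # assumed scan test length in clock cycles
C_EFF    = 50e-12     # assumed switched capacitance of the worst test cycle (F)
P_MAX    = 80e-3      # assumed peak-power budget from the functional spec (W)
K, V_TH, ALPHA = 2e9, 0.5, 1.3   # assumed alpha-power-law fit of the critical path

def f_crit(vdd):
    """Fastest clock the critical path allows at supply vdd (alpha-power law)."""
    return K * (vdd - V_TH) ** ALPHA / vdd

def peak_power(vdd, f):
    """Peak dynamic power of the worst test cycle at (vdd, f), C*V^2*f model."""
    return C_EFF * vdd ** 2 * f

best = None
for vdd in np.linspace(0.6, 1.8, 1201):
    f = f_crit(vdd)                       # structural (critical-path) limit
    if peak_power(vdd, f) > P_MAX:        # power limit binds: slow the clock
        f = P_MAX / (C_EFF * vdd ** 2)
    tt = N_CYCLES / f                     # test time at this operating point
    if best is None or tt < best[2]:
        best = (vdd, f, tt)

vdd_opt, f_opt, tt_min = best
print(f"V_DDopt ~ {vdd_opt:.3f} V, f_opt ~ {f_opt/1e6:.1f} MHz, TT ~ {tt_min*1e3:.2f} ms")
# As the abstract states, the optimum sits where the two limits meet: the
# worst cycle consumes just P_MAX and the clock is as fast as the critical
# path at V_DDopt allows.
```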
{"title":"Finding best voltage and frequency to shorten power-constrained test time","authors":"P. Venkataramani, S. Sindia, V. Agrawal","doi":"10.1109/VTS.2013.6548882","DOIUrl":"https://doi.org/10.1109/VTS.2013.6548882","url":null,"abstract":"In a digital test, supply voltage (V<sub>DD</sub>), clock frequency (f<sub>test</sub>), peak power (P<sub>MAX</sub>) and test time (TT) are related parameters. For a given limit P<sub>MAX</sub> = P<sub>MAX func</sub>, normally set by functional specification, we find the optimum V<sub>DD</sub> = V<sub>DDopt</sub> and f<sub>test</sub> = f<sub>opt</sub> to minimize TT. A solution is derived analytically from the technology-dependent characterization of semiconductor devices. It is shown that at V<sub>DDopt</sub> the peak power any test cycle consumes just equals P<sub>MAX func</sub> and f<sub>test</sub> is fastest that the critical path at V<sub>DDopt</sub> will allow. The paper demonstrates how test parameters can be obtained numerically from MATLAB, or experimentally by bench test equipment like National Instruments' ELVIS. This optimization can cut the test time of ISCAS'89 benchmarks in 180nm CMOS into half.","PeriodicalId":138435,"journal":{"name":"2013 IEEE 31st VLSI Test Symposium (VTS)","volume":"253 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133780173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Innovative practices session 6C: Latest practices in test compression
Pub Date: 2013-04-29, DOI: 10.1109/VTS.2013.6548914
J. E. Colburn, K. Chung, H. Konuk, Y. Dong
Test compression has become a requirement for many designs to meet the required test quality levels in reasonable test times and at acceptable test cost. This session will cover some of the tradeoffs and options available from the broad spectrum of test compression solutions. The first talk will address the difficulty, when testing large numbers of logic blocks and processor cores, of maintaining high test quality without a requisite increase in test cost stemming from the need to allocate substantially more pins for digital test. Simply adding more chip-level pins for testing conflicts with packaging constraints and can potentially undermine other cost-saving techniques that rely on utilizing fewer pins, such as multi-site testing. What is needed instead is a DFT strategy optimized for complex SOC designs that use multicore processors: a strategy in which the architecture and automation elements work in tandem to lower test cost without compromising test quality or significantly increasing automatic test pattern generation (ATPG) runtime. This presentation highlights an optimized DFT architecture, referred to as the “shared I/O” of DFTMAX, a synthesis-based test solution that has been used successfully in multicore processor designs as well as complex SOC designs. Using this approach, they were able to reduce scan test pins significantly with a similar or even smaller number of ATPG patterns, without compromising test coverage, and to achieve over a 2X reduction in wafer-level scan test time. The second talk will present several DFT techniques to reduce test time and improve coverage in the context of core wrapping. These methods include using external scan chains with separate compression logic inside each place-and-route block instead of having ‘chip-top’ scan compression logic for all external scan chains from the different place-and-route blocks. In addition, some tradeoffs of using dynamic launch-on-shift/launch-on-capture (LOS/LOC) instead of static will be covered, as will methods for preventing decompressor logic from feeding X values during launch-on-shift test patterns and the benefits of control test points in reducing ATPG vector counts. The final presentation will cover various methodologies for reducing test data volume on different chips. Some work toward achieving higher compression ratios in the future will also be discussed. As with any good engineering solution, there are constraints and tradeoffs that need to be considered with these choices.
{"title":"Innovative practices session 6C: Latest practices in test compression","authors":"J. E. Colburn, K. Chung, H. Konuk, Y. Dong","doi":"10.1109/VTS.2013.6548914","DOIUrl":"https://doi.org/10.1109/VTS.2013.6548914","url":null,"abstract":"Test compression has become a requirement for many designs to meet the required test quality levels in reasonable test times and with acceptable test cost. This session will cover some of the tradeoffs and options available from the broad spectrum of test compression solutions. The first talk will address the difficulty when testing large numbers of logic blocks and processor cores of maintaining high test quality without a requisite increase in test cost stemming from the need to allocate substantially more pins for digital test. Simply adding more chip-level pins for testing conflicts with packaging constraints and can potentially undermine other cost-saving techniques that rely on utilizing fewer pins such as multi-site testing. What is needed instead is a DFT strategy optimized for complex SOC designs that use multicore processors-a strategy in which the architecture and automation elements work in tandem to lower test cost without compromising test quality or significantly increasing automatic test pattern generation (ATPG) runtime. This presentation highlights an optimized DFT architecture, referred to as “shared I/O” of DFTMAX, a synthesis-based test solution that has been used successfully in multicore processor designs as well as complex SOC designs. Using this approach, they were able to reduce scan test pins significantly with similar or even less ATPG patterns, without compromising test coverage, and achieve over 2X reduction in wafer level scan test time. The second talk will present many DFT techniques to reduce test time and improve coverage in the context of core wrapping. Some of these methods include using external scan chains with separate compression logic inside each place-and-route block instead of having ‘chip-top’ scan compression logic for all external scan chains from different place-and-route. In addition, some tradeoffs of using dynamic launch-on-shift/launch-on-capture (LOS/LOC) instead of static will be covered. Some other methods will be covered for preventing decompressor logic from feeding X'es during launch-on-shift test patterns and the benefits of control test-points to reduce ATPG vector counts. The final presentation will cover various methodologies for reducing the test data volume on different chips. Some work to achieve a higher compression ratio in the future will also be discussed. As with any good engineering solution, there are some constraints and tradeoffs that also need to be considered with those choices.","PeriodicalId":138435,"journal":{"name":"2013 IEEE 31st VLSI Test Symposium (VTS)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134154813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

A multi-faceted approach to FPGA-based Trojan circuit detection
Pub Date: 2013-04-29, DOI: 10.1109/VTS.2013.6548925
Michael Patterson, Aaron Mills, Ryan A. Scheel, Julie Tillman, Evan Dye, Joseph Zambreno
Three general approaches to detecting Trojans embedded in FPGA circuits were explored in the context of the 2012 CSAW Embedded Systems Challenge: functional testing, power analysis, and direct analysis of the bitfile. These tests were used to classify a set of 32 bitfiles that include Trojans of an unknown nature. The project is a step towards developing a framework for Trojan detection that leverages the strengths of a variety of testing techniques.
{"title":"A multi-faceted approach to FPGA-based Trojan circuit detection","authors":"Michael Patterson, Aaron Mills, Ryan A. Scheel, Julie Tillman, Evan Dye, Joseph Zambreno","doi":"10.1109/VTS.2013.6548925","DOIUrl":"https://doi.org/10.1109/VTS.2013.6548925","url":null,"abstract":"Three general approaches to detecting Trojans embedded in FPGA circuits were explored in the context of the 2012 CSAW Embedded Systems Challenge: functional testing, power analysis, and direct analysis of the bitfile. These tests were used to classify a set of 32 bitfiles which include Trojans of an unknown nature. The project is a step towards developing a framework for Trojan-detection which leverages the strengths of a variety of testing techniques.","PeriodicalId":138435,"journal":{"name":"2013 IEEE 31st VLSI Test Symposium (VTS)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131013675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}