
Latest publications from the 2013 IEEE 31st VLSI Test Symposium (VTS)

Innovative practices session 5C: Cloud atlas — Unreliability through massive connectivity
Pub Date : 2013-04-29 DOI: 10.1109/VTS.2013.6548907
Helia Naeimi, S. Natarajan, Kushagra Vaid, P. Kudva, Mahesh Natu
The rapid pace of integration, the emergence of low-power, low-cost computing elements, and ubiquitous, ever-increasing connectivity bandwidth have given rise to data center and cloud infrastructures. These infrastructures are beginning to be used on a massive scale across vast geographic boundaries to provide commercial services to businesses such as banking, enterprise computing, online sales, and data mining and processing for targeted marketing, to name a few. Such an infrastructure comprises thousands of compute and storage nodes interconnected by massive network fabrics, each node with its own hardware and firmware stacks, topped by layers of software for operating systems, network protocols, schedulers and application programs. The scale of such an infrastructure has made possible services that were unimaginable only a few years ago, but has the downside of severe losses in case of failure. A system of such scale and risk necessitates methods to (a) proactively anticipate and protect against impending failures, (b) efficiently, transparently and quickly detect, diagnose and correct failures in any software or hardware layer, and (c) automatically adapt based on prior failures to prevent future occurrences. Addressing these reliability challenges calls for approaches inherently different from traditional reliability techniques. First, a great amount of redundant resources is available in the cloud, from networking to computing and storage nodes, which opens up many reliability approaches that harvest these available redundancies. Second, due to the large scale of the system, techniques with high overheads, especially in power, are not acceptable. Consequently, cross-layer approaches that jointly optimize availability and power have recently gained traction. This session will address these challenges in maintaining reliable service with solutions across the hardware/software stacks.
The currently available commercial data-center and cloud infrastructures will be reviewed, along with the relative occurrence of different causes of failure, the level to which they are anticipated and diagnosed in practice, and their impact on quality of service and infrastructure design. A study on real-time analytics to proactively address failures in a private, secure cloud engaged in domain-specific computations, with streaming inputs received from embedded computing platforms (such as airborne image sources, data streams, or sensors), will be presented next. The session concludes with a discussion of the increased relevance of resiliency features built inside individual systems and components (private cloud) and of how the macro public cloud absorbs innovations from this realm.
Citations: 2
Tracing the best test mix through multi-variate quality tracking
Pub Date : 2013-04-29 DOI: 10.1109/VTS.2013.6548886
B. Arslan, A. Orailoglu
The increasing multiplicity of defect types forces the inclusion of tests from a variety of fault models. The quest for test quality is checkmated, though, by the considerable and frequently unnecessary cost of the large number of tests, driven by the lack of a clear correspondence between defects and fault models. While statically deriving an appropriate test mix from a variety of fault models to deliver high test quality at low cost is a desirable goal, it is challenged by frequent changes in defect characteristics. This paper addresses the consequent need for adaptivity through a test framework that utilizes the continuous stream of failing test data during production testing to track the varying test quality under evolving defect characteristics, and thus dynamically adjusts the production test set to deliver a target defect escape level at minimal test cost.
Citations: 4
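The abstract above does not spell out the adaptive loop; as a rough illustration (test names, costs, and fail rates are all invented, and this greedy selection is not necessarily the authors' method), a monitor could estimate each test's unique fail rate from the production fail stream and keep only the cheapest subset that meets a target escape level:

```python
# Hypothetical sketch of dynamic test-mix adjustment: from production fail
# data, estimate the rate of defective parts each test uniquely catches,
# then greedily keep the highest quality-per-cost tests until the projected
# defect escape level meets the target.  All figures are illustrative.
def choose_test_mix(tests, target_escape):
    """tests: list of (name, cost, unique_fail_rate) tuples."""
    by_value = sorted(tests, key=lambda t: t[2] / t[1], reverse=True)
    escape = sum(rate for _, _, rate in tests)   # escapes if no test is applied
    chosen = []
    for name, cost, rate in by_value:
        if escape <= target_escape:
            break
        chosen.append(name)
        escape -= rate   # defects this test uniquely catches no longer escape
    return chosen, escape

# Example: the expensive path-delay set is dropped once cheaper tests
# already meet the escape target.
mix, residual = choose_test_mix(
    [("stuck-at", 1.0, 0.003), ("transition", 2.0, 0.004), ("path-delay", 4.0, 0.001)],
    target_escape=0.002,
)
```

Re-running the selection as the rate estimates drift is what makes the mix track evolving defect characteristics.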
Reduced code linearity testing of pipeline ADCs in the presence of noise
Pub Date : 2013-04-29 DOI: 10.1109/VTS.2013.6548913
Asma Laraba, H. Stratigopoulos, S. Mir, Hervé Naudet, G. Bret
Reduced code testing of a pipeline analog-to-digital converter (ADC) consists of inferring the complete static transfer function by measuring the widths of a small subset of codes. The technique exploits the redundancy present in the way the ADC processes the analog input signal. The main challenge is to select the initial subset of codes such that the widths of the remaining codes can be estimated correctly. By applying the state-of-the-art technique to a real 11-bit, 2.5-bit/stage, 55nm pipeline ADC, we observed that the presence of noise affected the accuracy of the estimated static performances (e.g., differential nonlinearity and integral nonlinearity). In this paper, we exploit another feature of the redundancy to cancel out the effect of noise. Experimental measurements demonstrate that this reduced code testing technique estimates the static performances with an accuracy equivalent to the standard histogram technique. Only 6% of the codes need to be considered, which represents a very significant reduction in test time.
Citations: 13
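For reference, the standard histogram technique that the paper above uses as its accuracy baseline can be sketched as follows (a minimal version assuming an ideal uniform ramp input; the code-edge and offset details of a real linearity test are omitted):

```python
import numpy as np

def dnl_inl_from_histogram(codes, n_bits=11):
    """Standard histogram method: with a uniform ramp input, each code's hit
    count is proportional to its width, so DNL is the normalized deviation
    of that count from the average, and INL is the running sum of DNL
    (both expressed in LSB)."""
    hist = np.bincount(codes, minlength=2 ** n_bits).astype(float)
    hist = hist[1:-1]                  # end codes absorb out-of-range samples
    ideal = hist.sum() / len(hist)     # expected count for a 1-LSB-wide code
    dnl = hist / ideal - 1.0
    inl = np.cumsum(dnl)
    return dnl, inl

# An ideal 11-bit converter (every code equally wide) shows zero DNL and INL.
codes = np.repeat(np.arange(2 ** 11), 16)
dnl, inl = dnl_inl_from_histogram(codes)
```

The reduced code technique measures only a small subset of these code widths directly and infers the rest from the pipeline's redundancy, which is where the 94% reduction in measured codes comes from.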
Distributed dynamic partitioning based diagnosis of scan chain
Pub Date : 2013-04-29 DOI: 10.1109/VTS.2013.6548916
Yu Huang, Xiaoxin Fan, Huaxing Tang, Manish Sharma, Wu-Tung Cheng, B. Benware, S. Reddy
The memory footprint of diagnosis grows with design size, to the point that diagnosis throughput for given computational resources becomes a bottleneck in volume diagnosis. In this paper, we propose a scan chain diagnosis flow, based on dynamic design partitioning and a distributed diagnosis architecture, that improves diagnosis throughput by over an order of magnitude.
Citations: 12
Improving test generation by use of majority gates
Pub Date : 2013-04-29 DOI: 10.1109/VTS.2013.6548883
P. Wohl, J. Waicukauski
Scan testing and scan compression have become key components for reducing test cost. We present a novel technique to increase automatic test pattern generation (ATPG) effectiveness by identifying and exploiting instances of increasingly common “majority gates”. Test generation is modified so that better decisions are made and care bits can be reduced. Consequently, test coverage, pattern count and CPU time can all be improved. The new method requires no hardware support and can be applied to any ATPG system, although scan compression methods benefit the most.
Citations: 0
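The abstract above does not detail why majority gates reduce care bits; the intuition (sketched below with invented helpers, not the paper's algorithm) is that forcing the output of an n-input majority gate needs assignments on only a bare majority of its inputs, leaving the rest as don't-cares for compression to exploit:

```python
from itertools import combinations

def majority(bits):
    """n-input majority gate (n odd): output 1 iff more than half the inputs are 1."""
    return int(sum(bits) > len(bits) // 2)

def minimal_care_assignments(n_inputs, want):
    """All smallest input assignments (index -> value) that force the output
    to `want`.  For a 3-input gate only 2 of the 3 inputs must be assigned;
    the remaining input stays an ATPG don't-care."""
    need = n_inputs // 2 + 1               # a bare majority pins the output
    value = 1 if want else 0
    return [dict.fromkeys(idx, value) for idx in combinations(range(n_inputs), need)]

options = minimal_care_assignments(3, want=1)   # three choices, two care bits each
```

Giving ATPG several equally small assignment choices per gate is also what enables the better decision-making the abstract alludes to.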
A framework for low overhead hardware based runtime control flow error detection and recovery
Pub Date : 2013-04-29 DOI: 10.1109/VTS.2013.6548908
A. Chaudhari, Junyoung Park, J. Abraham
Transient errors during execution of a process on a processor can lead to serious system failures or security lapses. It is necessary to detect, and if possible correct, these errors before any damage is done to the system. Of the many approaches, monitoring an application's control flow at runtime is one technique for detecting transient errors during execution. Although promising, the cost of implementing control flow checks in software has been prohibitively high, so the approach is not widely used in practice. In this paper we describe a hardware-based control flow monitoring technique capable of detecting errors in both the control flow and the instruction stream being executed on a processor. Our technique achieves high control flow error detection coverage (99.98%) and can quickly recover from an error, making it resilient to transient control flow errors. It imposes an extremely low performance overhead (~1%) and a reasonable area cost (<6%) on the host processor. The framework for runtime monitoring of control flow described in this paper can be extended to efficiently monitor and detect any transient errors in the execution of instructions on a processor.
Citations: 10
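The paper's checker is implemented in hardware; purely to illustrate the principle, a software model can hold the program's static control-flow graph and flag any basic-block transition the graph does not allow (the block names and CFG here are invented):

```python
# Toy model of runtime control-flow checking: any basic-block transition
# absent from the static control-flow graph is flagged as a transient
# control-flow error, at which point recovery (e.g. re-execution from a
# known-good point) would be triggered.
CFG = {                       # block id -> set of legal successor blocks
    "entry": {"loop"},
    "loop":  {"loop", "exit"},
    "exit":  set(),
}

class ControlFlowMonitor:
    def __init__(self, cfg, start):
        self.cfg, self.current, self.errors = cfg, start, 0

    def observe(self, nxt):
        """Check one observed block transition against the CFG."""
        if nxt not in self.cfg.get(self.current, set()):
            self.errors += 1  # illegal edge: control-flow error detected
        self.current = nxt

good = ControlFlowMonitor(CFG, "entry")
for block in ("loop", "loop", "exit"):   # a legal trace
    good.observe(block)

bad = ControlFlowMonitor(CFG, "entry")
for block in ("loop", "exit", "loop"):   # "exit" -> "loop" is not a CFG edge
    bad.observe(block)
```

Doing this comparison in dedicated hardware, alongside checks on the fetched instruction stream, is what keeps the reported overhead near 1%.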
Innovative practices session 10C: Delay test
Pub Date : 2013-04-29 DOI: 10.1109/VTS.2013.6548938
P. Pant, M. Amodeo, S. Vora, J. E. Colburn
The importance of testing for timing related defects continues to increase as devices are manufactured at ever smaller geometries and IO frequencies have increased to the point that production testers can no longer provide stored response vectors at-speed. As a result, it is increasingly important to have high quality tests for delay defects to bring down the product's DPPM levels (defective parts per million) shipped to end customers. Moreover, during the design characterization phase, these same tests are also used for isolating systematic slow paths in the design (speedpaths). With the inexorable march toward lower power SKUs, there remains a critical need to find and fix these limiting speedpaths prior to revenue shipments. Over the years, testing for delay defects has morphed from pure functional vectors that try to exercise a device as it would be exercised in an end-user system, to intermediate methods that load assembly code into on-chip caches and execute it at speed, to completely structural methods that utilize scan DFT and check delays at the signal and gate level without resorting to any functional methods at all. This innovative practices session includes three presentations that cover a wide range of topics related to delay testing. The first presentation, from Cadence, outlines an approach to at-speed coverage that utilizes synergies between clock generation logic, DFT logic and ATPG tools. The solution leverages On-Product Clock Generation logic (OPCG) for high-speed testing and is compatible with existing test compression DFT. The additional DFT proposed enables simultaneous test of multiple clock domains and the inter-domain interfaces, while accounting for timing constraints between them. The ATPG clocking sequences are automatically generated by analyzing the clock domains and interfaces, and this information is used to optimize the DFT structures and in the ATPG process.
The second presentation discusses the transformation in Intel's microprocessor speedpath characterization world over the last few generations, going from pure functional content to scan based structural content. It presents a new “trend based approach” for efficient speedpath isolation, and also delves into a comparison of the effectiveness and correlation of functional vs. structural test patterns for speedpath debug. The third presentation covers the differences between the various delay defect models, namely transition delay, path delay and small-delay, and the pros and cons of each. It goes on to describe new small delay defect ATPG flows implemented at Nvidia that are designed to balance the test generation simplicity of transition delay test patterns and the defect coverage provided by path delay test patterns. These flows enable the small delay defect test patterns to meet the test quality, delivery schedules and ATPG efficiency requirements set by a product's test cost goals.
Citations: 1
Finding best voltage and frequency to shorten power-constrained test time
Pub Date : 2013-04-29 DOI: 10.1109/VTS.2013.6548882
P. Venkataramani, S. Sindia, V. Agrawal
In a digital test, supply voltage (VDD), clock frequency (ftest), peak power (PMAX) and test time (TT) are related parameters. For a given limit PMAX = PMAXfunc, normally set by the functional specification, we find the optimum VDD = VDDopt and ftest = fopt that minimize TT. A solution is derived analytically from the technology-dependent characterization of semiconductor devices. It is shown that at VDDopt the peak power consumed by any test cycle just equals PMAXfunc, and ftest is the fastest clock that the critical path at VDDopt will allow. The paper demonstrates how the test parameters can be obtained numerically from MATLAB, or experimentally with bench test equipment such as National Instruments' ELVIS. This optimization can cut the test time of ISCAS'89 benchmarks in 180nm CMOS in half.
引用次数: 11
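The V_DD/f_test trade-off described in this abstract can be sketched numerically. The following is a minimal illustration, not the paper's method: the alpha-power-law fit constants, effective capacitance, and voltage range below are invented for demonstration, not the paper's 180nm characterization data.

```python
# Hypothetical sketch of power-constrained test-time minimization.
# All device constants below are illustrative assumptions.
C_EFF = 1.0e-9                    # switched capacitance of worst test cycle (F), assumed
VTH, K, ALPHA = 0.5, 4.0e8, 1.3   # alpha-power-law fit constants, assumed

def f_max(vdd):
    """Fastest clock the critical path allows at vdd (alpha-power-law delay model)."""
    return K * (vdd - VTH) ** ALPHA / vdd

def peak_power(vdd, f):
    """Peak dynamic power of a test cycle: P = C_eff * VDD^2 * f."""
    return C_EFF * vdd ** 2 * f

def optimize(p_max, n_cycles=1_000_000):
    """Grid-search VDD; at each point clock as fast as the critical path or the
    power budget allows, and keep the (VDD, f) pair that minimizes test time."""
    best = None
    for i in range(1001):
        vdd = 0.8 + 0.001 * i                             # scan 0.8 V .. 1.8 V
        f = min(f_max(vdd), p_max / (C_EFF * vdd ** 2))   # path- or power-limited
        tt = n_cycles / f
        if best is None or tt < best[2]:
            best = (vdd, f, tt)
    return best
```

At the returned optimum the two limits coincide, so the peak power of the worst cycle sits right at the budget, matching the abstract's observation that at V_DDopt the peak power exactly equals P_MAXfunc while f_test is the fastest the critical path allows.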
Innovative practices session 6C: Latest practices in test compression
Pub Date : 2013-04-29 DOI: 10.1109/VTS.2013.6548914
J. E. Colburn, K. Chung, H. Konuk, Y. Dong
Test compression has become a requirement for many designs to meet the required test quality levels in reasonable test time and at acceptable test cost. This session will cover some of the tradeoffs and options available from the broad spectrum of test compression solutions. The first talk will address the difficulty, when testing large numbers of logic blocks and processor cores, of maintaining high test quality without a corresponding increase in test cost stemming from the need to allocate substantially more pins for digital test. Simply adding more chip-level pins for testing conflicts with packaging constraints and can potentially undermine other cost-saving techniques that rely on using fewer pins, such as multi-site testing. What is needed instead is a DFT strategy optimized for complex SOC designs that use multicore processors — a strategy in which the architecture and automation elements work in tandem to lower test cost without compromising test quality or significantly increasing automatic test pattern generation (ATPG) runtime. This presentation highlights an optimized DFT architecture, referred to as "shared I/O" of DFTMAX, a synthesis-based test solution that has been used successfully in multicore processor designs as well as complex SOC designs. Using this approach, they were able to reduce scan test pins significantly with similar or even fewer ATPG patterns, without compromising test coverage, and achieve over a 2X reduction in wafer-level scan test time. The second talk will present many DFT techniques to reduce test time and improve coverage in the context of core wrapping. Some of these methods include using external scan chains with separate compression logic inside each place-and-route block instead of having "chip-top" scan compression logic for all external scan chains from different place-and-route blocks. In addition, some tradeoffs of using dynamic launch-on-shift/launch-on-capture (LOS/LOC) instead of static will be covered. Some other methods will be covered for preventing decompressor logic from feeding X'es during launch-on-shift test patterns, along with the benefits of control test-points for reducing ATPG vector counts. The final presentation will cover various methodologies for reducing the test data volume on different chips. Some work toward achieving a higher compression ratio in the future will also be discussed. As with any good engineering solution, there are constraints and tradeoffs that also need to be considered with these choices.
Citations: 0
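As a rough illustration of why scan compression and shared-I/O pin reduction shorten wafer-level test, shift time scales with the longest scan chain, and an on-chip decompressor lets the same few pins drive many short internal chains. The model and numbers below are generic assumptions for illustration, not figures from the talks:

```python
# Back-of-the-envelope scan shift-time model (capture cycles ignored).
def scan_test_time(patterns, flops, chains, f_shift):
    """Each pattern shifts through the longest chain once."""
    chain_len = -(-flops // chains)           # ceil division: longest chain length
    return patterns * chain_len / f_shift     # seconds

# Assumed design: 1M scan flops, 10k patterns, 50 MHz shift clock.
uncompressed = scan_test_time(10_000, 1_000_000, chains=8, f_shift=50e6)
# With on-chip decompression the same pins feed 800 short internal chains.
compressed = scan_test_time(10_000, 1_000_000, chains=800, f_shift=50e6)
print(uncompressed / compressed)              # ratio of longest chain lengths
```

In practice compression inflates the pattern count somewhat and adds capture and overhead cycles, which is why reported gains (such as the 2X wafer-level reduction above) are far smaller than the raw chain-length ratio.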
A multi-faceted approach to FPGA-based Trojan circuit detection
Pub Date : 2013-04-29 DOI: 10.1109/VTS.2013.6548925
Michael Patterson, Aaron Mills, Ryan A. Scheel, Julie Tillman, Evan Dye, Joseph Zambreno
Three general approaches to detecting Trojans embedded in FPGA circuits were explored in the context of the 2012 CSAW Embedded Systems Challenge: functional testing, power analysis, and direct analysis of the bitfile. These tests were used to classify a set of 32 bitfiles which include Trojans of an unknown nature. The project is a step towards developing a framework for Trojan-detection which leverages the strengths of a variety of testing techniques.
Citations: 4
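The abstract describes fusing three detector families (functional testing, power analysis, bitfile analysis) to classify bitfiles; one natural fusion rule is a simple majority vote. The sketch below is purely illustrative — the detector internals are out of scope and the sample verdicts are invented, not the authors' data:

```python
# Majority vote over three hypothetical detector verdicts per bitfile.
def classify(verdicts):
    """Flag a bitfile as Trojan-infected when at least 2 of 3 detectors agree."""
    return sum(verdicts.values()) >= 2

# Invented example verdicts for two bitfiles.
bitfiles = {
    "design_07.bit": {"functional": True,  "power": True,  "bitfile": False},
    "design_12.bit": {"functional": False, "power": False, "bitfile": True},
}
flagged = [name for name, v in bitfiles.items() if classify(v)]
print(flagged)   # only the bitfile tripping two detectors is flagged
```

A vote like this trades recall for precision: a Trojan visible to only one analysis (as a single anomalous power trace might be) would slip through, which is why the paper's framework leverages the complementary strengths of all three techniques rather than any single verdict.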