Pub Date: 2014-10-01, DOI: 10.1109/TEST.2014.7035323
Y. Zorian
Due to their spatial structure, FinFETs offer several advantages, including controlled fin body thickness, low threshold-voltage variation, reduced variability and lower operating voltage. However, because of this special transistor structure, modern FinFET-based memories can exhibit defects that require new test and repair solutions, and existing approaches are usually unable to provide an adequate level of defect coverage and yield for FinFET memories. This presentation will discuss the design complexity, defect coverage and yield challenges of FinFET-based memories and introduce new methods to address them, including new design techniques, new FinFET-specific defects and their coverage, as well as yield-optimization infrastructure. Based on the obtained results, the presentation will also cover the synthesis of test algorithms for detection and diagnosis of defects in FinFET memories, and built-in self-test infrastructure with highly efficient test and repair capability to ensure adequate yield improvement for FinFET-based memories. The presented methodology is validated by silicon data from multiple FinFET-based embedded memory technologies.
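Memory test-algorithm synthesis of the kind described above commonly builds on March-style sequences. As a hedged illustration (not the presentation's actual algorithms), here is a minimal March C- run over a simulated memory array; the fault model and helper names are assumptions for the sketch:

```python
# Hypothetical sketch of a March C- style test over a simulated memory.
# March C-: {up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); down(r0)}

def march_c_minus(mem_size, read, write):
    """Run March C- over addresses [0, mem_size); return failing addresses."""
    fails = set()

    def check(addr, expected):
        if read(addr) != expected:
            fails.add(addr)

    up = range(mem_size)
    down = range(mem_size - 1, -1, -1)

    for a in up: write(a, 0)                  # M0: up(w0)
    for a in up: check(a, 0); write(a, 1)     # M1: up(r0, w1)
    for a in up: check(a, 1); write(a, 0)     # M2: up(r1, w0)
    for a in down: check(a, 0); write(a, 1)   # M3: down(r0, w1)
    for a in down: check(a, 1); write(a, 0)   # M4: down(r1, w0)
    for a in down: check(a, 0)                # M5: down(r0)
    return sorted(fails)

# Simulated 16-cell memory with a stuck-at-1 fault at address 5.
cells = [0] * 16
def read(a): return 1 if a == 5 else cells[a]
def write(a, v): cells[a] = v

print(march_c_minus(16, read, write))  # -> [5]
```

Real FinFET-specific algorithms would extend such sequences with additional stress conditions for the new defect types the talk targets.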
Title: Design, test & repair methodology for FinFET-based memories
Pub Date: 2014-10-01, DOI: 10.1109/TEST.2014.7035309
G. Brat
Formal methods are seen as a cheaper and more exhaustive alternative to the expensive testing process currently used in the aviation industry. However, aviation systems are growing more and more complex, so formal methods have no hope of addressing these systems unless some compositional argument is made. In this talk, I will present the results of a NASA-led effort to demonstrate the use of formal methods and compositional verification for the V&V of safety requirements of a flight-critical system. The talk will show how the formal arguments made at the component level are composed into a system-level argument. The study is done on Simulink models of a quad-redundant flight control system for a transport-class airplane.
Title: Compositional verification using formal analysis for a flight critical system
Pub Date: 2014-10-01, DOI: 10.1109/TEST.2014.7035322
K. Bowman, A. Park, V. Narayanan, Francois Atallah, A. Artieri, S. Yoon, Kendrick Yuen, David Hansquine
Circuit techniques for reducing the minimum supply voltage (VMIN) of last-level and intermediate static random-access memory (SRAM) caches enhance processor energy efficiency. For the first time, at the 16nm technology node, projections for a high-density 6-transistor SRAM bit cell indicate that the VMIN of a 4Mb or larger cache exceeds the maximum supply voltage (VMAX) for reliability. Thus, circuit techniques for cache VMIN reduction are transitioning from an energy-efficient solution to a requirement for cache functionality. Traditionally, error-correcting codes (ECC) such as single-error correction, double-error detection (SECDED) aim to protect cache operation from radiation-induced soft errors. Moreover, during the qualification of a system-on-chip (SoC) processor, test engineers monitor the rate of correctable cache errors from SECDED to observe the on-die interactions between integrated components (e.g., CPU, GPU, DSP). This presentation highlights the opportunity to exploit ECC to reduce the cache VMIN while simultaneously providing coverage for radiation-induced soft errors. Silicon test-chip measurements from a 7Mb data cache in a 20nm technology demonstrate a VMIN reduction of 19% from SECDED. In addition, silicon measurements provide a salient insight: only 0.12% of the cache words contain an error when operating at the cache VMIN with SECDED. Therefore, SECDED improves VMIN by 19% while maintaining 99.88% coverage for radiation-induced soft errors. In applying SECDED for a lower cache VMIN, the rate of correctable errors increases exponentially, eliminating a useful metric for on-die observability. The presentation concludes by offering alternative solutions for on-die observability.
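The SECDED mechanism the abstract builds on can be illustrated with the textbook Hamming(7,4) code extended by an overall parity bit. This toy 4-bit version (not the 7Mb cache's actual ECC) shows how a single flipped bit is corrected via the syndrome, while two flips leave overall parity even with a nonzero syndrome and are therefore detected:

```python
# Toy SECDED: Hamming(7,4) plus an overall parity bit (8 bits total).
# Single-bit errors are corrected from the syndrome; double-bit errors
# yield a nonzero syndrome with even overall parity, so they are flagged.

def encode(d):                      # d: 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    cw = [p1, p2, d[0], p3, d[1], d[2], d[3]]
    cw.append(sum(cw) % 2)          # overall parity bit
    return cw

def decode(cw):
    c = cw[:7]
    s = ((c[0] ^ c[2] ^ c[4] ^ c[6])
         | (c[1] ^ c[2] ^ c[5] ^ c[6]) << 1
         | (c[3] ^ c[4] ^ c[5] ^ c[6]) << 2)   # syndrome = error position
    parity_ok = sum(cw) % 2 == 0
    if s == 0 and parity_ok:
        status = "ok"
    elif not parity_ok:             # odd parity: at most one flip, fix it
        if s:
            c[s - 1] ^= 1
        status = "corrected"
    else:                           # even parity, nonzero syndrome: 2 flips
        return None, "double-error detected"
    return [c[2], c[4], c[5], c[6]], status

cw = encode([1, 0, 1, 1])
bad = cw[:]; bad[4] ^= 1                       # single flip
print(decode(bad))    # -> ([1, 0, 1, 1], 'corrected')
worse = cw[:]; worse[1] ^= 1; worse[4] ^= 1    # double flip
print(decode(worse))  # -> (None, 'double-error detected')
```

At a lowered VMIN, the "bad" case above becomes common rather than rare, which is exactly why the correctable-error rate stops being a useful observability metric.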
Title: Trading-off on-die observability for cache minimum supply voltage reduction in system-on-chip (SoC) processors
Pub Date: 2014-10-01, DOI: 10.1109/TEST.2014.7035283
B. Curtis
End-to-end security is becoming a prerequisite of the Internet of Things (IoT). Data must be managed securely at generation, in flight and at rest to prevent critical enterprise or personal data from being intercepted. Privacy becomes paramount as our lives and health become increasingly digital, and devices must evolve to deliver security and robustness while pricing remains constrained. This talk will highlight the security requirements of the IoT, as outlined by the U.S. Department of Homeland Security and the UK Centre for the Protection of National Infrastructure, to counter the emergence of threats ranging from advanced persistent software threats to physical tampering and side-channel attacks. Following the definition of the attack threats, we will then establish the definition of advanced device security features, system implementation requirements and testability criteria to develop Security by Design within the Internet of Things.
Title: Delivering security by design in the Internet of Things
Pub Date: 2014-10-01, DOI: 10.1109/TEST.2014.7035284
P. Bose
Modern processor chips and associated systems are generally equipped with dynamic power managers, implemented as sense-control-actuate feedback control systems. In response to sensed metrics of power and/or performance, the controller actuates control knobs (e.g., voltage and/or frequency) so that some target metric (e.g., power consumption or a power-performance efficiency metric) tracks a set (reference) value as closely as feasible. This is true even if the system lacks a dedicated, firmware-driven microcontroller to aid in such dynamic resource management; some systems have hardwired control logic implementing the same or a similar feedback control algorithm. Regardless of how it is implemented, such a dynamic feedback control system can be “fooled” into an inappropriate (or wrong) state or action under certain conditions or properties of the workload. The workload conditions that trigger such undesirable actions may occur spontaneously (without user intent), or they may be the result of malicious intent. Regardless of intent, such “virus” workloads are of concern because they can make the system unstable or even cause a large power overrun (or performance shortfall). In an extreme scenario, the system may incur permanent damage, requiring expensive repair. In this talk, we look at specific examples of such potential reliability-cum-security “holes” in current power-managed systems. We then propose system-level mitigation approaches to combat this problem. The underlying system architectural solution strategies are referred to here as “Energy-Secure System Architectures” (ESSA).
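The sense-control-actuate loop described above can be sketched with a purely proportional controller nudging a frequency knob so that modeled power tracks a setpoint. The cubic power model, gains and operating ranges are assumptions for illustration, not any real processor's characteristics:

```python
# Hedged sketch of a dynamic power manager's feedback loop (illustrative
# power model and gains; not a real controller's parameters).

def power_model(freq_ghz, k=1.2):
    return k * freq_ghz ** 3               # crude dynamic-power proxy (watts)

def control_loop(target_w, steps=50, gain=0.05, f0=1.0):
    f = f0
    for _ in range(steps):
        error = target_w - power_model(f)  # sense: compare to the set value
        f += gain * error                  # actuate: adjust the frequency knob
        f = min(max(f, 0.5), 3.0)          # clamp to the legal operating range
    return f

f = control_loop(target_w=10.0)
print(round(power_model(f), 2))  # settles near the 10 W setpoint
```

A workload whose power draw swings faster than this loop can react is exactly the kind of input that can "fool" such a controller into oscillation or overshoot.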
Title: Energy-secure computer architectures
Pub Date: 2014-10-01, DOI: 10.1109/TEST.2014.7035316
Bianca Schroeder
The reliability and availability of today's large-scale systems hinge on the reliability of the often millions of hardware components they comprise. Before deployment, devices undergo rigorous testing as part of the design and manufacturing process to assure they meet reliability expectations. In this talk we will look at the other half of the story: the post-deployment life of devices, once they enter production use in the field. Based on field data from large-scale production systems, we will study different aspects of hardware reliability in the wild, with a focus on DRAM DIMMs, and show that life in the real world can be quite different from life in the lab.
Title: A tale of two lives: Under test and in the wild
Pub Date: 2014-10-01, DOI: 10.1109/TEST.2014.7035346
Harry H. Chen
In the hyper-competitive consumer mobile product space, where aggressive schedules, mass volume and short life-cycles are the norm, system-level testing (SLT) plays a key role in achieving time-to-market (TTM) goals. But SLT also impedes time-to-volume (TTV) and cuts into profit margins. This talk will describe our recent experimental research to establish links between post-silicon SLT failures and production structural patterns. Operating on-chip-clocked scan patterns under non-destructive stress conditions to force incorrect responses from all devices, we apply machine learning to discern SLT failure signatures in noisy scan output data. One goal of the work is to significantly reduce SLT effort and cost, thus achieving early TTV and increased profitability. Other possibilities include diagnosis to identify systematically vulnerable regions of the design for selective test targeting with more thorough patterns.
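As an illustrative stand-in for the machine-learning step (the talk does not disclose its actual models), per-device scan-failure signatures can be treated as bit-vectors of failing scan cells and classified by nearest centroid of previously labeled devices; all names and data here are toy assumptions:

```python
# Toy nearest-centroid classifier over scan-failure signature bit-vectors.

def centroid(vectors):
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def classify(sig, centroids):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sig, centroids[label]))

# Hypothetical labeled data: which scan cells failed under stress per device.
train = {
    "passes_slt": [[0, 0, 1, 0], [0, 0, 0, 0], [0, 1, 0, 0]],
    "fails_slt":  [[1, 1, 0, 1], [1, 1, 1, 1], [1, 0, 1, 1]],
}
cents = {label: centroid(vs) for label, vs in train.items()}
print(classify([1, 1, 1, 0], cents))  # -> fails_slt
```

The real problem adds the noise the abstract mentions, which is why a learned signature model is needed rather than exact matching.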
Title: The case for analyzing system level failures using structural patterns
Pub Date: 2014-10-01, DOI: 10.1109/TEST.2014.7035334
T. M. Mak
The silicon interposer is the new PCB: silicon dies from different process technologies (logic, DRAM, analog, etc.) can be bonded onto it and integrated into the same package. A silicon interposer has microbumps on one side and flip-chip (C4) bumps on the other, and signals on one side are connected to the other through TSVs (Through-Silicon Vias). Die-to-die interconnects are just wires from one microbump to another, without connecting to any C4 on the bottom side. Essentially, these are tiny PCBs whose dimensions have shrunk by 100x. Conceptually, a PCB is just interconnect, so testing is really just open/short and perhaps leakage, that is, ONLY if you can connect (or probe) to the microbumps. However, at 40–50um pitch, microbumps sit at almost half the pitch of the most advanced flip-chip bump technology, with tens of thousands of them in a typical application. The tight pitch and sheer quantity of microbumps will drive new probe technologies (read: more expensive) and complex test optimization on the ATE side. There are also no transistors (nor diodes) on this new PCB, so all you learned about DFT is out the window. At the same time, testing is expected to cost nothing (as yield should be high). Some in the industry have suggested a “Pretty Good Interposer”, testing only for systematics and not for defects. Is “pretty good” good enough to stand in for “known good”? It all depends on what you put on these interposers, and the potential yield loss can kill a product's viability. This talk will try to elaborate on these challenges and to propose new test methods for these new, miniature PCBs.
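The open/short view of interposer test can be sketched abstractly: given the intended netlist (each net is the set of microbumps it should tie together) and the set of bump pairs where continuity was actually measured, flag missing intra-net links as opens and inter-net contact as shorts. The data structures and the daisy-chain probing assumption are hypothetical, not from the talk:

```python
# Hypothetical open/short check for interposer die-to-die interconnect.

def check_interconnect(netlist, measured):
    """netlist: {net_name: set_of_bumps}; measured: set of frozenset pairs.
    Assumes each net is probed as a daisy chain of its sorted bumps; real
    probe plans are far more constrained by 40-50um pitch access."""
    net_of = {b: n for n, bumps in netlist.items() for b in bumps}
    opens, shorts = [], []
    for name, bumps in netlist.items():
        chain = sorted(bumps)
        for a, b in zip(chain, chain[1:]):          # expected continuity
            if frozenset((a, b)) not in measured:
                opens.append((name, a, b))          # missing link -> open
    for pair in measured:
        a, b = sorted(pair)
        if net_of.get(a) != net_of.get(b):          # contact across nets
            shorts.append((a, b))                   # -> short
    return opens, shorts

nets = {"D0": {"u1", "u2"}, "D1": {"u3", "u4"}}     # two 2-bump nets
meas = {frozenset(("u1", "u2")),                    # D0 intact
        frozenset(("u2", "u3"))}                    # stray bridge D0<->D1
print(check_interconnect(nets, meas))  # -> ([('D1', 'u3', 'u4')], [('u2', 'u3')])
```

The hard part the talk addresses is obtaining `measured` at all, since probing tens of thousands of fine-pitch microbumps is exactly what current technology struggles with.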
Title: Interposer test: Testing PCBs that have shrunk 100x
Pub Date: 2014-10-01, DOI: 10.1109/TEST.2014.7035320
B. Lee
Thermal characteristics are among the key specifications of mobile SoC products. Typically, process scaling tends to improve global thermal characteristics through power reduction; however, it also increases local hot-spot issues due to higher power density. Moreover, FinFET device technology introduces a new thermal problem called “self-heating.” Therefore, Samsung is considering thermal issues comprehensively, from design to test, in order to ensure both product yield and quality. In this talk, we will address three key thermal problems: self-heating in FinFET devices, on-chip thermal hot-spots, and set-level thermal-induced performance degradation. To tackle these obstacles, the design and test flows were enhanced to prevent and screen for thermal problems, and they were validated in the first mobile SoC product in a 14nm FinFET process.
Title: Thermal-aware mobile SoC design and test in 14nm finfet technology
Pub Date: 2014-10-01, DOI: 10.1109/TEST.2014.7035282
S. Trimberger
FPGAs have grown from simple logic replacements to fully programmable SoCs, with multi-core CPU subsystems, a broad spectrum of peripherals, hundreds of thousands of gates of programmable logic and high-speed multi-gigabit transceivers. As the complexity of the underlying hardware has grown, so has the value of the applications built on it and the data handled by it. Traditional FPGA bitstream security has been enhanced to address these greater security requirements. This paper presents an overview of the security features of the Zynq All-Programmable SoC. The secure boot process includes asymmetric and symmetric authentication as well as symmetric encryption to protect software and programmable hardware during programming. During operation, the hardware can disable test ports, monitor on-chip power and temperature, and detect tampering with configuration data. ARM TrustZone is integrated through the AXI buses into both the processor and the programmable logic subsystems.
Title: Security solutions in the first-generation Zynq All-Programmable SoC