Experiments and analysis to characterize logic state retention limitations in 28nm process node
Pub Date: 2013-04-29 | DOI: 10.1109/VTS.2013.6548879
S. Dasnurkar, A. Datta, M. Abu-Rahma, Hieu Nguyen, Martin Villafana, Hadi Rasouli, Sean Tamjidi, M. Cai, S. Sengupta, P. Chidambaram, Raghavan Thirumala, Nikhil Kulkarni, Prasanna Seeram, Prasad Bhadri, P. Patel, S. Yoon, E. Terzioglu
Mobile devices spend most of their time in standby mode, while the features and functionality supported by each newer model keep increasing. With the widespread adoption of multitasking in mobile devices, retaining the current status and data for all active tasks is critical for user satisfaction. Extending battery life in portable mobile devices therefore requires using the minimum possible energy in standby mode while retaining the present state of all active tasks. This paper, for the first time, explains the low-voltage data-retention failure mechanism in flops and analyzes the impact of design and process parameters on data retention failure. The statistical nature of data retention failure is established and validated with extensive Monte Carlo simulations across various process corners. Finally, silicon measurements from several 28nm industrial mobile chips are presented, showing good correlation with the retention failure predictions from simulation.
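As a rough illustration of the statistical treatment the abstract describes, the following sketch estimates a retention-failure probability by Monte Carlo sampling of a hypothetical per-flop minimum retention voltage with Gaussian variation. All names and numbers (nominal retention voltage, sigma, the swept standby levels) are illustrative assumptions, not values from the paper.

```python
import random

def mc_retention_failure(v_standby, n_flops=100_000, v_ret_nominal=0.35,
                         sigma_vt=0.03, seed=0):
    """Hypothetical Monte Carlo sketch: each flop's minimum retention voltage
    is modeled as a nominal value plus Gaussian variation (local mismatch).
    A flop fails to retain its state when the standby supply drops below its
    own minimum retention voltage. Returns the estimated failure probability."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_flops):
        v_ret = rng.gauss(v_ret_nominal, sigma_vt)  # per-flop retention limit
        if v_standby < v_ret:
            failures += 1
    return failures / n_flops

# Sweep the standby voltage to see how the failure probability falls off,
# mimicking a retention-voltage shmoo over a statistical flop population.
for v in (0.30, 0.35, 0.40, 0.45, 0.50):
    print(f"V_standby = {v:.2f} V -> P(fail) ~ {mc_retention_failure(v):.4f}")
```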
{"title":"Experiments and analysis to characterize logic state retention limitations in 28nm process node","authors":"S. Dasnurkar, A. Datta, M. Abu-Rahma, Hieu Nguyen, Martin Villafana, Hadi Rasouli, Sean Tamjidi, M. Cai, S. Sengupta, P. Chidambaram, Raghavan Thirumala, Nikhil Kulkarni, Prasanna Seeram, Prasad Bhadri, P. Patel, S. Yoon, E. Terzioglu","doi":"10.1109/VTS.2013.6548879","DOIUrl":"https://doi.org/10.1109/VTS.2013.6548879","url":null,"abstract":"Mobile devices spend most of the time in standby mode. Supported features and functionalities are increasing in each newer model. With the wide spread adaptation of multitasking in mobile devices, retaining current status and data for all active tasks is critical for user satisfaction. Extending battery life in portable mobile devices necessitates the use of minimum possible energy in standby mode while retaining present states for all active tasks. This paper for the first time, explains the low voltage data-retention failure mechanism in flops. It analyzes the impact of design and process parameters on the data retention failure. Statistical nature of data retention failure is established and validated with extensive Monte-Carlo simulations across various process corners. Finally, silicon measurement from several 28nm industrial mobile chips is presented showing good correlation of retention failure prediction from simulation.","PeriodicalId":138435,"journal":{"name":"2013 IEEE 31st VLSI Test Symposium (VTS)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114198876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the investigation of built-in tuning of RF receivers using on-chip polyphase filters
Pub Date: 2013-04-29 | DOI: 10.1109/VTS.2013.6548932
F. Haddad, W. Rahajandraibe, H. Aziza, K. Castellani-Coulié, J. Portal
This paper presents a built-in tuning technique for radio-frequency receivers using on-chip polyphase filters. Auto-calibration of the filter resistance values, based on Design-of-Experiments (DOE) methodology, is proposed. The approach investigates process and temperature monitoring of the frequency band, the image-rejection ratio (IRR), and the I/Q accuracy, resulting in robust and low-cost solutions.
{"title":"On the investigation of built-in tuning of RF receivers using on-chip polyphase filters","authors":"F. Haddad, W. Rahajandraibe, H. Aziza, K. Castellani-Coulié, J. Portal","doi":"10.1109/VTS.2013.6548932","DOIUrl":"https://doi.org/10.1109/VTS.2013.6548932","url":null,"abstract":"This paper presents a built-in tuning technique in radiofrequency receivers using on-chip polyphase filters. Auto-calibration of the filter resistance values, based on Design-Of-Experiment (DOE) methodology, is proposed. This approach investigates process and temperature monitoring of the frequency band, the image-rejection-ratio (IRR) and the I/Q-accuracy resulting in robust and low-cost solutions.","PeriodicalId":138435,"journal":{"name":"2013 IEEE 31st VLSI Test Symposium (VTS)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115051461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
New topic session 7B: Challenges and directions for ultra-low voltage VLSI circuits and systems: CMOS and beyond
Pub Date: 2013-04-29 | DOI: 10.1109/VTS.2013.6548918
B. Kaminska, B. Courtois, M. Alioto
In this talk, a unified perspective is given on the design challenges involved in ultra-low-voltage (ULV) VLSI circuits and systems, as well as on directions for tackling them. Innovative approaches are described to improve the energy efficiency of ULV systems while maintaining adequate resiliency and yield with low overhead. Experimental results based on the testing of 65-nm to 28-nm prototypes are presented to develop a quantitative sense of the achievable benefits. Emphasis is placed on applications that require extremely high energy efficiency, such as compact portable devices and energy-autonomous VLSI systems. Although CMOS is the mainstream choice for the foreseeable future, Tunnel FETs (TFETs) are introduced as a very promising alternative that enables more aggressive voltage scaling and energy reduction. Although the technology is still immature, device-circuit co-design is shown to be critical to its success. The potential of TFETs is discussed in a general framework through representative metrics and vehicle circuits, emphasizing how design will be affected by their adoption.
{"title":"New topic session 7B: Challenges and directions for ultra-low voltage VLSI circuits and systems: CMOS and beyond","authors":"B. Kaminska, B. Courtois, M. Alioto","doi":"10.1109/VTS.2013.6548918","DOIUrl":"https://doi.org/10.1109/VTS.2013.6548918","url":null,"abstract":"In this talk, a unitary perspective is given on the design challenges involved in ultra-low voltage (ULV) VLSI circuits and systems, as well as on directions to tackle them. Innovative approaches are described to improve the energy efficiency of ULV systems, while maintaining adequate resiliency and yield with low overhead. Experimental results based on the testing of 65-nm to 28-nm prototypes are presented to develop a quantitative sense of the achievable benefits. Emphasis is given on applications that require extremely high energy efficiency, such as compact portable devices and energy-autonomous VLSI systems. Although CMOS is the mainstream choice for the foreseeable future, Tunnel FETs (TFETs) are introduced as very promising alternative that favors more aggressive voltage scaling and energy reduction. Although still immature, device-circuit co-design is shown to be critical to the success of such technology. Potential of TFETs is discussed in a general framework through representative metrics and vehicle circuits, emphasizing how design will be impacted by their adoption.","PeriodicalId":138435,"journal":{"name":"2013 IEEE 31st VLSI Test Symposium (VTS)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133209820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Special session 8B: Embedded tutorial challenges in SSD
Pub Date: 2013-04-29 | DOI: 10.1109/VTS.2013.6548921
M. d'Abreu, Amitava Mazumdar
NAND flash memory is the NVM technology of choice for solid-state storage devices. This tutorial gives an introduction to flash non-volatile memory (NVM), with NAND flash discussed in detail. For completeness, the tutorial also covers NOR flash and the respective roles of NAND and NOR flash. The second part of the tutorial focuses on issues related to reliability and endurance. Despite their advantages, NAND-based storage systems are not without challenges. For the next decade, flash storage systems are expected to provide solutions with reduced product cost, further improved read/write performance at low power consumption, and better data integrity for users. Growth in storage demand is phenomenal, which drives the adoption of more aggressive technology to keep cost reasonable. This in turn leads to smaller cells (∼10nm in geometry) and more bits per cell to improve storage density and cost. Newer physical storage media require closer system-level interaction to make reliable data storage solutions feasible. State-of-the-art error-correcting code (ECC) solutions, as well as advanced digital signal processing (DSP) techniques, will be deployed to make future flash media reliable for all data storage customers. In addition, new system solutions will give NAND-based storage systems longer endurance and better data retention.
{"title":"Special session 8B: Embedded tutorial challenges in SSD","authors":"M. d'Abreu, Amitava Mazumdar","doi":"10.1109/VTS.2013.6548921","DOIUrl":"https://doi.org/10.1109/VTS.2013.6548921","url":null,"abstract":"Nand Flash memory is the NVM technology of choice for solid storage devices. This tutorial will give an introduction to Flash Non Volatile Memory (NVM). Nand Flash will be discussed in detail. For completeness the tutorial will present Nor Flash as well as the roles for Nand and Nor Flash. The second part of the tutorial will be focused on issues related to reliability and endurance. Despite the advantages, NAND-based storage systems are not without challenges. For the next decade, Flash storage systems are expected to provide solutions with reduced product costs, further improved read/write performance at low power consumption, as well as better data integrity for the users. Growth in storage demand is phenomenal, which leads to the adoption of more aggressive technology to keep cost reasonable. This further leads to using smaller cells (∼10nm in geometry), as well as more bits/cell to improve storage density, as well as cost. Newer physical storage media requires closer system-level interaction to make the system feasible for reliable data storage solution. State-of-the-art error correcting coding (ECC) solution, as well as advanced digital signal processing (DSP) techniques, will be deployed to make future flash media reliable for all data storage customers. In addition, new system solutions will provide the NAND-based storage system longer endurance and better data retention.","PeriodicalId":138435,"journal":{"name":"2013 IEEE 31st VLSI Test Symposium (VTS)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121976951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Special session 12B: Panel post-silicon validation & test in huge variance era
Pub Date: 2013-04-29 | DOI: 10.1109/VTS.2013.6548945
Takahiro J. Yamaguchi, J. Abraham, G. Roberts, S. Natarajan, D. Ciplickas
At the 1999 ITC, Pat Gelsinger of Intel delivered an important keynote address in which he outlined the need for a low-pin-count tester with lower-performance pin electronics to meet the stringent test cost requirements of a billion-transistor machine. At the 2009 ITC, engineers from AMD presented an I/O test solution believed to meet the Intel challenge, using a cache-resident self-testing strategy combined with an external low-pin-count tester. What are the major challenges for post-silicon validation and test in the huge-variance era? Technology scaling enables us to trade amplitude resolution for time resolution. Accordingly, both internal and external tests, some of which use low-pin-count testers, are shifting from voltage-centric tests to timing-centric tests. How can time resolution be used to push timing-centric tests beyond current limitations? How can spatial resolution be realized to enhance yield with respect to both die-to-die and within-die variations? What is necessary to provide robust on-chip solutions subject to huge variations, possibly combined with an external low-pin-count tester?
{"title":"Special session 12B: Panel post-silicon validation & test in huge variance era","authors":"Takahiro J. Yamaguchi, J. Abraham, G. Roberts, S. Natarajan, D. Ciplickas","doi":"10.1109/VTS.2013.6548945","DOIUrl":"https://doi.org/10.1109/VTS.2013.6548945","url":null,"abstract":"At the 1999 ITC, Pat Gelsinger from Intel delivered an important keynote address where he outlined the need for a low-pin count tester with lower performance pin electronics to meet the stringent test cost requirements of a billion transistor machine. At the 2009 ITC, engineers from AMD came forward with an I/O test solution that is believed to meet the Intel challenge using a cash-resident self-testing strategy combined with an external low-pin count tester. How can we drive major challenges to post-silicon validation and in huge variance era? Technology scaling enables us to trade off amplitude resolution for time resolution. Accordingly, both internal and external tests, some of which use low-pin count testers, are also shifting from voltage centric tests to timing centric tests. How can time resolution be used to push the timing centric tests beyond current limitations? How can spatial resolution be realized to enhance yields in terms of both die-to-die variations and within-die variations? What is necessary to provide robust on-chip solutions subject to huge variations, which may be combined with an external low-pin count tester?","PeriodicalId":138435,"journal":{"name":"2013 IEEE 31st VLSI Test Symposium (VTS)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127440819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identification of critical variables using an FPGA-based fault injection framework
Pub Date: 2013-04-29 | DOI: 10.1109/VTS.2013.6548936
Andreas Riefert, Jörg Müller, M. Sauer, Wolfram Burgard, B. Becker
The shrinking nanometer technologies of modern microprocessors and aggressive supply voltage down-scaling drastically increase the risk of soft errors. To cope with this risk efficiently, selective hardware and software protection schemes are applied. In this paper, we propose an FPGA-based fault injection framework that is able to identify the most critical registers of an entire microprocessor. Furthermore, our framework identifies critical variables in the source code of an arbitrary application running in its native environment. We verify the feasibility and relevance of our approach by implementing a lightweight and efficient error correction mechanism that protects only the most critical parts of the system. Experimental results with state estimation applications demonstrate a significantly reduced number of critical calculation errors caused by faults injected into the processor.
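The general idea behind ranking registers by criticality can be sketched in a few lines: flip one bit at a time, rerun a reference computation, and count how often the output is corrupted. This is only a toy stand-in for the authors' FPGA-based framework; the register names and the workload are hypothetical.

```python
from collections import Counter

def run_workload(state):
    """Toy 'application': a few dependent integer operations over named registers."""
    state["r1"] = (state["r0"] * 3) & 0xFFFFFFFF
    state["r2"] = (state["r1"] + state["r0"]) & 0xFFFFFFFF
    return state["r2"]

def inject_and_rank():
    """Single-bit-flip injection campaign: flip each bit of each register in the
    initial state, rerun the workload, and count how often the output differs
    from the golden run. Higher counts indicate a more critical register."""
    golden_state = {"r0": 7, "r1": 0, "r2": 0}
    golden = run_workload(dict(golden_state))
    criticality = Counter()
    for reg in golden_state:
        for bit in range(32):
            faulty = dict(golden_state)
            faulty[reg] ^= 1 << bit          # inject a single-event upset
            if run_workload(faulty) != golden:
                criticality[reg] += 1
    return criticality.most_common()

# r0 dominates the ranking; r1 and r2 are overwritten before use, so flips
# there never propagate to the result.
print(inject_and_rank())
```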
{"title":"Identification of critical variables using an FPGA-based fault injection framework","authors":"Andreas Riefert, Jörg Müller, M. Sauer, Wolfram Burgard, B. Becker","doi":"10.1109/VTS.2013.6548936","DOIUrl":"https://doi.org/10.1109/VTS.2013.6548936","url":null,"abstract":"The shrinking nanometer technologies of modern microprocessors and the aggressive supply voltage down-scaling drastically increase the risk of soft errors. In order to cope with this risk efficiently, selective hardware and software protection schemes are applied. In this paper, we propose an FPGA-based fault injection framework which is able to identify the most critical registers of an entire microprocessor. Further-more, our framework identifies critical variables in the source code of an arbitrary application running in its native environment. We verify the feasibility and relevance of our approach by implementing a lightweight and efficient error correction mechanism protecting only the most critical parts of the system. Experimental results with state estimation applications demonstrate a significantly reduced number of critical calculation errors caused by faults injected into the processor.","PeriodicalId":138435,"journal":{"name":"2013 IEEE 31st VLSI Test Symposium (VTS)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129569504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Power supply noise control in pseudo functional test
Pub Date: 2013-04-29 | DOI: 10.1109/VTS.2013.6548881
Tengteng Zhang, D. Walker
Pseudo-functional K Longest Paths Per Gate (KLPG) test (PKLPG) is proposed to generate delay tests that exercise the longest paths while producing power supply noise similar to that seen during normal functional operation. Our experimental results show that PKLPG is more vulnerable to under-testing than the traditional two-cycle transition fault test. In this work, a simulation-based X-filling method, Bit-Flip, is proposed to maximize the power supply noise during PKLPG test. Given a set of partially-specified scan patterns, random filling is done and then an iterative procedure is invoked to flip some of the filled bits to increase the effective weighted switching activity (WSA). Experimental results on both compacted and uncompacted test patterns are presented. The results demonstrate that our method can significantly increase effective WSA while limiting the fill rate.
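A simplified sketch of the random-fill-then-flip idea described above (not the authors' implementation): the X bits of a scan pattern are first filled at random, then individual filled bits are greedily flipped and a flip is kept only if a switching-activity metric increases. The simple toggle count here is a placeholder for the paper's effective weighted switching activity.

```python
import random

def toggle_count(bits):
    """Stand-in metric for effective WSA: count adjacent toggles in the scan-in vector."""
    return sum(a != b for a, b in zip(bits, bits[1:]))

def bit_flip_fill(pattern, iterations=200, seed=1):
    """pattern: list of '0', '1', 'X'. Fill the X positions at random, then
    iteratively flip filled bits, keeping each flip only if the metric improves."""
    rng = random.Random(seed)
    x_positions = [i for i, b in enumerate(pattern) if b == "X"]
    bits = [b if b != "X" else rng.choice("01") for b in pattern]
    best = toggle_count(bits)
    if not x_positions:                      # nothing to fill
        return "".join(bits), best
    for _ in range(iterations):
        i = rng.choice(x_positions)          # only X-filled bits may be flipped
        bits[i] = "1" if bits[i] == "0" else "0"
        score = toggle_count(bits)
        if score > best:
            best = score                     # keep the flip
        else:
            bits[i] = "1" if bits[i] == "0" else "0"   # revert
    return "".join(bits), best

filled, activity = bit_flip_fill(list("1XX0X1XXX0X1"))
print(filled, activity)
```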
{"title":"Power supply noise control in pseudo functional test","authors":"Tengteng Zhang, D. Walker","doi":"10.1109/VTS.2013.6548881","DOIUrl":"https://doi.org/10.1109/VTS.2013.6548881","url":null,"abstract":"Pseudo functional K Longest Path Per Gate (KLPG) test (PKLPG) is proposed to generate delay tests that test the longest paths while having power supply noise similar to that seen during normal functional operation. Our experimental results show that PKLPG is more vulnerable to under-testing than traditional two-cycle transition fault test. In this work, a simulation-based X'Filling method, Bit-Flip, is proposed to maximize the power supply noise during PKLPG test. Given a set of partially-specified scan patterns, random filling is done and then an iterative procedure is invoked to flip some of the filled bits, to increase the effective weighted switching activity (WSA). Experimental results on both compacted and uncompacted test patterns are presented. The results demonstrate that our method can significantly increase effective WSA while limiting the fill rate.","PeriodicalId":138435,"journal":{"name":"2013 IEEE 31st VLSI Test Symposium (VTS)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129445417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Innovative practices session 3C: Harnessing the challenges of testability and debug of high speed I/Os
Pub Date: 2013-04-29 | DOI: 10.1109/VTS.2013.6548897
S. Shaikh
With continuing advances in VLSI technology, process, packaging, and architecture, SoC systems continue to grow in complexity. This has resulted in an unprecedented increase in design errors, manufacturing flaws, and customer returns in modern VLSI systems related to high-speed I/O (HSIO) circuits. The situation will be exacerbated in future systems with increasingly smaller form factors, higher integration complexity, and more complex manufacturing processes. This session comprises three presentations, each highlighting the challenges and describing solutions for test and debug of HSIOs.
{"title":"Innovative practices session 3C: Harnessing the challenges of testability and debug of high speed I/Os","authors":"S. Shaikh","doi":"10.1109/VTS.2013.6548897","DOIUrl":"https://doi.org/10.1109/VTS.2013.6548897","url":null,"abstract":"With increasing advances in VLSI technology, process, packaging and architecture, SoC systems continue to increase in complexity. This has resulted in an unprecedented increase in design errors, manufacturing flaws and customer returns in modern VLSI systems related to High Speed IO (HSIO) circuits. The situation will be exacerbated in future systems with increasingly smaller form factors, higher integration complexity, and more complex manufacturing process. This session comprises of three presentations each highlighting the challenges and describing a few solutions for test and debug of HSIOs.","PeriodicalId":138435,"journal":{"name":"2013 IEEE 31st VLSI Test Symposium (VTS)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127598880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RAVAGE: Post-silicon validation of mixed signal systems using genetic stimulus evolution and model tuning
Pub Date: 2013-04-29 | DOI: 10.1109/VTS.2013.6548917
B. Muldrey, Sabyasachi Deyati, Michael J. Giardino, A. Chatterjee
With trends in mixed-signal systems-on-chip indicating increasingly extreme scaling of device dimensions and higher levels of integration, the tasks of design and device validation are becoming increasingly complex. Post-silicon validation of mixed-signal/RF systems provides assurance of the functionality of complex systems that cannot be asserted by even the most advanced simulators. We introduce RAVAGE (from "random," "validation," and "generation"), an algorithm for generating stimuli for post-silicon validation of mixed-signal systems. The approach of RAVAGE is new in that no assumption is made about any design anomaly present in the DUT; rather, the stimulus is generated using the DUT itself, with the objective of maximizing the effect of any behavioral differences between the DUT (hardware) and its behavioral model (software), as seen in the differences between their responses to the same stimulus. Stochastic test generation is used because the exact nature of any behavioral anomaly in the DUT cannot be known a priori. Once a difference is observed, the model parameters are tuned using nonlinear optimization algorithms to remove the difference between the model's and the DUT's responses, and the process (test generation → tuning) is repeated. If a residual error larger than a predetermined threshold remains at the end of this process, it is concluded that the DUT contains unknown and possibly malicious behaviors that need further investigation. Experimental results on an RF system (hardware) are presented to prove the feasibility of the proposed technique.
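A minimal sketch of the generate-then-tune loop the abstract describes, with scalar stand-ins for the DUT and its behavioral model. The paper uses genetic stimulus evolution and nonlinear optimization; both are reduced to crude random search here, and every function, parameter, and threshold below is an illustrative assumption rather than the authors' implementation.

```python
import random

def dut_response(stimulus):
    """Stand-in for the hardware DUT; the cubic term plays the 'unknown' anomaly."""
    return 1.00 * stimulus + 0.02 * stimulus ** 3

def model_response(stimulus, gain):
    """Stand-in behavioral model with a single tunable parameter."""
    return gain * stimulus

def ravage_like_loop(rounds=10, threshold=1e-3, seed=2):
    rng = random.Random(seed)
    gain = 1.0
    for _ in range(rounds):
        # 1) Stimulus generation: pick the input that maximizes the
        #    DUT-vs-model mismatch (crude stand-in for genetic evolution).
        candidates = [rng.uniform(-1.0, 1.0) for _ in range(200)]
        stim = max(candidates,
                   key=lambda s: abs(dut_response(s) - model_response(s, gain)))
        error = abs(dut_response(stim) - model_response(stim, gain))
        if error < threshold:
            return "model explains DUT", gain
        # 2) Model tuning: nudge the parameter to minimize the mismatch on
        #    this stimulus (stand-in for nonlinear optimization), then repeat.
        gain = min((gain + d for d in (-0.05, -0.01, 0.0, 0.01, 0.05)),
                   key=lambda g: abs(dut_response(stim) - model_response(stim, g)))
    return "residual error remains -> investigate DUT", gain

# The linear model can never absorb the cubic term, so a residual error
# persists, which is exactly the signal that flags an unmodeled behavior.
print(ravage_like_loop())
```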
{"title":"RAVAGE: Post-silicon validation of mixed signal systems using genetic stimulus evolution and model tuning","authors":"B. Muldrey, Sabyasachi Deyati, Michael J. Giardino, A. Chatterjee","doi":"10.1109/VTS.2013.6548917","DOIUrl":"https://doi.org/10.1109/VTS.2013.6548917","url":null,"abstract":"With trends in mixed-signal systems-on-chip indicating increasingly extreme scaling of device dimensions and higher levels of integration, the tasks of both design and device validation is becoming increasingly complex. Post-silicon validation of mixed-signal/RF systems provides assurances of functionality of complex systems that cannot be asserted by even some of the most advanced simulators. We introduce RAVAGE (from “random;” “validation;” and “generation”), an algorithm for generating stimuli for post-silicon validation of mixed-signal systems. The approach of RAVAGE is new in that no assumption is made about any design anomaly present in the DDT; but rather, the stimulus is generated using the DUT itself with the objective of maximizing the effects of any behavioral differences between the DUT (hardware) and its behavioral model (software) as can be seen in the differences of their response to the same stimulus. Stochastic test generation is used since the exact nature of any behavioral anomaly in the DUT cannot be known a priori. Once a difference is observed, the model parameters are tuned using nonlinear optimization algorithms to remove the difference between its and the DUT's responses and the process (test generation→tuning) is repeated. If a residual error remains at the end of this process that is larger than a predetermined threshold, then it is concluded that the DUT contains unknown and possibly malicious behaviors that need further investigation. Experimental results on an RF system (hardware) are presented to prove feasibility of the proposed technique.","PeriodicalId":138435,"journal":{"name":"2013 IEEE 31st VLSI Test Symposium (VTS)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117020351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Innovative practices session 1C: Post-silicon validation
Pub Date: 2013-04-29 | DOI: 10.1109/VTS.2013.6548887
N. Hakim, C. Meissner
In the processor functional verification field, pre-silicon verification and post-silicon validation have traditionally been divided into separate disciplines. With the growing use of high-speed hardware emulation, there is an opportunity to join a significant portion of each into a continuous workflow [2], [1]. Three elements of functional verification rely on random code generation (RCG) as a primary test stimulus: processor core-level simulation, hardware emulation, and early hardware validation. Each of these environments becomes the primary focus of the functional verification effort at different phases of the project. With random-code-based test generation as the central feature, and the primary feature shared across these environments, the advantages of a unified workflow include people versatility, test tooling efficiency, and continuity of test technology between design phases. Related common features include some of the debugging techniques, e.g., software-trace-based debugging and instruction flow analysis, and some of the instrumentation, for example counters that are built into the final hardware.

Three key use cases show the value of continuity in a pre-/post-silicon workflow. First, the functional test coverage of a common test can be evaluated in a pre-silicon environment, where more observability for functional test coverage is available through simulation/emulation-only tracing capabilities and simulation/emulation model instrumentation not built into actual hardware [3]. Second, the last test program run on the emulator the day before early hardware arrives becomes the first validation test program on the new hardware. This allows processor bring-up to proceed with protection against simple logic bugs and test code issues, leaving only the more subtle logic bugs, circuit bugs, and manufacturing defects to contend with. Third, an early hardware lab observation can be dropped seamlessly into both the simulation and emulation environments.

Essential differences exist among the three environments and create a challenge for a common workflow. These differences fall into three areas. The first is observability and controllability, which touches on checking, instrumentation and coverage evaluation, and debugging facilities and techniques. For observability, a simulator may leverage instruction-by-instruction results checking, bus trace analysis and protocol verification, and many more error-condition detectors in the model than exist in actual hardware. For hardware, a fail scenario must be defined, considering how the behavior would propagate to a checking point, for example "how do I know if this store wrote the wrong value to memory?" For hardware, an explicit check in code, a load and compare, would be required. The impact of reduced controllability is also that early hardware tests require more elaborate test case and test harness code, since fewer simulator crutches are available to h
{"title":"Innovative practices session 1C: Post-silicon validation","authors":"N. Hakim, C. Meissner","doi":"10.1109/VTS.2013.6548887","DOIUrl":"https://doi.org/10.1109/VTS.2013.6548887","url":null,"abstract":"In the processor functional verification field, pre-silicon verification and post-silicon validation have traditionally been divided into separate disciplines. With the growing use of high-speed hardware emulation, there is an opportunity to join a significant portion of each into a continuous workflow [2], [1]. Three elements of functional verification rely on random code generation (RCG) as a primary test stimulus: processor core-level simulation, hardware emulation, and early hardware validation. Each of these environments becomes the primary focus of the functional verification effort at different phases of the project. Focusing on random-code-based test generation as a central feature, and the primary feature for commonality between these environments, the advantages of a unified workflow include people versatility, test tooling efficiency, and continuity of test technology between design phases. Related common features include some of the debugging techniques - e.g., software-trace-based debugging, and instruction flow analysis; and some of the instrumentation, for example counters that are built into the final hardware. Three key use cases that show the value of continuity of a pre-/post-silicon workflow are as follows: First, the functional test coverage of a common test can be evaluated in a pre-silicon environment, where more observability for functional test coverage is available, by way of simulation/emulation-only tracing capabilities and simulation/emulation model instrumentation not built into actual hardware [3]. The second is having the the last test program run on the emulator the day before early hardware arrives being the first validation test program on the new hardware. This allows processor bringup to proceed with protection against simple logic bugs and test code issues, having only to be concerned with more subtle logic bugs, circuit bugs and manufacturing defects. The last use case is taking an early hardware lab observation and dropping it seamlessly into both the simulation and emulation environments. Essential differences exist in the three environments, and create a challenge to a common workflow. These differences exist in three areas: The first is observability & controllability, which touches on checking, instrumentation & coverage evaluation, and debugging facilities & techniques. For observability, a simulator may leverage instruction-by-instruction results checking; bus trace analysis and protocol verification; and many more error-condition detectors in the model than in actual hardware. For hardware a fail scenario must defined, considering how behavior would propagate to checking point. For example “how do I know if this store wrote the wrong value to memory?” For hardware, an explicit check in code, a load and compare, would be required. 
The impact of less controllabilty is also that early hardware tests require more elaborate test case and test harness code, since fewer simulator crutches are available to h","PeriodicalId":138435,"journal":{"name":"2013 IEEE 31st VLSI Test Symposium (VTS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125384564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}