Specification-Based Test Program Generation for ARM VMSAv8-64 Memory Management Units
M. Chupilko, A. Kamkin, A. Kotsynyak, Alexander Protsenko, S. Smolov, A. Tatarnikov
In this paper, a tool for automatically generating test programs for ARM VMSAv8-64 memory management units is described. The solution is based on the MicroTESK framework being developed at ISP RAS. The tool consists of two parts: an architecture-independent test program generation core and VMSAv8-64 specifications. Such separation is not new in this area; it is applied in a number of industrial test program generators, including IBM's Genesys-Pro. The main distinction is in how specifications are represented, what sort of information is extracted from them, and how that information is exploited. In the suggested approach, the specifications comprise descriptions of the memory access instructions (loads and stores) and definitions of the memory management mechanisms, such as translation lookaside buffers, page tables, and cache units. The tool analyzes the specifications and extracts the execution paths and inter-path dependencies. The extracted information is used to systematically enumerate test programs for a given user-defined template. Test data for a particular program are generated using symbolic execution and constraint-solving techniques.
DOI: 10.1109/MTV.2015.13

Performance of a SystemVerilog Sudoku Solver with VCS
Jeremy Ridgeway
Constrained random verification relies on efficient generation of random values according to the constraints provided. Because constraint-solver metrics are not easily determined, solver efficiency can usually be measured only per project and late in the verification cycle. In this paper we dissect several SystemVerilog-based Sudoku puzzle solvers and compare their efficiency with the VCS constraint solver. Further, we compare the efficiency of constraints applied over object instance hierarchies (the game board is object oriented) versus flat constraints (the game board is fully contained within a single class). Finally, we compare both approaches with several optimizations in the Sudoku solver. The common Sudoku game board is a 9x9 grid, yielding approximately 2,349 constraint clauses to solve. We show that VCS can solve grid sizes up to 49x49 with 357,749 clauses. While each clause is a simple inequality, the size of the constraint formula to solve and its structure provide valuable feedback on the solver's efficiency.
DOI: 10.1109/MTV.2015.14

Harnessing Nanoscale Device Properties for Hardware Security
Bicky Shakya, Fahim Rahman, M. Tehranipoor, Domenic Forte
Traditional measures for hardware security have relied heavily on currently prevalent CMOS technology. However, with the emergence of new vulnerabilities and attacks, and the limitations of current solutions, researchers are now looking into exploiting emerging nanoelectronic devices for security applications. In this paper, we discuss three emerging nanoelectronic technologies, namely phase change memory, graphene, and carbon nanotubes, point out some unique features that they offer, and analyze how these features can aid hardware security. In addition, we present challenges and future research directions for effectively integrating emerging nanoscale devices into hardware security.
DOI: 10.1109/MTV.2015.18

A Topological Approach to Hardware Bug Triage
Rico Angell, Ben Oztalay, A. DeOrio
Verification is a critical bottleneck in the time to market of a new digital design. As complexity continues to increase, post-silicon validation shoulders an increasing share of the verification and validation effort. Post-silicon validation is burdened by large volumes of test failures and is further complicated by root-cause bugs that manifest in multiple test failures. At present, these failures are prioritized and assigned to validation engineers in an ad hoc fashion. When multiple failures caused by the same root-cause bug are debugged by multiple engineers at the same time, scarce, time-critical engineering resources are wasted. Our scalable bug triage technique begins with a database of test failures. It extracts defining features from the failure reports using a novel, topology-aware approach based on graph partitioning. It then leverages unsupervised machine learning to extract the structure of the failures, identifying groups of failures that are likely to be the result of a common root cause. With our technique, related failures can be debugged as a group rather than individually. Additionally, we propose a metric for measuring verification efficiency as a result of bug triage, called Unique Debugging Instances (UDI). We evaluated our approach on the industrial-size OpenSPARC T2 design with a set of injected bugs and found that it increased average verification efficiency by 243% with 99% confidence.
DOI: 10.1109/MTV.2015.10

Characterizing Processors for Energy and Performance Management
Harshit Goyal, V. Agrawal
A processor executes a computing job in a certain number of clock cycles. The clock frequency determines the time that the job will take. Another parameter, cycle efficiency, or cycles per joule, determines how much energy the job will consume. The execution time measures performance and, in combination with energy dissipation, influences power, thermal behavior, power supply noise, and battery life. We describe a method for power management of a processor, using an Intel processor in 32 nm bulk CMOS technology as an illustrative example. First, we characterize the technology by HSPICE simulation of a ripple-carry adder for critical path delay, dynamic energy, and static power over a wide range of supply voltages. The adder data are then scaled based on the clock frequency, supply voltage, thermal design power (TDP), and other specifications of the processor. To optimize time and energy performance, a supply voltage and clock frequency are determined that yield a 28% reduction in both execution time and energy dissipation.
DOI: 10.1109/MTV.2015.22

Automatic Bug Fixing
Daniel Hansson
Several EDA tools automate the debug process [1], [2] or part of it [3], [4]. The result is less manual work, and bugs are fixed faster [5]. However, the actual process of fixing the bugs and committing the fixes to the revision control system is still manual. In this paper we explore how to automate that last step: automatic bug fixing. First we discuss how an automatic bug fix flow should work. We implemented the automatic bug fixing mechanism in our existing automatic debug tool [1] and ran an internal trial. We then list the various issues that we learned from this experience and how to avoid them. Our conclusion is that automatic bug fixing, i.e., automatically modifying the code in order to make a failing test pass, is very useful, but it is best done locally: the fix should not be committed. Instead, a bug report should be issued to the engineers who made the offending commits, and they should take action. Automatically committing the identified fix is very simple (unlike the analysis that leads to the fix), but it leads to a number of issues such as human-tool race conditions, fault oscillation, and removal of partial implementations.
DOI: 10.1109/MTV.2015.21

Enhancing the Stress and Efficiency of RIS Tools Using Coverage Metrics
John Hudson, Gunaranjan Kurucheti
Random instruction sequence (RIS) tools continue to be the main strategy for verifying and validating chip designs. In every RIS tool, test suites are created that target a particular functionality and are run on the design. Coverage metrics provide one mechanism to ensure and measure the completeness and thoroughness of these test suites and to create new test suites directed towards unexplored areas of the design. The results from the coverage metrics can also be used to improve cluster efficiency. In this work we discuss the results from a coverage tool that extracted and analyzed stimuli quality from large regressions using statistical visualization. Using this coverage tool, we captured events relating to the memory subsystem and improved the stress and efficiency of the RIS tool by making the required modifications to it. We ran several experiments based on the event collection and increased the tool's ability to create scenarios exercising patterns that can potentially expose complex bugs.
DOI: 10.1109/MTV.2015.19

Modeling and Analysis of Trusted Boot Processes Based on Actor Network Procedures
Mark Nelson, P. Seidel
We discuss a framework for formally modeling and analyzing the security of trusted boot processes. The presented framework is based on actor networks. It considers essential cyber-physical features of the system and how to check the authenticity of the software the system is running.
DOI: 10.1109/MTV.2015.20

Novel MC/DC Coverage Test Sets Generation Algorithm, and MC/DC Design Fault Detection Strength Insights
Mohamed A. Salem, K. Eder
This paper covers Modified Condition/Decision Coverage (MC/DC), a novel MC/DC coverage test-set generation algorithm named OBSRV, and the design fault detection strength of MC/DC. It first gives an overview of MC/DC: its definition, its types, and the conventional MC/DC approaches. It then introduces OBSRV, a novel algorithm for MC/DC coverage test-set generation that resolves MC/DC controllability and observability using principles found in the D-algorithm, the foundation of state-of-the-art ATPG; it thereby leverages hardware test principles to advance MC/DC for software and hardware structural coverage. The paper investigates the scalability and complexity of OBSRV to establish its suitability for practical designs, examines the strength of MC/DC in detecting functional design faults, and analyzes empirical results for the main design fault classes in microprocessors.
DOI: 10.1109/MTV.2015.15

Hybrid Post Silicon Validation Methodology for Layerscape SoCs involving Secure Boot: Boot (Secure & Non-secure) and Kernel Integration with Randomized Test
Amandeep Sharan, Ashish Gupta
Design advancements in the semiconductor industry have shrunk time-to-market schedules while requiring chips to conform closely to their specifications. Post-silicon validation, which accounts for a significant share of time-to-money, has therefore become one of the most highly leveraged steps in chip implementation, putting pressure on teams to shorten the validation cycle and automate extensively. As companies aim for more complex designs in shorter durations and SoC complexity keeps growing, validation needs real software applications, specialized and random tests to observe and check functionality, and regression and electrical tests to check chip specifications. Kernel boot is one of the best methodologies to run on first-silicon parts as a complete system test, followed by random tests and electrical validation. This paper presents a novel validation flow that facilitates kernel boot, both secure and non-secure, from various memory sources and integrates random test generation into every iteration. The flow also covers boot validation, electrical validation, and complex scenarios such as secure boot with deep sleep. It cuts validation run time by a factor of 3-4, notably improving performance and leading to a major reduction in time to market. Other enhancements include improved Customer Satisfaction Index (CSI) and Performance Quality Index (PQI) for boot and shorter electrical cycles.
DOI: 10.1109/MTV.2015.16
