
Latest publications from the 2015 16th International Workshop on Microprocessor and SOC Test and Verification (MTV)

Specification-Based Test Program Generation for ARM VMSAv8-64 Memory Management Units
M. Chupilko, A. Kamkin, A. Kotsynyak, Alexander Protsenko, S. Smolov, A. Tatarnikov
In this paper, a tool for automatically generating test programs for ARM VMSAv8-64 memory management units is described. The solution is based on the MicroTESK framework being developed at ISP RAS. The tool consists of two parts: an architecture-independent test program generation core and VMSAv8-64 specifications. Such separation is not a new principle in the area -- it is applied in a number of industrial test program generators, including IBM's Genesys-Pro. The main distinction is in how specifications are represented, what sort of information is extracted from them, and how that information is exploited. In the suggested approach, specifications comprise descriptions of the memory access instructions (loads and stores) and definitions of memory management mechanisms such as translation lookaside buffers, page tables, and cache units. The tool analyzes the specifications and extracts the execution paths and inter-path dependencies. The extracted information is used to systematically enumerate test programs for a given user-defined template. Test data for a particular program are generated by using symbolic execution and constraint solving techniques.
DOI: 10.1109/MTV.2015.13
Citations: 8
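The path-enumeration and constraint-solving flow described in the abstract can be sketched in a few lines. This is a toy model, not MicroTESK: the TLB state, address widths, search space and instruction syntax below are all assumptions for illustration.

```python
# Toy model of specification-based test generation for a memory subsystem:
# a test template requests a sequence of TLB hit/miss outcomes, and a tiny
# "constraint solver" enumerates addresses that realise each outcome.
PAGE_BITS = 12
TLB = {0x3, 0x7, 0x9}          # page numbers currently cached (assumed state)

def access_path(vaddr):
    """Execution path taken by a load: 'hit' or 'miss'."""
    return "hit" if (vaddr >> PAGE_BITS) in TLB else "miss"

def solve(path, search_space=range(0, 16 << PAGE_BITS, 1 << PAGE_BITS)):
    """Enumerate candidate addresses until one satisfies the path constraint."""
    for vaddr in search_space:
        if access_path(vaddr) == path:
            return vaddr
    raise ValueError(f"unsatisfiable path: {path}")

def generate_test_program(template):
    """Turn a template like ['miss', 'hit'] into load instructions."""
    return [f"LDR X0, [#{solve(p):#x}]   ; expect TLB {p}" for p in template]

for line in generate_test_program(["miss", "hit", "miss"]):
    print(line)
```

A real generator solves symbolic path constraints over page-table contents rather than brute-forcing addresses, but the template-to-program structure is the same.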
Performance of a SystemVerilog Sudoku Solver with VCS
Jeremy Ridgeway
Constrained random verification relies on efficient generation of random values according to the constraints provided. As constraint solver metrics are not easily determined, solver efficiency can usually only be measured per project, late in the verification cycle. In this paper we dissect several SystemVerilog-based Sudoku puzzle solvers and compare their efficiency with the VCS constraint solver. Further, we compare the efficiency of constraints applied over object instance hierarchies (the game board is object oriented) versus flat constraints (the game board is fully contained within a single class). Finally, we compare both approaches with several optimizations in the Sudoku solver. The common Sudoku game board is a 9x9 grid, yielding approximately 2,349 constraint clauses to solve. We show that VCS can solve grid sizes up to 49x49, with 357,749 clauses. While each clause is a simple inequality, the size and structure of the constraint formula to solve provide valuable feedback on the solver's efficiency.
DOI: 10.1109/MTV.2015.14
Citations: 0
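The two clause counts quoted in the abstract (2,349 for 9x9 and 357,749 for 49x49) are both consistent with a simple closed form in which each cell contributes N-1 inequality clauses per row, column and box, plus a small fixed number of per-cell clauses. The per-cell constant 5 below is fitted to those two figures, not taken from the paper:

```python
def sudoku_clause_count(n):
    """Clause count for an (n*n) x (n*n) Sudoku under an assumed model.

    Each of the N*N cells (N = n*n) contributes N-1 inequality clauses per
    row, column and box; the extra constant 5 per cell (e.g. range clauses)
    is an assumption fitted to the abstract's two data points.
    """
    N = n * n                       # board edge length (9 for classic Sudoku)
    per_cell = 3 * (N - 1) + 5      # row + column + box inequalities + fixed
    return N * N * per_cell

print(sudoku_clause_count(3))   # classic 9x9 board  -> 2349
print(sudoku_clause_count(7))   # the 49x49 board solved by VCS -> 357749
```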
Harnessing Nanoscale Device Properties for Hardware Security
Bicky Shakya, Fahim Rahman, M. Tehranipoor, Domenic Forte
Traditional measures for hardware security have heavily relied on currently prevalent CMOS technology. However, with the emergence of new vulnerabilities, attacks and limitations in current solutions, researchers are now looking into exploiting emerging nanoelectronic devices for security applications. In this paper, we discuss three emerging nanoelectronic technologies, namely phase change memory, graphene and carbon nanotubes, to point out some unique features that they offer, and analyze how these features can aid in hardware security. In addition, we present challenges and future research directions for effectively integrating emerging nanoscale devices into hardware security.
DOI: 10.1109/MTV.2015.18
Citations: 0
A Topological Approach to Hardware Bug Triage
Rico Angell, Ben Oztalay, A. DeOrio
Verification is a critical bottleneck in the time to market of a new digital design. As complexity continues to increase, post-silicon validation shoulders an increasing share of the verification/validation effort. Post-silicon validation is burdened by large volumes of test failures, and is further complicated by root cause bugs that manifest in multiple test failures. At present, these failures are prioritized and assigned to validation engineers in an ad-hoc fashion. When multiple failures caused by the same root cause bug are debugged by multiple engineers at the same time, scarce, time-critical engineering resources are wasted. Our scalable bug triage technique begins with a database of test failures. It extracts defining features from the failure reports, using a novel, topology-aware approach based on graph partitioning. It then leverages unsupervised machine learning to extract the structure of the failures, identifying groups of failures that are likely to be the result of a common root cause. With our technique, related failures can be debugged as a group, rather than individually. Additionally, we propose a metric for measuring verification efficiency as a result of bug triage called Unique Debugging Instances (UDI). We evaluated our approach on the industrial-size OpenSPARC T2 design with a set of injected bugs, and found that our approach increased average verification efficiency by 243%, with a confidence interval of 99%.
DOI: 10.1109/MTV.2015.10
Citations: 4
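The paper partitions a failure graph and then clusters with unsupervised learning; as a rough illustration of the grouping idea only, the sketch below greedily clusters failure reports whose feature sets overlap strongly. The feature names, reports and Jaccard threshold are all assumptions, and a similarity threshold stands in for the paper's topology-aware partitioning:

```python
# Group test failures that likely share a root cause, so one engineer can
# debug the whole group instead of several engineers debugging duplicates.
def jaccard(a, b):
    """Set similarity: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def triage(failures, threshold=0.5):
    """Greedy grouping of failure reports by feature-set similarity."""
    groups = []
    for name, features in failures.items():
        for group in groups:
            if any(jaccard(features, failures[m]) >= threshold for m in group):
                group.append(name)
                break
        else:
            groups.append([name])        # no similar group: start a new one
    return groups

failures = {
    "fail_001": {"l2_timeout", "core3", "addr_0x40"},
    "fail_002": {"l2_timeout", "core3", "addr_0x80"},
    "fail_003": {"fp_nan", "core0"},
}
print(triage(failures))   # fail_001/fail_002 grouped, fail_003 alone
```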
Characterizing Processors for Energy and Performance Management
Harshit Goyal, V. Agrawal
A processor executes a computing job in a certain number of clock cycles. The clock frequency determines the time that the job will take. Another parameter, cycle efficiency or cycles per joule, determines how much energy the job will consume. The execution time measures performance and, in combination with energy dissipation, influences power, thermal behavior, power supply noise and battery life. We describe a method for power management of a processor. An Intel processor in 32nm bulk CMOS technology is used as an illustrative example. First, we characterize the technology by H-spice simulation of a ripple carry adder for critical path delay, dynamic energy and static power at a wide range of supply voltages. The adder data is then scaled based on the clock frequency, supply voltage, thermal design power (TDP) and other specifications of the processor. To optimize the time and energy performance, voltage and clock frequency are determined showing 28% reduction both in execution time and energy dissipation.
DOI: 10.1109/MTV.2015.22
Citations: 3
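The voltage/frequency trade-off the characterization exploits can be illustrated with an alpha-power delay model: delay grows as supply voltage approaches the threshold, while dynamic energy per cycle falls as V squared. All constants below are assumptions for illustration, not the paper's 32nm H-spice characterization data:

```python
# Sketch of the supply-voltage sweep behind processor energy/performance
# characterization (alpha-power law; constants assumed, not from the paper).
VTH, ALPHA = 0.35, 1.3             # threshold voltage (V), velocity saturation

def delay(v):
    """Gate delay in arbitrary units: ~ V / (V - Vth)^alpha."""
    return v / (v - VTH) ** ALPHA

def energy_per_cycle(v):
    """Dynamic energy per cycle ~ C * V^2, with C normalised to 1."""
    return v * v

def sweep(voltages):
    """Rows of (voltage, delay relative to 1.0 V, energy relative to 1.0 V)."""
    d0, e0 = delay(1.0), energy_per_cycle(1.0)
    return [(v, delay(v) / d0, energy_per_cycle(v) / e0) for v in voltages]

for v, d, e in sweep([0.7, 0.8, 0.9, 1.0, 1.1]):
    print(f"V={v:.1f}  delay x{d:.2f}  energy x{e:.2f}")
```

Picking the operating point is then a search over this table for the voltage/frequency pair that meets the deadline at minimum energy.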
Automatic Bug Fixing
Daniel Hansson
Several EDA tools automate the debug process [1,2] or part of it [3,4]. The result is less manual work, and bugs are fixed faster [5]. However, actually fixing the bugs and committing the fixes to the revision control system is still a manual process. In this paper we explore how to automate that last step: bug fixing. First we discuss how an automatic bug-fix flow should work. We implemented the automatic bug fixing mechanism in our existing automatic debug tool [1] and ran an internal trial. We then list the issues we learned from this experience and how to avoid them. Our conclusion is that automatic bug fixing, i.e. automatically modifying the code in order to make a failing test pass, is very useful, but it is best done locally, i.e. the fix should not be committed. Instead, a bug report should be issued to the engineers who made the bad commits, letting them take action. Automatically committing the identified fix is very simple (unlike the analysis that leads to the fix), but it leads to issues such as human-tool race conditions, fault oscillation and removal of partial implementations.
DOI: 10.1109/MTV.2015.21
Citations: 3
Enhancing the Stress and Efficiency of RIS Tools Using Coverage Metrics
John Hudson, Gunaranjan Kurucheti
Random instruction sequence (RIS) tools continue to be the main strategy for verifying and validating chip designs. In every RIS tool, test suites are created targeted to a particular functionality and run on the design. Coverage metrics provide us one mechanism to ensure and measure the completeness and thoroughness of these test suites and create new test suites directed towards unexplored areas of the design. The results from the coverage metrics can also be used to improve the cluster efficiency. In this work we discuss the results from a coverage tool that extracted and analyzed stimuli quality from large regressions, using statistical visualization. Using this coverage tool, we captured events relating to the memory sub-system and improved the stress/efficiency of the tool by making the required modifications to the tool. We ran several experiments based on the event collection and increased the ability in the tool to create scenarios exercising patterns that can potentially highlight complex bugs.
DOI: 10.1109/MTV.2015.19
Citations: 1
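The core feedback loop is simple: record which events each random test exercised, then report the unhit coverage bins so new test suites can be directed at them. The bin and seed names below are hypothetical, not the tool's actual event set:

```python
# Coverage-directed feedback for a random instruction sequence (RIS) flow:
# measure which memory-subsystem events the regressions hit, and surface
# the holes that new test suites should target.
COVERAGE_BINS = {"l1_hit", "l1_miss", "l2_miss", "cacheline_evict",
                 "unaligned_access", "page_cross"}

def coverage(test_runs):
    """Return (hit bins, unhit bins) across all recorded test runs."""
    hit = set()
    for events in test_runs.values():
        hit |= events & COVERAGE_BINS
    return hit, COVERAGE_BINS - hit

runs = {
    "ris_seed_01": {"l1_hit", "l1_miss"},
    "ris_seed_02": {"l1_hit", "l2_miss", "page_cross"},
}
hit, unhit = coverage(runs)
print(f"covered {len(hit)}/{len(COVERAGE_BINS)} bins; missing: {sorted(unhit)}")
```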
Modeling and Analysis of Trusted Boot Processes Based on Actor Network Procedures
Mark Nelson, P. Seidel
We discuss a framework for formally modeling and analyzing the security of trusted boot processes. The framework is based on actor networks. It considers essential cyber-physical features of the system and how to check the authenticity of the software it is running.
DOI: 10.1109/MTV.2015.20
Citations: 1
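The authenticity check at the heart of a trusted boot can be sketched as a measured hash chain: each stage is hashed before control is handed over, and the measurement must match a known-good value. This is a minimal sketch of the general mechanism, not the paper's actor-network model; the stage names and images are hypothetical.

```python
import hashlib

GOLDEN = {}  # stage name -> expected SHA-256 digest (provisioned off-line)

def measure(image: bytes) -> str:
    """Measurement of a boot-stage image: its SHA-256 digest."""
    return hashlib.sha256(image).hexdigest()

def provision(stages):
    """Record known-good measurements, e.g. at manufacture time."""
    GOLDEN.update({name: measure(img) for name, img in stages.items()})

def verified_boot(stages):
    """Boot each stage in order, refusing any image whose hash drifted."""
    for name, img in stages.items():
        if measure(img) != GOLDEN.get(name):
            raise RuntimeError(f"boot halted: {name} failed verification")
    return "booted"

stages = {"bootrom": b"stage0", "uboot": b"stage1", "kernel": b"stage2"}
provision(stages)
print(verified_boot(stages))   # all measurements match
```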
Novel MC/DC Coverage Test Sets Generation Algorithm, and MC/DC Design Fault Detection Strength Insights
Mohamed A. Salem, K. Eder
This paper introduces Modified Condition/Decision Coverage (MC/DC), a novel MC/DC coverage test set generation algorithm named OBSRV, and MC/DC design fault detection strength. The paper presents an overview of MC/DC in terms of its definition, its types, and the conventional MC/DC approaches. OBSRV resolves MC/DC controllability and observability by using principles found in the D-algorithm, the foundation of state-of-the-art ATPG. It thereby leverages hardware test principles to advance MC/DC for software and hardware structural coverage. The paper investigates the scalability and complexity of OBSRV to prove its suitability for practical designs, examines MC/DC functional design fault detection strength, and analyzes empirical results on the main design fault classes in microprocessors.
DOI: 10.1109/MTV.2015.15
Citations: 1
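The MC/DC criterion itself (not the OBSRV algorithm) is easy to demonstrate: for each condition in a decision, a test set must contain a pair of tests that differ only in that condition and flip the decision's outcome. The sketch below enumerates such independence pairs for the example decision (a and b) or c:

```python
from itertools import product

def decision(a, b, c):
    """Example decision with three conditions."""
    return (a and b) or c

def independence_pairs(cond_index, n=3):
    """Test pairs differing only in condition cond_index that flip the decision."""
    pairs = []
    for t in product([False, True], repeat=n):
        t2 = list(t)
        t2[cond_index] = not t2[cond_index]     # toggle exactly one condition
        t2 = tuple(t2)
        if decision(*t) != decision(*t2):
            pairs.append((t, t2))
    return pairs

for i, name in enumerate("abc"):
    print(f"condition {name}: {len(independence_pairs(i))} independence pairs")
```

Note that 'a' can only show independent effect when b is true and c is false, which is exactly the controllability/observability problem the D-algorithm principles are brought in to solve.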
Hybrid Post Silicon Validation Methodology for Layerscape SoCs involving Secure Boot: Boot (Secure & Non-secure) and Kernel Integration with Randomized Test
Amandeep Sharan, Ashish Gupta
Design advancements in the semiconductor industry have shrunk time-to-market schedules while demanding quality assurance that keeps chips in perfect tandem with their specifications. Hence, post-silicon validation, which accounts for a significant share of time-to-money, becomes one of the most highly leveraged steps in chip implementation. This also puts more pressure to reduce the validation cycle and automate extensively to speed up validation. Nowadays, companies are aiming for more complex designs in a shorter duration. So, as SoC complexity keeps growing, we need real software applications and specialized and random tests to observe and check functionality, along with regression and electrical tests for checking chip specifications. For this, kernel boot is one of the best methodologies to run on the first silicon parts for a complete system test, followed by random tests and electrical validation. This paper presents a novel methodology for a validation flow that facilitates kernel boot, both secure and non-secure, from various memory sources, integrating random test generation in every iteration. This flow also covers boot validation, electrical validation and complex scenarios like secure boot with deep sleep. It will cut down validation run time by 3-4 times, thus notably improving performance and leading to a major reduction in time to market.
DOI: 10.1109/MTV.2015.16
Citation count: 1
Journal
2015 16th International Workshop on Microprocessor and SOC Test and Verification (MTV)