
Latest Publications from the 2016 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS)

TaskShuffler: A Schedule Randomization Protocol for Obfuscation against Timing Inference Attacks in Real-Time Systems
Pub Date: 2016-04-11 DOI: 10.1109/RTAS.2016.7461362
Man-Ki Yoon, Sibin Mohan, Chien-Ying Chen, L. Sha
The high degree of predictability in real-time systems makes it possible for adversaries to launch timing inference attacks such as those based on side-channels and covert-channels. We present TaskShuffler, a schedule obfuscation method aimed at randomizing the schedule for such systems while still providing the real-time guarantees that are necessary for their safe operation. This paper also analyzes the effect of these mechanisms by presenting schedule entropy - a metric to measure the uncertainty (as perceived by attackers) introduced by TaskShuffler. These mechanisms will increase the difficulty for would-be attackers thus improving the overall security guarantees for real-time systems.
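As an illustration of the schedule-entropy idea (a sketch only, not the paper's exact formulation), one can treat each time slot as a random variable over which task occupies it across many randomized schedules and measure its Shannon entropy; the task names and observations below are invented:

```python
import math
from collections import Counter

def slot_entropy(schedules, slot):
    """Shannon entropy (in bits) of which task occupies a given time slot,
    estimated over many observed randomized schedules."""
    counts = Counter(schedule[slot] for schedule in schedules)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Three observed schedules of four slots each (toy data).
obs = [
    ["A", "B", "A", "C"],
    ["B", "A", "A", "C"],
    ["A", "B", "C", "A"],
]
print(round(slot_entropy(obs, 0), 3))  # slot 0 mixes A and B -> 0.918 bits
```

A deterministic fixed-priority schedule would yield zero entropy in every slot; higher per-slot entropy means an observing attacker learns less from timing measurements.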
Citations: 66
Buffer Space Allocation for Real-Time Priority-Aware Networks
Pub Date: 2016-04-11 DOI: 10.1109/RTAS.2016.7461324
H. Kashif, Hiren D. Patel
In this work, we address the challenge of incorporating buffer space constraints in worst-case latency analysis for priority-aware networks. A priority-aware network is a wormhole-switched network-on-chip with distinct virtual channels per priority. Prior worst-case latency analyses assume that the routers have infinite buffer space allocated to the virtual channels. This assumption renders these analyses impractical when considering actual deployments. This is because an implementation of the priority-aware network imposes buffer constraints on the application. These constraints can result in back pressure on the communication, which the analyses must incorporate. Consequently, we extend a worst-case latency analysis for priority-aware networks to include buffer space constraints. We provide the theory for these extensions and prove their correctness. We experiment on a large set of synthetic benchmarks, and show that we can deploy applications on priority-aware networks with virtual channels of sizes as small as two flits. In addition, we propose a polynomial time buffer space allocation algorithm. This algorithm minimizes the buffer space required at the virtual channels while scheduling the application sets on the target priority-aware network. Our empirical evaluation shows that the proposed algorithm reduces buffer space requirements in the virtual channels by approximately 85% on average.
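The paper's polynomial-time allocation algorithm is not reproduced here, but the underlying trade-off can be sketched generically: assuming schedulability is monotone in buffer size, one could binary-search the smallest virtual-channel buffer (in flits) that keeps a worst-case latency test passing. The `is_schedulable` predicate below is a hypothetical stand-in for such a test:

```python
def min_buffer(is_schedulable, lo=2, hi=64):
    """Binary-search the smallest virtual-channel buffer size (in flits)
    that keeps the application set schedulable. Assumes schedulability
    is monotone in buffer size; returns None if even `hi` flits fail."""
    if not is_schedulable(hi):
        return None
    while lo < hi:
        mid = (lo + hi) // 2
        if is_schedulable(mid):
            hi = mid  # feasible: try smaller buffers
        else:
            lo = mid + 1  # infeasible: need more space
    return lo

# Toy predicate: the latency test passes once the buffer holds >= 5 flits.
print(min_buffer(lambda size: size >= 5))  # -> 5
```

The monotonicity assumption (bigger buffers never hurt schedulability) is what makes the binary search sound; the paper's algorithm instead allocates buffers directly during analysis.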
Citations: 20
A Real-Time Scratchpad-Centric OS for Multi-Core Embedded Systems
Pub Date: 2016-04-11 DOI: 10.1109/RTAS.2016.7461321
Rohan Tabish, R. Mancuso, Saud Wasly, A. Alhammad, Sujit S. Phatak, R. Pellizzoni, M. Caccamo
Multi-core processors have replaced single-core systems in almost every segment of the industry. Unfortunately, their increased complexity often causes a loss of temporal predictability which represents a key requirement for hard real-time systems. Major sources of unpredictability are the shared low level resources, such as the memory hierarchy and the I/O subsystem. In this paper, we approach the problem of shared resource arbitration at an OS-level and propose a novel scratchpad-centric OS design for multi-core platforms. In the proposed OS, the predictable usage of shared resources across multiple cores represents a central design-time goal. Hence, we show (i) how contention-free execution of real-time tasks can be achieved on scratchpad-based architectures, and (ii) how a separation of application logic and I/O operations in the time domain can be enforced. To validate the proposed design, we implemented the proposed OS using a commercial-off-the-shelf (COTS) platform. Experiments show that this novel design delivers predictable temporal behavior to hard real-time tasks, and it improves performance up to 2.1× compared to traditional approaches.
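One common ingredient of scratchpad-centric designs, shown here as an illustrative sketch rather than this paper's exact mechanism, is a phased execution model: a DMA engine loads the next task's code and data into scratchpad while the current task computes, so execution itself never contends for main memory. Task names and phase lengths below are invented:

```python
def overlap_schedule(tasks):
    """Each entry: (name, load_time, exec_time). The DMA loads task i+1's
    scratchpad contents while task i executes on the core, hiding memory
    latency. Returns (name, exec_start, exec_end) per task."""
    t_bus = 0  # time at which the DMA/bus is next free
    t_cpu = 0  # time at which the CPU is next free
    out = []
    for name, load, exe in tasks:
        start_load = t_bus
        ready = start_load + load        # task fully resident in scratchpad
        start_exe = max(ready, t_cpu)    # wait for both data and the CPU
        t_bus = ready
        t_cpu = start_exe + exe
        out.append((name, start_exe, t_cpu))
    return out

for entry in overlap_schedule([("t1", 2, 5), ("t2", 3, 4)]):
    print(entry)
```

In this toy run, t2's load (finishing at time 5) is fully hidden behind t1's execution (finishing at time 7), so the core is never idle waiting for memory.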
Citations: 68
Demo Abstract: Timing Aware Hardware Virtualization on the L4Re Microkernel Systems
Pub Date: 2016-04-11 DOI: 10.1109/RTAS.2016.7461335
A. Lackorzynski, Alexander Warg
Hardware virtualization support has found its way into real-time and embedded systems. It is paramount for an efficient concurrent execution of multiple systems on a single platform, including commodity operating systems and their applications. Isolation is a key feature for these systems, both in the spatial and temporal domain, as it allows for secure combinations of real-time and non-real-time applications. For such requirements, microkernels are a perfect fit as they provide the foundation for building secure as well as real-time aware systems. Lately, microkernels have learned to support hardware-provided virtualization features, morphing them into microhypervisors. In our demo, we show our open-source and commercially supported L4Re system running Linux and FreeRTOS side by side on a multi-core ARM platform. While for Linux we use the hardware virtualization features, i.e., ARM's Virtualization Extensions, we revert to paravirtualization for running the FreeRTOS guest. Paravirtualization adapts the guest kernel to run as a native application on the microkernel. For simple guests that do not use advanced hardware features such as virtual memory and multiple privilege levels, virtualization is simplified and the state of a virtual machine is significantly reduced, improving interrupt delivery and context-switching latency. Both guests, as well as a native application, drive LEDs to demonstrate steering of actual devices and to show their liveness. Taking down the Linux guest will not disturb the others.
Citations: 1
Criticality- and Requirement-Aware Bus Arbitration for Multi-Core Mixed Criticality Systems
Pub Date: 2016-04-11 DOI: 10.1109/RTAS.2016.7461327
Mohamed Hassan, Hiren D. Patel
This work presents CArb, an arbiter for controlling accesses to the shared memory bus in multi-core mixed criticality systems. CArb is a requirement-aware arbiter that optimally allocates service to tasks based on their requirements. It is also criticality-aware since it incorporates criticality as a first-class principle in arbitration decisions. CArb supports any number of criticality levels and does not impose any restrictions on mapping tasks to processors. Hence, it operates in tandem with existing processor scheduling policies. In addition, CArb is able to dynamically adapt memory bus arbitration at run time to respond to increases in the monitored execution times of tasks. Utilizing this adaptation, CArb is able to offset these increases and hence postpone the system's need to switch to a degraded mode. We prototype CArb and evaluate it with an avionics case study from Honeywell as well as synthetic experiments.
Citations: 43
OSEK-Like Kernel Support for Engine Control Applications under EDF Scheduling
Pub Date: 2016-04-11 DOI: 10.1109/RTAS.2016.7461345
Vincenzo Apuzzo, Alessandro Biondi, G. Buttazzo
Engine control applications typically include computational activities consisting of periodic tasks, activated by timers, and engine-triggered tasks, activated at specific angular positions of the crankshaft. Such tasks are typically managed by an OSEK-compliant real-time kernel using a fixed-priority scheduler, as specified in the AUTOSAR standard adopted by most automotive industries. Recent theoretical results, however, have highlighted significant limitations of fixed-priority scheduling in managing engine-triggered tasks that could be solved by a dynamic scheduling policy. To address this issue, this paper proposes a new kernel implementation within the ERIKA Enterprise operating system, providing EDF scheduling for both periodic and engine-triggered tasks. The proposed kernel has been conceived to have an API similar to the AUTOSAR/OSEK standard one, limiting the effort needed to use the new kernel with an existing legacy application. The proposed kernel implementation is discussed and evaluated in terms of run-time overhead and footprint.
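For reference, the dynamic policy in question, EDF, is simple to state: at every scheduling point, run the ready job with the earliest absolute deadline. A minimal sketch (task names and deadlines are illustrative, not taken from the paper):

```python
def edf_pick(ready):
    """Earliest-Deadline-First: choose the ready job with the nearest
    absolute deadline; ties are broken deterministically by task name."""
    return min(ready, key=lambda job: (job["deadline"], job["task"]))

ready = [
    {"task": "timer_10ms", "deadline": 10},
    {"task": "crank_sync", "deadline": 4},   # engine-triggered, urgent at high RPM
    {"task": "timer_50ms", "deadline": 50},
]
print(edf_pick(ready)["task"])  # -> crank_sync
```

Unlike fixed priorities, the relative urgency of an engine-triggered job here rises automatically as engine speed shortens its deadline, which is what makes EDF attractive for this workload.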
Citations: 10
Demo Abstract: Response-Time Analysis for Task Chains in Communicating Threads with pyCPA
Pub Date: 2016-04-11 DOI: 10.1109/RTAS.2016.7461332
Johannes Schlatow, Jonas Peeck, R. Ernst
When modelling software components for timing analysis, we typically encounter functional chains of tasks that lead to precedence relations. As these task chains represent a functionally-dependent sequence of operations, real-time systems usually place a requirement on their end-to-end latency. When mapped to software components, functional chains often result in communicating threads. Since threads, rather than tasks, are scheduled, specific task chain properties arise that can be exploited for response-time analysis by extending the busy-window analysis for such task chains in static-priority preemptive systems. We implemented this analysis by means of an analysis extension for pyCPA, a research-grade implementation of compositional performance analysis (CPA). The major scope of this demo is to show how CPA can be reasonably performed for realistic component-based systems. It also demonstrates how research on and with CPA is conducted using the pyCPA analysis framework. In the course of this demo, we show two approaches for the extraction of an appropriate timing model: 1) derivation from a contract-based specification of the software components, and 2) a tracing-based approach suitable for black-box components. We also demonstrate how this timing model is fed into the analysis extension in order to obtain response-time results for the task chains of interest.
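For context, the classic busy-window analysis that this work extends computes a task's worst-case response time as the least fixed point of R = C + Σ_j ⌈R/T_j⌉·C_j over higher-priority interferers. A minimal sketch of that baseline (without the task-chain extension, and ignoring blocking and jitter):

```python
import math

def response_time(task, higher_prio, limit=10_000):
    """Iterate the busy-window recurrence R = C + sum_j ceil(R/T_j) * C_j
    until it reaches a fixed point. `task` needs key "C" (WCET); each
    higher-priority task needs "C" and "T" (period). Returns None if the
    iteration exceeds `limit` (i.e., the task is deemed unschedulable)."""
    r = task["C"]
    while True:
        nxt = task["C"] + sum(math.ceil(r / hp["T"]) * hp["C"] for hp in higher_prio)
        if nxt == r:
            return r
        if nxt > limit:
            return None
        r = nxt

hps = [{"C": 1, "T": 4}, {"C": 2, "T": 6}]
print(response_time({"C": 3}, hps))  # -> 10
```

The chain-aware analysis in the demo refines exactly this interference term by exploiting precedence between tasks of the same chain; the sketch above is the textbook starting point only.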
Citations: 0
Demo Abstract: Demonstration of the FMTV 2016 Timing Verification Challenge
Pub Date: 2016-04-11 DOI: 10.1109/RTAS.2016.7461330
A. Hamann, D. Ziegenbein, S. Kramer, M. Lukasiewycz
The complex dynamic behavior of automotive software systems, in particular engine management, in combination with emerging multi-core execution platforms, has significantly increased the problem space for timing analysis methods. As a result, the risk of divergence between academic research and industrial practice is currently increasing. Therefore, we provided a concrete automotive benchmark for the Formal Methods for Timing Verification (FMTV) challenge 2016 (https://waters2016.inria.fr/challenge/), a full-blown performance model of a modern engine management system (downloadable at http://ecrts.eit.uni-kl.de/forum/viewtopic.php?f=27&t=62), with the goal to challenge existing timing analysis approaches with respect to their expressiveness and precision. In the demo session we will present the performance model of the engine management system using the Amalthea tool (http://www.amalthea-project.org/). Furthermore, we will show the model in action using professional timing tools such as those from Symtavision (https://www.symtavision.com/), Timing Architects (http://www.timing-architects.com/), and Inchron (https://www.inchron.de/). Thereby, the focus will lie on determining tight end-to-end latency bounds for a set of given cause-effect chains. This is challenging since the dynamic behavior of engine management software is quite complex and contains mechanisms that explore the limits of existing academic approaches: preemptive and cooperative priority-based scheduling; periodic, sporadic, and engine-synchronous tasks; a multi-core platform with distributed cause-effect chains including cross-core communication; and label (i.e., data) placement-dependent execution times of runnables. Overall, the demo gives an impression of the current state of practice in industrial product development and serves as a baseline for further academic research.
Citations: 8
Precise Cache Timing Analysis via Symbolic Execution
Pub Date : 2016-04-11 DOI: 10.1109/RTAS.2016.7461358
D. Chu, J. Jaffar, Rasool Maghareh
We present a framework for WCET analysis of programs with emphasis on cache micro-architecture. Such an analysis is challenging primarily because of the timing model of a dynamic nature, that is, the timing of a basic block is heavily dependent on the context in which it is executed. At its core, our algorithm is based on symbolic execution, and an analysis is obtained by locating the "longest" symbolic execution path. Clearly a challenge is the intractable number of paths in the symbolic execution tree. Traditionally this challenge is met by performing some form of abstraction in the path generation process but this leads to a loss of path-sensitivity and thus precision in the analysis. The key feature of our algorithm is the ability for reuse. This is critical for maintaining a high-level of path-sensitivity, which in turn produces significantly increased accuracy. In other words, reuse allows scalability in path-sensitive exploration. Finally, we present an experimental evaluation on well known benchmarks in order to show two things: that systematic path-sensitivity in fact brings significant accuracy gains, and that the algorithm still scales well.
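The role of reuse in keeping path-sensitive exploration tractable can be sketched with a toy analogue (hypothetical CFG and costs, not the paper's algorithm): the timing of each block depends on an abstract cache context, and per-(node, context) summaries are memoized so that identical sub-explorations are never repeated.

```python
from functools import lru_cache

# Toy control-flow graph (hypothetical): node -> list of successor nodes.
CFG = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

# Context-dependent block timing: (cost on cache miss, cost on cache hit,
# whether the shared line is cached after the block executes).
COST = {
    "A": (10, 10, True),
    "B": (8, 3, True),
    "C": (6, 6, False),  # C evicts the line
    "D": (9, 2, True),
}

@lru_cache(maxsize=None)  # "reuse": summaries keyed by (node, cache context)
def wcet(node, line_cached):
    miss, hit, caches_line = COST[node]
    cost = hit if line_cached else miss
    successors = CFG[node]
    if not successors:
        return cost
    # Longest path over all successors, propagating the cache context.
    return cost + max(wcet(s, caches_line) for s in successors)

print(wcet("A", False))  # 25: the A -> C -> D path dominates
```

Without the context in the memoization key, the hit/miss distinction at `D` would be lost, which is exactly the path-sensitivity the paper argues abstraction-based analyses give up.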
Citations: 25
Analysis and Implementation of Global Preemptive Fixed-Priority Scheduling with Dynamic Cache Allocation
Pub Date : 2016-04-11 DOI: 10.1109/RTAS.2016.7461322
Meng Xu, L. T. Phan, Hyon-Young Choi, Insup Lee
We introduce gFPca, a cache-aware global pre-emptive fixed-priority (FP) scheduling algorithm with dynamic cache allocation for multicore systems, and we present its analysis and implementation. We introduce a new overhead-aware analysis that integrates several novel ideas to safely and tightly account for the cache overhead. Our evaluation shows that the proposed overhead-accounting approach is highly accurate, and that gFPca improves the schedulability of cache-intensive tasksets substantially compared to the cache-agnostic global FP algorithm. Our evaluation also shows that gFPca outperforms the existing cache-aware non- preemptive global FP algorithm in most cases. Through our implementation and empirical evaluation, we demonstrate the feasibility of cache-aware global scheduling with dynamic cache allocation and highlight scenarios in which gFPca is especially useful in practice.
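The core dispatching idea can be sketched as follows (a simplified toy with hypothetical task parameters; the actual gFPca algorithm and its overhead-aware analysis are considerably more involved): at each scheduling point, ready tasks are considered in priority order, and a task is dispatched only if both a core and enough cache partitions are free, so a lower-priority task may run while a higher-priority one is cache-blocked.

```python
# Hypothetical ready queue: (task name, priority, cache partitions needed);
# a lower priority number means higher priority.

def schedule(ready, num_cores, total_partitions):
    """One scheduling decision in the spirit of cache-aware global FP:
    dispatch in priority order, subject to core AND cache availability."""
    running, free_cores, free_parts = [], num_cores, total_partitions
    for name, _prio, parts in sorted(ready, key=lambda t: t[1]):
        if free_cores > 0 and parts <= free_parts:
            running.append(name)
            free_cores -= 1
            free_parts -= parts
    return running

ready = [("t1", 1, 4), ("t2", 2, 6), ("t3", 3, 2)]
print(schedule(ready, num_cores=2, total_partitions=8))  # ['t1', 't3']
```

In the example, `t2` is blocked because only 4 of 8 partitions remain after `t1` is dispatched, so the lower-priority `t3` runs instead; accounting safely for the resulting overheads is what the paper's analysis addresses.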
Citations: 42
Journal
2016 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS)