TaskShuffler: A Schedule Randomization Protocol for Obfuscation against Timing Inference Attacks in Real-Time Systems
Man-Ki Yoon, Sibin Mohan, Chien-Ying Chen, L. Sha
2016 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS) | Pub Date: 2016-04-11 | DOI: 10.1109/RTAS.2016.7461362

The high degree of predictability in real-time systems makes it possible for adversaries to launch timing inference attacks such as those based on side-channels and covert-channels. We present TaskShuffler, a schedule obfuscation method aimed at randomizing the schedule for such systems while still providing the real-time guarantees that are necessary for their safe operation. This paper also analyzes the effect of these mechanisms by presenting schedule entropy, a metric to measure the uncertainty (as perceived by attackers) introduced by TaskShuffler. These mechanisms increase the difficulty for would-be attackers, thus improving the overall security guarantees for real-time systems.
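The notion of schedule entropy can be illustrated with ordinary Shannon entropy over observed schedules. This is a minimal sketch, not the paper's exact definition: it simply treats each observed slot sequence as a discrete outcome an attacker tries to predict, so a deterministic scheduler scores zero and a randomized one scores higher.

```python
import math
from collections import Counter

def schedule_entropy(observed_schedules):
    """Shannon entropy (in bits) of a set of observed schedules,
    each schedule being a sequence of task IDs per time slot."""
    counts = Counter(tuple(s) for s in observed_schedules)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A deterministic scheduler always emits the same slot sequence (0 bits):
fixed = [["A", "B", "A"]] * 4
# A randomized scheduler emits several distinct sequences:
shuffled = [["A", "B", "A"], ["B", "A", "A"], ["A", "A", "B"], ["B", "A", "A"]]

print(schedule_entropy(fixed))     # fully predictable
print(schedule_entropy(shuffled))  # 1.5 bits of attacker uncertainty
```

The task names and slot granularity here are illustrative; the paper's metric is defined over the scheduler's admissible randomized schedules rather than a finite observation set.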
Buffer Space Allocation for Real-Time Priority-Aware Networks
H. Kashif, Hiren D. Patel
2016 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS) | Pub Date: 2016-04-11 | DOI: 10.1109/RTAS.2016.7461324

In this work, we address the challenge of incorporating buffer space constraints in worst-case latency analysis for priority-aware networks. A priority-aware network is a wormhole-switched network-on-chip with distinct virtual channels per priority. Prior worst-case latency analyses assume that the routers have infinite buffer space allocated to the virtual channels. This assumption renders these analyses impractical when considering actual deployments. This is because an implementation of the priority-aware network imposes buffer constraints on the application. These constraints can result in back pressure on the communication, which the analyses must incorporate. Consequently, we extend a worst-case latency analysis for priority-aware networks to include buffer space constraints. We provide the theory for these extensions and prove their correctness. We experiment on a large set of synthetic benchmarks, and show that we can deploy applications on priority-aware networks with virtual channels of sizes as small as two flits. In addition, we propose a polynomial time buffer space allocation algorithm. This algorithm minimizes the buffer space required at the virtual channels while scheduling the application sets on the target priority-aware network. Our empirical evaluation shows that the proposed algorithm reduces buffer space requirements in the virtual channels by approximately 85% on average.
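The general shape of buffer minimization under a latency analysis can be sketched as a greedy shrink loop. This is only an illustration of the problem, not the paper's polynomial-time algorithm: `is_schedulable` stands in for the buffer-aware worst-case latency analysis, and the result depends on the order in which channels are visited.

```python
def minimize_buffers(channels, min_flits, is_schedulable):
    """Greedily shrink each virtual channel's buffer (in flits) while a
    user-supplied schedulability/latency test still passes."""
    alloc = dict(channels)  # start from the current, generous allocation
    for vc in alloc:
        while alloc[vc] > min_flits:
            alloc[vc] -= 1          # tentatively remove one flit of buffering
            if not is_schedulable(alloc):
                alloc[vc] += 1      # back pressure broke a deadline: undo
                break
    return alloc

# Toy stand-in analysis: "schedulable" iff total buffering >= 10 flits.
ok = lambda a: sum(a.values()) >= 10
print(minimize_buffers({"vc0": 8, "vc1": 8}, 2, ok))  # {'vc0': 2, 'vc1': 8}
```

With the two-flit floor matching the paper's observation that channels as small as two flits can suffice, the sketch shows how an analysis that accounts for back pressure directly drives the allocation downward.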
A Real-Time Scratchpad-Centric OS for Multi-Core Embedded Systems
Rohan Tabish, R. Mancuso, Saud Wasly, A. Alhammad, Sujit S. Phatak, R. Pellizzoni, M. Caccamo
2016 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS) | Pub Date: 2016-04-11 | DOI: 10.1109/RTAS.2016.7461321

Multi-core processors have replaced single-core systems in almost every segment of the industry. Unfortunately, their increased complexity often causes a loss of temporal predictability, which is a key requirement for hard real-time systems. Major sources of unpredictability are the shared low-level resources, such as the memory hierarchy and the I/O subsystem. In this paper, we approach the problem of shared resource arbitration at the OS level and propose a novel scratchpad-centric OS design for multi-core platforms. In the proposed OS, the predictable usage of shared resources across multiple cores represents a central design-time goal. Hence, we show (i) how contention-free execution of real-time tasks can be achieved on scratchpad-based architectures, and (ii) how a separation of application logic and I/O operations in the time domain can be enforced. To validate the proposed design, we implemented the proposed OS on a commercial-off-the-shelf (COTS) platform. Experiments show that this novel design delivers predictable temporal behavior to hard real-time tasks, and it improves performance up to 2.1× compared to traditional approaches.
Demo Abstract: Timing Aware Hardware Virtualization on the L4Re Microkernel Systems
A. Lackorzynski, Alexander Warg
2016 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS) | Pub Date: 2016-04-11 | DOI: 10.1109/RTAS.2016.7461335

Hardware virtualization support has found its way into real-time and embedded systems. It is paramount for an efficient concurrent execution of multiple systems on a single platform, including commodity operating systems and their applications. Isolation is a key feature for these systems, both in the spatial and temporal domain, as it allows for secure combinations of real-time and non-real-time applications. For such requirements, microkernels are a perfect fit, as they provide the foundation for building secure as well as real-time-aware systems. Lately, microkernels have gained support for hardware-provided virtualization features, morphing them into microhypervisors. In our demo, we show our open-source and commercially supported L4Re system running Linux and FreeRTOS side by side on a multi-core ARM platform. While for Linux we use the hardware features for virtualization, i.e., ARM's virtualization extensions, we revert to paravirtualization for running the FreeRTOS guest. Paravirtualization adapts the guest kernel to run as a native application on the microkernel. For simple guests that do not use advanced hardware features such as virtual memory and multiple privilege levels, virtualization is simplified and the state of a virtual machine is significantly reduced, improving interrupt delivery and context-switching latency. Both guests as well as the native application drive LEDs to exemplify steering actual devices as well as to show their liveness. Taking down the Linux guest will not disturb the others.
Criticality- and Requirement-Aware Bus Arbitration for Multi-Core Mixed Criticality Systems
Mohamed Hassan, Hiren D. Patel
2016 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS) | Pub Date: 2016-04-11 | DOI: 10.1109/RTAS.2016.7461327

This work presents CArb, an arbiter for controlling accesses to the shared memory bus in multi-core mixed criticality systems. CArb is a requirement-aware arbiter that optimally allocates service to tasks based on their requirements. It is also criticality-aware, since it incorporates criticality as a first-class principle in arbitration decisions. CArb supports any number of criticality levels and does not impose any restrictions on mapping tasks to processors. Hence, it operates in tandem with existing processor scheduling policies. In addition, CArb is able to dynamically adapt memory bus arbitration at run time to respond to increases in the monitored execution times of tasks. Utilizing this adaptation, CArb is able to offset these increases, hence postponing the system's need to switch to a degraded mode. We prototype CArb, and evaluate it with an avionics case study from Honeywell as well as synthetic experiments.
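One way to picture requirement-aware bus arbitration is a credit-based slot table that serves requesters in proportion to their stated bandwidth requirements. This sketch is purely illustrative: CArb's actual allocation is driven by its optimality analysis and criticality levels, neither of which is modeled here, and the task names are hypothetical.

```python
def build_slot_table(requirements, frame_len):
    """Fill a TDM-style arbitration frame, giving each requester slots
    in proportion to its bandwidth requirement (weighted credit scheme)."""
    total = sum(requirements.values())
    table, credit = [], {t: 0.0 for t in requirements}
    for _ in range(frame_len):
        for t in credit:                       # accrue fractional credit
            credit[t] += requirements[t] / total
        pick = max(credit, key=credit.get)     # serve largest credit
        credit[pick] -= 1.0                    # pay one slot
        table.append(pick)
    return table

# A 2:1 requirement ratio yields a 2:1 slot split within the frame.
print(build_slot_table({"hi_crit": 2, "lo_crit": 1}, 6))
```

A dynamic arbiter like CArb would additionally re-derive such an allocation at run time as monitored execution times change; this static table only shows the proportional-service idea.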
OSEK-Like Kernel Support for Engine Control Applications under EDF Scheduling
Vincenzo Apuzzo, Alessandro Biondi, G. Buttazzo
2016 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS) | Pub Date: 2016-04-11 | DOI: 10.1109/RTAS.2016.7461345

Engine control applications typically include computational activities consisting of periodic tasks, activated by timers, and engine-triggered tasks, activated at specific angular positions of the crankshaft. Such tasks are typically managed by an OSEK-compliant real-time kernel using a fixed-priority scheduler, as specified in the AUTOSAR standard adopted by much of the automotive industry. Recent theoretical results, however, have highlighted significant limitations of fixed-priority scheduling in managing engine-triggered tasks that could be solved by a dynamic scheduling policy. To address this issue, this paper proposes a new kernel implementation within the ERIKA Enterprise operating system, providing EDF scheduling for both periodic and engine-triggered tasks. The proposed kernel has been conceived to have an API similar to the AUTOSAR/OSEK standard one, limiting the effort needed to use the new kernel with an existing legacy application. The proposed kernel implementation is discussed and evaluated in terms of run-time overhead and footprint. In addition, a simulation framework is presented, providing a powerful environment for studying the execution of tasks under the proposed kernel.
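The core scheduling decision described above is simple to state: under EDF, the kernel always dispatches the ready job with the earliest absolute deadline, regardless of whether the job was released by a timer or by a crankshaft angle. A minimal sketch (task names are illustrative):

```python
def edf_pick(ready):
    """Return the ready job with the earliest absolute deadline, the
    dispatch rule of an EDF scheduler. Jobs are (deadline, name) pairs;
    timer-activated and engine-triggered jobs are treated uniformly."""
    return min(ready, key=lambda job: job[0]) if ready else None

# One engine-triggered job and two timer-activated jobs:
ready = [(12.0, "fuel_injection"), (8.5, "timer_task"), (20.0, "logger")]
print(edf_pick(ready))  # (8.5, 'timer_task')
```

A real kernel keeps the ready queue deadline-ordered so this decision is O(1) at dispatch time; the sketch only shows the selection rule itself.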
Demo Abstract: Response-Time Analysis for Task Chains in Communicating Threads with pyCPA
Johannes Schlatow, Jonas Peeck, R. Ernst
2016 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS) | Pub Date: 2016-04-11 | DOI: 10.1109/RTAS.2016.7461332

When modelling software components for timing analysis, we typically encounter functional chains of tasks that lead to precedence relations. As these task chains represent a functionally-dependent sequence of operations, in real-time systems there is usually a requirement on their end-to-end latency. When mapped to software components, functional chains often result in communicating threads. Since threads are scheduled rather than tasks, specific task chain properties arise that can be exploited for response-time analysis by extending the busy-window analysis for such task chains in static-priority preemptive systems. We implemented this analysis as an analysis extension for pyCPA, a research-grade implementation of compositional performance analysis (CPA). The major scope of this demo is to show how CPA can be reasonably performed for realistic component-based systems. It also demonstrates how research on and with CPA is conducted using the pyCPA analysis framework. In the course of this demo, we show two approaches for the extraction of an appropriate timing model: 1) derivation from a contract-based specification of the software components, and 2) a tracing-based approach suitable for black-box components. We also demonstrate how this timing model is fed into the analysis extension in order to obtain response-time results for the task chains of interest. Finally, we present how the developed analysis extension speeds up the CPA and therefore enables an automated design-space exploration and optimisation of the threads' priority assignments in order to satisfy the pre-defined latency requirements.
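The busy-window analysis that the extension builds on can be sketched for a single task: the response time is the smallest fixed point of R = C + Σ_j ⌈R/T_j⌉·C_j over higher-priority interferers. This is the classic single-task recurrence, not the chain-aware extension the demo presents.

```python
import math

def response_time(C, D, hp):
    """Fixed-priority busy-window iteration for one task with WCET C and
    deadline D; hp is a list of (Cj, Tj) higher-priority (WCET, period)
    pairs. Returns the converged response time, or None if R exceeds D."""
    R = C
    while R <= D:
        nxt = C + sum(math.ceil(R / Tj) * Cj for Cj, Tj in hp)
        if nxt == R:
            return R       # fixed point reached
        R = nxt            # interference grew; iterate again
    return None            # deadline miss: task is not schedulable

# WCET 2, deadline 20, two higher-priority tasks (C=1,T=4) and (C=2,T=10):
print(response_time(2, 20, [(1, 4), (2, 10)]))  # 6
```

Chains complicate this picture because consecutive chain tasks inside one thread cannot preempt each other, which is exactly the property the pyCPA extension exploits to tighten the bound.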
Demo Abstract: Demonstration of the FMTV 2016 Timing Verification Challenge
A. Hamann, D. Ziegenbein, S. Kramer, M. Lukasiewycz
2016 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS) | Pub Date: 2016-04-11 | DOI: 10.1109/RTAS.2016.7461330

The complex dynamic behavior of automotive software systems, in particular engine management, in combination with emerging multi-core execution platforms, has significantly increased the problem space for timing analysis methods. As a result, the risk of divergence between academic research and industrial practice is currently increasing. Therefore, we provided a concrete automotive benchmark for the Formal Methods for Timing Verification (FMTV) challenge 2016 (https://waters2016.inria.fr/challenge/), a full-blown performance model of a modern engine management system (downloadable at http://ecrts.eit.uni-kl.de/forum/viewtopic.php?f=27&t=62), with the goal of challenging existing timing analysis approaches with respect to their expressiveness and precision. In the demo session we will present the performance model of the engine management system using the Amalthea tool (http://www.amalthea-project.org/). Furthermore, we will show the model in action using professional timing tools such as those from Symtavision (https://www.symtavision.com/), Timing Architects (http://www.timing-architects.com/), and Inchron (https://www.inchron.de/). Thereby, the focus will lie on determining tight end-to-end latency bounds for a set of given cause-effect chains. This is challenging, since the dynamic behavior of engine management software is quite complex and contains mechanisms that explore the limits of existing academic approaches: preemptive and cooperative priority-based scheduling; periodic, sporadic, and engine-synchronous tasks; a multi-core platform with distributed cause-effect chains including cross-core communication; and label (i.e., data) placement-dependent execution times of runnables. Overall, the demo gives an impression of the current state of practice in industrial product development, and serves as a baseline for further academic research.
Precise Cache Timing Analysis via Symbolic Execution
D. Chu, J. Jaffar, Rasool Maghareh
2016 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS) | Pub Date: 2016-04-11 | DOI: 10.1109/RTAS.2016.7461358

We present a framework for WCET analysis of programs with emphasis on cache micro-architecture. Such an analysis is challenging primarily because the timing model is dynamic in nature: the timing of a basic block is heavily dependent on the context in which it is executed. At its core, our algorithm is based on symbolic execution, and an analysis is obtained by locating the "longest" symbolic execution path. Clearly a challenge is the intractable number of paths in the symbolic execution tree. Traditionally this challenge is met by performing some form of abstraction in the path generation process, but this leads to a loss of path-sensitivity and thus precision in the analysis. The key feature of our algorithm is its ability to reuse. This is critical for maintaining a high level of path-sensitivity, which in turn produces significantly increased accuracy. In other words, reuse allows scalability in path-sensitive exploration. Finally, we present an experimental evaluation on well-known benchmarks in order to show two things: that systematic path-sensitivity in fact brings significant accuracy gains, and that the algorithm still scales well.
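The reuse idea can be illustrated on a toy control-flow graph: once the longest cost from a join point to the exit has been computed, every path reaching that join reuses the summary instead of re-exploring the subtree. This sketch uses plain memoization on a hypothetical CFG; the paper's reuse is more sophisticated, since a summary is only sound under the path condition and cache state it was computed for.

```python
from functools import lru_cache

# Toy CFG: node -> list of (successor, cost of taking that edge/block).
cfg = {
    "entry": [("a", 2), ("b", 5)],
    "a":     [("join", 3)],
    "b":     [("join", 1)],
    "join":  [("exit", 4)],
    "exit":  [],
}

@lru_cache(maxsize=None)
def longest(node):
    """Longest-path cost from `node` to the exit. The cache stands in for
    the paper's reuse: the subtree from `join` onward is analyzed once
    and its summary is reused by both paths that reach it."""
    return max((cost + longest(succ) for succ, cost in cfg[node]), default=0)

print(longest("entry"))  # 10, via entry -> b -> join -> exit
```

Unconditional memoization like this sacrifices exactly the path-sensitivity the paper preserves; it is shown only to make the exponential-to-polynomial effect of reuse concrete.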
Analysis and Implementation of Global Preemptive Fixed-Priority Scheduling with Dynamic Cache Allocation
Meng Xu, L. T. Phan, Hyon-Young Choi, Insup Lee
2016 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS) | Pub Date: 2016-04-11 | DOI: 10.1109/RTAS.2016.7461322

We introduce gFPca, a cache-aware global preemptive fixed-priority (FP) scheduling algorithm with dynamic cache allocation for multicore systems, and we present its analysis and implementation. We introduce a new overhead-aware analysis that integrates several novel ideas to safely and tightly account for the cache overhead. Our evaluation shows that the proposed overhead-accounting approach is highly accurate, and that gFPca improves the schedulability of cache-intensive tasksets substantially compared to the cache-agnostic global FP algorithm. Our evaluation also shows that gFPca outperforms the existing cache-aware non-preemptive global FP algorithm in most cases. Through our implementation and empirical evaluation, we demonstrate the feasibility of cache-aware global scheduling with dynamic cache allocation and highlight scenarios in which gFPca is especially useful in practice.
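The baseline that gFPca extends is plain global fixed-priority dispatch: at every scheduling event, the m highest-priority ready jobs run on the m cores. A minimal sketch of that baseline decision (gFPca additionally assigns each picked job a cache partition, which is omitted here):

```python
def gfp_pick(ready, cores):
    """Global fixed-priority dispatch: run the `cores` highest-priority
    ready jobs (lower number = higher priority). Jobs are
    (priority, name) pairs; returned in priority order."""
    return sorted(ready, key=lambda job: job[0])[:cores]

# Four ready jobs competing for two cores:
ready = [(3, "t3"), (1, "t1"), (4, "t4"), (2, "t2")]
print(gfp_pick(ready, 2))  # [(1, 't1'), (2, 't2')]
```

The cache-aware step that the paper adds on top of this rule is what makes the analysis hard: a job may only be dispatched if a sufficient cache allocation is available, so cache contention can delay even high-priority jobs, and the overhead-aware analysis must account for the resulting reallocation costs.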