Task placement and selection of data consistency mechanisms for real-time multicore applications
Pub Date: 2015-04-13 | DOI: 10.1109/RTAS.2015.7108440
Zaid Al-bayati, Youcheng Sun, Haibo Zeng, M. Natale, Qi Zhu, B. Meyer
Multicores are today used in automotive, control, and avionics systems that support real-time functionality. When real-time tasks allocated on different cores cooperate through shared communication resources, those resources must be protected by mechanisms that guarantee mutually exclusive access with bounded worst-case blocking time. Lock-based mechanisms such as MPCP and MSRP have been developed to fulfill this demand, and current research tackles the problem of finding an optimal task placement on multicores that meets deadlines despite the resulting blocking times. In this paper, we propose a resource-aware task allocation algorithm for systems that use MSRP to protect shared resources. Furthermore, we leverage the additional opportunity provided by wait-free methods as an alternative data consistency mechanism for the case in which the shared resource is communication or state memory. An algorithm that performs both task allocation and data consistency mechanism selection (MSRP or wait-free) is proposed. The selective use of wait-free methods can significantly extend the range of schedulable systems at the cost of memory.
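To make the MSRP-versus-wait-free trade-off concrete, the sketch below shows one plausible greedy selection loop under stated assumptions; it is not the allocation-and-selection algorithm from the paper. All names (`blocking`, `buffer_cost`, `schedulable`) are illustrative placeholders: resources start out lock-protected and are converted to wait-free buffers, most blocking-intensive first, as long as the memory budget allows.

```python
# Hypothetical greedy selection (illustration only, not the paper's algorithm).

def select_mechanisms(resources, memory_budget, blocking, buffer_cost, schedulable):
    """resources: shared-resource ids, all initially MSRP-protected.
    blocking[r]: worst-case blocking the resource induces under MSRP.
    buffer_cost[r]: extra memory needed to make r wait-free.
    schedulable(wait_free): schedulability test for a given wait-free set.
    Returns (wait_free_set, feasible)."""
    wait_free, used = set(), 0
    # Convert the most blocking-intensive resources first.
    for r in sorted(resources, key=lambda r: blocking[r], reverse=True):
        if schedulable(wait_free):
            return wait_free, True
        if used + buffer_cost[r] <= memory_budget:
            wait_free.add(r)              # trade memory for eliminated blocking
            used += buffer_cost[r]
    return wait_free, schedulable(wait_free)

# Tiny usage example with made-up numbers:
res = ["r1", "r2", "r3"]
blk = {"r1": 40, "r2": 120, "r3": 15}        # blocking contribution per resource
mem = {"r1": 256, "r2": 1024, "r3": 128}     # bytes per wait-free buffer
ok = lambda wf: sum(b for r, b in blk.items() if r not in wf) <= 60
print(select_mechanisms(res, memory_budget=1500, blocking=blk,
                        buffer_cost=mem, schedulable=ok))   # ({'r2'}, True)
```

A full implementation would interleave this choice with task placement, since moving a task to another core changes which resources become global under MSRP and therefore changes their blocking terms.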
{"title":"Task placement and selection of data consistency mechanisms for real-time multicore applications","authors":"Zaid Al-bayati, Youcheng Sun, Haibo Zeng, M. Natale, Qi Zhu, B. Meyer","doi":"10.1109/RTAS.2015.7108440","DOIUrl":"https://doi.org/10.1109/RTAS.2015.7108440","url":null,"abstract":"Multicores are today used in automotive, controls and avionics systems supporting real-time functionality. When real-time tasks allocated on different cores cooperate through the use of shared communication resources, they need to be protected by mechanisms that guarantee access in a mutual exclusive way with bounded worst-case blocking time. Lock-based mechanisms such as MPCP and MSRP have been developed to fulfill this demand, and research papers are today tackling the problem of finding the optimal task placement in multicores while trying to meet the deadlines against blocking times. In this paper, we propose a resource-aware task allocation algorithm for systems that use MSRP to protect shared resources. Furthermore, we leverage the additional opportunity provided by wait-free methods as an alternative data consistency mechanism for the case that the shared resource is communication or state memory. An algorithm that performs both task allocation and data consistency mechanism (MSRP or wait-free) selection is proposed. The selective use of wait-free methods can significantly extend the range of schedulable systems at the cost of memory.","PeriodicalId":320300,"journal":{"name":"21st IEEE Real-Time and Embedded Technology and Applications Symposium","volume":"107 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115773166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Demo abstract: A multithreaded Arduino system for embedded computing
Pub Date: 2015-04-13 | DOI: 10.1109/RTAS.2015.7108460
Zhuoqun Cheng, Ye Li, R. West
Arduino [1] is an open-source platform that offers a clear and simple environment for physical computing. It is now widely used in modern robotics and Internet-of-Things (IoT) applications, due in part to its low cost, ease of programming, and rapid prototyping capabilities. Sensors and actuators can easily be connected to the analog and digital I/O pins of an Arduino device, which features an on-board microcontroller programmed using the Arduino API. We present Qduino, a multithreaded system developed for Arduino-compatible boards. It is built upon our Quest real-time operating system kernel [4] and new Arduino-compatible boards.
{"title":"Demo abstract: A multithreaded arduino system for embedded computing","authors":"Zhuoqun Cheng, Ye Li, R. West","doi":"10.1109/RTAS.2015.7108460","DOIUrl":"https://doi.org/10.1109/RTAS.2015.7108460","url":null,"abstract":"Arduino [1] is an open source platform that offers a clear and simple environment for physical computing. It is now widely used in modern robotics and Internet-of-Things (IoT) applications, due in part to its low-cost, ease of programming, and rapid prototyping capabilities. Sensors and actuators can easily be connected to the analog and digital I/O pins of an Arduino device, which features an on-board microcontroller programmed using the Arduino API. We present Qduino, a system developed for Arduino compatible boards. It is built upon our Quest realtime operating system kernel [4] and new Arduino-compatible boards.","PeriodicalId":320300,"journal":{"name":"21st IEEE Real-Time and Embedded Technology and Applications Symposium","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130668725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

When thermal control meets sensor noise: analysis of noise-induced temperature error
Pub Date: 2015-04-13 | DOI: 10.1109/RTAS.2015.7108421
Dohwan Kim, Kyung-Joon Park, Y. Eun, S. Son, Chenyang Lu
Thermal control is critical for real-time systems, as overheated processors can suffer serious performance degradation or even system breakdown due to hardware throttling. The major challenges in thermal control for real-time systems are (i) the need to enforce both real-time and thermal constraints, (ii) uncertain system dynamics, and (iii) thermal sensor noise. Previous studies have resolved the first two, but the practical issue of sensor noise has not yet been properly addressed. In this paper, we introduce a novel thermal control algorithm that appropriately handles thermal sensor noise. Our key observation is that even a small zero-mean sensor noise can induce a significant steady-state error between the target and the actual temperature of a processor. This steady-state error is contrary to the intuition that zero-mean sensor noise induces only zero-mean fluctuations. We show that an intuitive attempt to resolve this situation is not effective. Through a rigorous analysis, we explain the underlying mechanism and quantify the noise-induced error in closed form in terms of the noise statistics and system parameters. Based on this analysis, we propose a simple and effective solution for eliminating the error and maintaining the desired processor temperature. Through extensive simulations, we demonstrate the advantages of the proposed algorithm, referred to as Thermal Control under Utilization Bound with Virtual Saturation (TCUB-VS).
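The rectification effect the authors describe can be reproduced with a deliberately simple toy model; the sketch below is not the thermal model or the TCUB-VS controller from the paper. A first-order plant is regulated by a clamped proportional controller, and all parameter values are assumptions chosen so that the noise repeatedly drives the clamped control action into saturation, which biases the long-run average temperature.

```python
import random

# Toy first-order plant with a clamped proportional controller (illustration
# only; not the thermal model or the TCUB-VS controller from the paper).
def mean_steady_temp(noise_std, steps=200_000, target=70.0, seed=0):
    ambient, alpha, gain = 40.0, 0.02, 1.0   # assumed plant: T += alpha*(amb-T) + gain*u
    k_p, u_min, u_max = 2.0, 0.0, 1.0        # proportional gain and actuation limits
    rng = random.Random(seed)
    temp, acc, count = 55.0, 0.0, 0
    for i in range(steps):
        measured = temp + rng.gauss(0.0, noise_std)             # noisy sensor reading
        u = min(max(k_p * (target - measured), u_min), u_max)   # clamp = nonlinearity
        temp += alpha * (ambient - temp) + gain * u
        if i > steps // 2:                                      # average late samples
            acc += temp
            count += 1
    return acc / count

print("noise-free :", round(mean_steady_temp(0.0), 2))   # settles near the target
print("noisy      :", round(mean_steady_temp(3.0), 2))   # biased away from the target
```

With zero noise the clamp is never active and the loop behaves linearly; with noise, the control action is rectified at the actuation limits, so the zero-mean disturbance leaves a non-zero offset in the average temperature.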
{"title":"When thermal control meets sensor noise: analysis of noise-induced temperature error","authors":"Dohwan Kim, Kyung-Joon Park, Y. Eun, S. Son, Chenyang Lu","doi":"10.1109/RTAS.2015.7108421","DOIUrl":"https://doi.org/10.1109/RTAS.2015.7108421","url":null,"abstract":"Thermal control is critical for real-time systems as overheated processors can result in serious performance degradation or even system breakdown due to hardware throttling. The major challenges in thermal control for real-time systems are (i) the need to enforce both real-time and thermal constraints; (ii) uncertain system dynamics; and (iii) thermal sensor noise. Previous studies have resolved the first two, but the practical issue of sensor noise has not been properly addressed yet. In this paper, we introduce a novel thermal control algorithm that can appropriately handle thermal sensor noise. Our key observation is that even a small zero-mean sensor noise can induce a significant steady-state error between the target and the actual temperature of a processor. This steady-state error is contrary to our intuition that zero-mean sensor noise induces zero-mean fluctuations. We show that an intuitive attempt to resolve this unusual situation is not effective at all. By a rigorous approach, we analyze the underlying mechanism and quantify the noised-induced error in a closed form in terms of noise statistics and system parameters. Based on our analysis, we propose a simple and effective solution for eliminating the error and maintaining the desired processor temperature. Through extensive simulations, we show the advantages of our proposed algorithm, referred to as Thermal Control under Utilization Bound with Virtual Saturation (TCUB-VS).","PeriodicalId":320300,"journal":{"name":"21st IEEE Real-Time and Embedded Technology and Applications Symposium","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123561978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

dOSEK: the design and implementation of a dependability-oriented static embedded kernel
Pub Date: 2015-04-13 | DOI: 10.1109/RTAS.2015.7108449
Martin Hoffmann, Florian Lukas, Christian J. Dietrich, D. Lohmann
Because of shrinking structure sizes and operating voltages, computing hardware exhibits an increasing susceptibility to transient hardware faults: issues previously known only from avionics systems, such as bit flips caused by cosmic radiation, nowadays also affect automotive and other cost-sensitive “ground-level” control systems. For such cost-sensitive systems, many software-based measures have been suggested to harden applications against transient effects. However, all of these measures assume that the underlying operating system works reliably in all cases. We present software-based concepts for constructing an operating system that provides a reliable computing base even on unreliable hardware. Our design rests on two pillars: first, strict fault avoidance by static tailoring and the elimination of susceptible indirections; second, reliable fault detection by fine-grained arithmetic encoding of the complete kernel execution path. Compared to an industry-grade off-the-shelf RTOS, the resulting dOSEK kernel achieves a robustness improvement of four orders of magnitude. Our results are based on extensive fault-injection campaigns that cover the entire space of single-bit faults in random-access memory and registers.
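As a flavor of what arithmetic encoding buys, here is a minimal AN-code sketch; it illustrates the general principle only and is not dOSEK's ANB-encoded kernel. The constant `A` is an arbitrary example value: state is stored as a multiple of A, additions preserve that property, and a single bit flip changes a value by a power of two, which is never a multiple of an odd A, so the consistency check catches it.

```python
# Minimal AN-code illustration (the general principle behind arithmetic
# encoding, not dOSEK's ANB-encoded kernel). A is an arbitrary example value.
A = 65521   # odd constant; encoded values are multiples of A

def encode(x):
    return x * A

def decode(v):
    assert v % A == 0, "transient fault detected: value lost its A-multiple property"
    return v // A

def add_encoded(v1, v2):
    return v1 + v2      # the sum of two multiples of A is again a multiple of A

a, b = encode(7), encode(35)
print(decode(add_encoded(a, b)))     # 42: consistency check passes

corrupted = a ^ (1 << 13)            # single bit flip changes the value by 2**13,
try:                                 # which is never a multiple of an odd A
    decode(corrupted)
except AssertionError as e:
    print(e)
```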
{"title":"dOSEK: the design and implementation of a dependability-oriented static embedded kernel","authors":"Martin Hoffmann, Florian Lukas, Christian J. Dietrich, D. Lohmann","doi":"10.1109/RTAS.2015.7108449","DOIUrl":"https://doi.org/10.1109/RTAS.2015.7108449","url":null,"abstract":"Because of shrinking structure sizes and operating voltages, computing hardware exhibits an increasing susceptibility against transient hardware faults: Issues previously only known from avionics systems, such as bit flips caused by cosmic radiation, nowadays also affect automotive and other cost-sensitive “ground-level” control systems. For such cost-sensitive systems, many software-based measures have been suggested to harden applications against transient effects. However, all these measures assume that the underlying operating system works reliably in all cases. We present software-based concepts for constructing an operating system that provides a reliable computing base even on unreliable hardware. Our design is based on two pillars: First, strict fault avoidance by static tailoring and elimination of susceptible indirections. Second, reliable fault detection by fine-grained arithmetic encoding of the complete kernel execution path. Compared to an industry-grade off-the-shelf RTOS, our resulting dOSEK kernel thereby achieves a robustness improvement by four orders of magnitude. Our results are based on extensive fault-injection campaigns that cover the entire space of single-bit faults in random-access memory and registers.","PeriodicalId":320300,"journal":{"name":"21st IEEE Real-Time and Embedded Technology and Applications Symposium","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123754355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

A predictable and command-level priority-based DRAM controller for mixed-criticality systems
Pub Date: 2015-04-13 | DOI: 10.1109/RTAS.2015.7108455
Hokeun Kim, David Broman, Edward A. Lee, Michael Zimmer, Aviral Shrivastava, Junkwang Oh
Mixed-criticality systems have tasks with different criticality levels running on the same hardware platform. Today's DRAM controllers cannot adequately satisfy the often conflicting requirements of tightly bounded worst-case latency for critical tasks and high performance for non-critical real-time tasks. We propose a DRAM memory controller that meets both requirements by using bank-aware address mapping and DRAM command-level priority-based scheduling with preemption. Many standard DRAM controllers can be extended with our approach, incurring no performance penalty when critical tasks are not generating DRAM requests. We evaluate our approach by replaying memory traces obtained from executing benchmarks on an ARM ISA-based processor with caches, simulated on the gem5 architecture simulator. We compare our approach against previous TDM-based approaches, showing that the proposed memory controller achieves dramatically higher performance for non-critical tasks without any significant impact on the worst-case latency of critical tasks.
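The sketch below illustrates command-level prioritization in the abstract's sense with a toy arbiter; it is not the proposed controller and ignores DRAM timing constraints, bank state, and address mapping. Queued commands carry a criticality level, and a critical command that arrives later is still issued ahead of buffered non-critical ones.

```python
import heapq

# Toy command arbiter (illustration of command-level prioritization only).
CRITICAL, NON_CRITICAL = 0, 1        # lower number = higher priority

class CommandQueue:
    def __init__(self):
        self._heap, self._seq = [], 0   # _seq keeps FIFO order within a priority

    def push(self, priority, cmd):
        heapq.heappush(self._heap, (priority, self._seq, cmd))
        self._seq += 1

    def issue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = CommandQueue()
q.push(NON_CRITICAL, "ACT bank3 row17")
q.push(NON_CRITICAL, "RD  bank3 col8")
q.push(CRITICAL,     "ACT bank0 row2")   # arrives last, issued first
print(q.issue())                         # -> ACT bank0 row2
print(q.issue())                         # -> ACT bank3 row17
```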
{"title":"A predictable and command-level priority-based DRAM controller for mixed-criticality systems","authors":"Hokeun Kim, David Broman, Edward A. Lee, Michael Zimmer, Aviral Shrivastava, Junkwang Oh","doi":"10.1109/RTAS.2015.7108455","DOIUrl":"https://doi.org/10.1109/RTAS.2015.7108455","url":null,"abstract":"Mixed-criticality systems have tasks with different criticality levels running on the same hardware platform. Today's DRAM controllers cannot adequately satisfy the often conflicting requirements of tightly bounded worst-case latency for critical tasks and high performance for non-critical real-time tasks. We propose a DRAM memory controller that meets these requirements by using bank-aware address mapping and DRAM command-level priority-based scheduling with preemption. Many standard DRAM controllers can be extended with our approach, incurring no performance penalty when critical tasks are not generating DRAM requests. Our approach is evaluated by replaying memory traces obtained from executing benchmarks on an ARM ISA-based processor with caches, which is simulated on the gem5 architecture simulator. We compare our approach against previous TDM-based approaches, showing that our proposed memory controller achieves dramatically higher performance for non-critical tasks, without any significant impact on the worstcase latency of critical tasks.","PeriodicalId":320300,"journal":{"name":"21st IEEE Real-Time and Embedded Technology and Applications Symposium","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131012708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

An efficient configuration methodology for time-division multiplexed single resources
Pub Date: 2015-04-13 | DOI: 10.1109/RTAS.2015.7108439
B. Akesson, Anna Minaeva, P. Šůcha, Andrew Nelson, Z. Hanzálek
Complex contemporary systems contain multiple applications, some of which have firm real-time requirements while others do not. These applications are deployed on multi-core platforms with shared resources, such as processors, interconnect, and memories. Resource sharing causes contention between the sharing applications, which must be resolved by a resource arbiter. Time-Division Multiplexing (TDM) is a commonly used arbiter, but it is challenging to configure such that the bandwidth and latency requirements of the real-time resource clients are satisfied while minimizing their total allocation to improve the performance of non-real-time clients. This work addresses the problem by presenting an efficient TDM configuration methodology. The five main contributions are: 1) an analysis that derives a bandwidth and latency guarantee for a TDM schedule with an arbitrary slot assignment; 2) a formulation of the TDM configuration problem and a proof that it is NP-hard; 3) an integer linear programming model that optimally solves the configuration problem by exhaustively evaluating all possible TDM schedule sizes; 4) a heuristic method for choosing candidate schedule sizes that substantially reduces computation time with only a slight decrease in efficiency; 5) an experimental evaluation of the methodology that examines its scalability and quantifies the trade-off between computation time and total allocation for the optimal and heuristic algorithms. The approach is also demonstrated on a case study of an HD video and graphics processing system in which a memory controller is shared by a number of processing elements.
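As a back-of-the-envelope illustration of why slot placement matters for a fixed bandwidth, the sketch below computes a simple conservative bandwidth/latency pair for an arbitrary slot assignment; it is not the tighter analysis derived in the paper, and the frame and slot sizes are assumptions.

```python
# Simple conservative TDM guarantees for an arbitrary slot assignment.
# frame_size: number of slots in the repeating TDM frame.
# slots: sorted slot indices owned by one client, each serving one request.

def tdm_guarantees(frame_size, slots):
    bandwidth = len(slots) / frame_size
    # Worst case: a request just misses an owned slot and must wait for the
    # next one; the wait is the largest circular gap between owned slots.
    gaps = [(slots[(i + 1) % len(slots)] - s) % frame_size or frame_size
            for i, s in enumerate(slots)]
    worst_case_latency = max(gaps) + 1        # waiting slots + 1 service slot
    return bandwidth, worst_case_latency

# Same bandwidth (3/12), very different latency bounds:
print(tdm_guarantees(12, [0, 4, 8]))   # evenly spread   -> (0.25, 5)
print(tdm_guarantees(12, [0, 1, 2]))   # clustered slots -> (0.25, 11)
```

The example shows why the configuration problem is more than picking an allocation size: the same number of slots can yield very different latency guarantees depending on where they sit in the frame.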
{"title":"An efficient configuration methodology for time-division multiplexed single resources","authors":"B. Akesson, Anna Minaeva, P. Šůcha, Andrew Nelson, Z. Hanzálek","doi":"10.1109/RTAS.2015.7108439","DOIUrl":"https://doi.org/10.1109/RTAS.2015.7108439","url":null,"abstract":"Complex contemporary systems contain multiple applications, some which have firm real-time requirements while others do not. These applications are deployed on multi-core platforms with shared resources, such as processors, interconnect, and memories. However, resource sharing causes contention between sharing applications that must be resolved by a resource arbiter. Time-Division Multiplexing (TDM) is a commonly used arbiter, but it is challenging to configure such that the bandwidth and latency requirements of the real-time resource clients are satisfied, while minimizing their total allocation to improve the performance of non-real-time clients. This work addresses this problem by presenting an efficient TDM configuration methodology. The five main contributions are: 1) An analysis to derive a bandwidth and latency guarantee for a TDM schedule with arbitrary slot assignment, 2) A formulation of the TDM configuration problem and a proof that it is NP-hard, 3) An integer-linear programming model that optimally solves the configuration problem by exhaustively evaluating all possible TDM schedule sizes, 4) A heuristic method to choose candidate schedule sizes that substantially reduces computation time with only a slight decrease in efficiency, 5) An experimental evaluation of the methodology that examines its scalability and quantifies the trade-off between computation time and total allocation for the optimal and the heuristic algorithms. The approach is also demonstrated on a case study of a HD video and graphics processing system, where a memory controller is shared by a number of processing elements.","PeriodicalId":320300,"journal":{"name":"21st IEEE Real-Time and Embedded Technology and Applications Symposium","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130522868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

C'Mon: a predictable monitoring infrastructure for system-level latent fault detection and recovery
Pub Date: 2015-04-13 | DOI: 10.1109/RTAS.2015.7108448
Jiguo Song, Gabriel Parmer
Embedded and real-time systems must balance many often-conflicting goals, including predictability, high utilization, efficiency, reliability, and SWaP (size, weight, and power). Reliability is particularly difficult to achieve without significantly impacting the other factors. Though reliability solutions exist at the application level, they are invalidated by system-level faults, which are particularly difficult to detect and recover from. This paper presents the C'Mon system for predictably and efficiently monitoring system-level execution and validating that it conforms to the high-level analytical models that underlie the timing guarantees of the system. Latent faults such as timing errors, incorrect scheduler decisions, unbounded priority inversions, or deadlocks are detected, the faulty component is identified, and, using previous work in system recovery, the system is brought back to a stable state - all without missing deadlines.
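The sketch below gives a minimal flavor of checking observed execution against an analytical model, using assumed per-job fields (`wcet`, `deadline`); it is not C'Mon's monitoring mechanism, which also covers scheduler decisions, priority inversions, and deadlocks rather than only per-job timing summaries.

```python
# Illustrative trace check (not C'Mon's mechanism): compare observed jobs
# against the analytical model's budgets and flag latent timing faults.

def check_trace(jobs):
    """jobs: dicts with task id, release, finish, exec_time, and the
    model-derived wcet and deadline (relative to release)."""
    violations = []
    for j in jobs:
        if j["exec_time"] > j["wcet"]:
            violations.append((j["task"], "execution exceeded modeled WCET"))
        if j["finish"] - j["release"] > j["deadline"]:
            violations.append((j["task"], "deadline miss"))
    return violations

trace = [
    {"task": "tau1", "release": 0, "finish": 4,  "exec_time": 3, "wcet": 3, "deadline": 10},
    {"task": "tau2", "release": 0, "finish": 14, "exec_time": 9, "wcet": 5, "deadline": 12},
]
print(check_trace(trace))   # tau2: WCET overrun and deadline miss
```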
{"title":"C'Mon: a predictable monitoring infrastructure for system-level latent fault detection and recovery","authors":"Jiguo Song, Gabriel Parmer","doi":"10.1109/RTAS.2015.7108448","DOIUrl":"https://doi.org/10.1109/RTAS.2015.7108448","url":null,"abstract":"Embedded and real-time systems must balance between many often conflicting goals including predictability, high utilization, efficiency, reliability, and SWaP (size, weight, and power). Reliability is particularly difficult to achieve without significantly impacting the other factors. Though reliability solutions exist for application-level, they are invalidated by system-level faults that are particularly difficult to detect and recover from. This paper presents the C'Mon system for predictably and efficiently monitoring system-level execution, and validating that it conforms with the high-level analytical models that underlie the timing guarantees of the system. Latent faults such as timing errors, incorrect scheduler decisions, unbounded priority inversions, or deadlocks are detected, the faulty component is identified, and using previous work in system recovery, the system is brought back to a stable state - all without missing deadlines.","PeriodicalId":320300,"journal":{"name":"21st IEEE Real-Time and Embedded Technology and Applications Symposium","volume":"290 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116569973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

A feedback scheduling framework for component-based soft real-time systems
Pub Date: 2015-04-13 | DOI: 10.1109/RTAS.2015.7108441
N. Khalilzad, Fanxin Kong, Xue Liu, M. Behnam, Thomas Nolte
Component-based software systems with real-time requirements are often scheduled using processor reservation techniques. Such techniques have mainly evolved around hard real-time systems, in which worst-case resource demands are considered for the reservations. In soft real-time systems, reserving the processors based on worst-case demands results in unnecessary over-allocation. In this paper, targeting soft real-time systems running on multiprocessor platforms, we focus on components whose processor demand varies during run-time. We propose a feedback scheduling framework in which processor reservations are used for scheduling components. The reservation bandwidths as well as the reservation periods are adapted using MIMO LQR controllers. We provide an allocation mechanism for distributing components over processors. The proposed framework is implemented in the TrueTime simulation tool for system identification, and we use a case study to investigate its performance in the simulation tool. Finally, the framework is implemented in the Linux kernel for practical evaluation. The evaluation results suggest that the framework can efficiently adapt the reservation parameters during run-time while imposing negligible overhead.
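To show the flavor of feedback-based reservation adaptation, here is a scalar toy version under an assumed first-order error model; the paper uses MIMO LQR controllers that adapt both bandwidth and period, which this sketch does not attempt. The gain is obtained by iterating the discrete-time Riccati equation, and all numbers are illustrative.

```python
# Scalar toy of LQR-driven budget adaptation (illustration only).

def lqr_gain(a, b, q, r, iters=1000):
    """Solve the scalar discrete-time Riccati equation by fixed-point iteration."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

a, b = 1.0, 1.0                      # error dynamics: next error = error + adjustment
k = lqr_gain(a, b, q=1.0, r=10.0)    # r > q penalizes aggressive budget changes

budget = 20.0                        # reserved budget (ms per period)
demands = [24.0, 26.0, 25.0, 18.0, 19.0, 25.0]   # observed demand per period
for d in demands:
    error = budget - d               # over/under-allocation observed this period
    budget += -k * error             # LQR feedback nudges the reservation budget
    print(f"demand {d:5.1f}  ->  new budget {budget:6.2f}")
```

The small control penalty `r` makes the budget track demand smoothly instead of jumping to each new observation, which is the basic reason for using an optimal-control formulation rather than a simple reset-to-demand rule.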
{"title":"A feedback scheduling framework for component-based soft real-time systems","authors":"N. Khalilzad, Fanxin Kong, Xue Liu, M. Behnam, Thomas Nolte","doi":"10.1109/RTAS.2015.7108441","DOIUrl":"https://doi.org/10.1109/RTAS.2015.7108441","url":null,"abstract":"Component-based software systems with real-time requirements are often scheduled using processor reservation techniques. Such techniques have mainly evolved around hard real-time systems in which worst-case resource demands are considered for the reservations. In soft real-time systems, reserv- ing the processors based on the worst-case demands results in unnecessary over-allocations. In this paper, targeting soft real-time systems running on multiprocessor platforms, we focus on components for which processor demand varies during run-time. We propose a feedback scheduling framework where processor reservations are used for scheduling components. The reservation bandwidths as well as the reservation periods are adapted using MIMO LQR controllers. We provide an allocation mechanism for distributing components over processors. The proposed framework is implemented in the TrueTime simulation tool for system identification. We use a case study to investigate the performance of our framework in the simulation tool. Finally, the framework is implemented in the Linux kernel for practical evaluations. The evaluation results suggest that the framework can efficiently adapt the reservation parameters during run-time by imposing negligible overhead.","PeriodicalId":320300,"journal":{"name":"21st IEEE Real-Time and Embedded Technology and Applications Symposium","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116248207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

A framework for scheduling DRAM memory accesses for multi-core mixed-time critical systems
Pub Date: 2015-04-13 | DOI: 10.1109/RTAS.2015.7108454
Mohamed Hassan, Hiren D. Patel, R. Pellizzoni
Mixed-time critical systems are real-time systems that accommodate both hard real-time (HRT) and soft real-time (SRT) tasks. HRT tasks mandate a guarantee on the worst-case latency, while SRT tasks have average-case bandwidth (BW) demands. Memory requests in mixed-time critical systems usually have different transaction sizes depending on whether the issuing task is HRT or SRT. For example, HRT tasks often issue requests of cache-line size, whereas SRT tasks may issue requests of several KBs; requests from multimedia cores, cores controlling network interfaces, and direct memory access (DMA) engines are obvious examples of such large requests. Based on these observations, we promote in this work a new approach to scheduling memory requests that retains locality within large requests to minimize the worst-case latency while keeping the average-case BW as high as required. To achieve this target, we introduce a novel and compact time-division-multiplexing scheduler that is adequate for mixed-time critical systems. We also present a novel framework that constructs optimal off-chip DRAM memory controller schedules for multi-core mixed-time critical systems; these schedules are loaded into the memory controller at boot time. Based on the proposed schedule, we provide a detailed static analysis that guarantees predictability. We compare the proposed controller against state-of-the-art real-time memory controllers using synthetic experiments as well as a practical use case from multimedia systems.
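The locality-versus-latency trade-off can be illustrated with a hypothetical static slot pattern, not the schedules or analysis constructed in the paper: each HRT requestor gets one slot per frame, followed by a burst of consecutive SRT slots that preserves row-buffer locality for large SRT requests, and the HRT worst-case latency bound grows with the burst length.

```python
# Hypothetical static slot pattern (not the paper's schedules or analysis).

def build_frame(n_hrt, burst):
    """One slot per HRT requestor, each followed by `burst` consecutive SRT slots."""
    frame = []
    for i in range(n_hrt):
        frame.append(f"HRT{i}")
        frame.extend(["SRT"] * burst)
    return frame

def worst_case_hrt_latency(n_hrt, burst):
    # A request that just misses its owner's slot waits one whole frame and is
    # then served in a single slot.
    return n_hrt * (1 + burst) + 1

print(build_frame(2, 2))
for burst in (1, 2, 4, 8):
    print(f"SRT burst of {burst}: HRT worst-case latency = "
          f"{worst_case_hrt_latency(2, burst)} slots")
```

Longer SRT bursts keep more accesses within an open row (better average-case bandwidth) but stretch the frame and hence the HRT latency bound, which is exactly the tension the proposed scheduler and analysis are designed to manage.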
{"title":"A framework for scheduling DRAM memory accesses for multi-core mixed-time critical systems","authors":"Mohamed Hassan, Hiren D. Patel, R. Pellizzoni","doi":"10.1109/RTAS.2015.7108454","DOIUrl":"https://doi.org/10.1109/RTAS.2015.7108454","url":null,"abstract":"Mixed-time critical systems are real-time systems that accommodate both hard real-time (HRT) and soft realtime (SRT) tasks. HRT tasks mandate a gurantee on the worstcase latency, while SRT tasks have average-case bandwidth (BW) demands. Memory requests in mixed-time critical systems usually have different transaction sizes based on whether the issuer task is HRT or SRT. For example, HRT tasks often issue requests with a cache line size. On the other side, SRT tasks may issue requests with a size of KBs. Requests from multimedia cores, cores controlling network interfaces and direct memory accesses (DMAs) are obvious examples of these large-size requests. Based on these observations, we promote in this work a new approach to schedule memory requests. This approach retains locality within large-size requests to minimize the worst-case latency, while maintaining the average-case BW as high as required. To achieve this target, we introduce a novel and compact time-division-multiplexing scheduler that is adequate for mixed-time critical systems. We also present a novel framework that constructs optimal offchip DRAM memory controller schedules for multi-core mixedtime critical systems. These schedules are loaded to the memory controller during boot-time. Based on the proposed schedule, we provide a detailed static analysis that guarantees predictability. We compare the proposed controller against state-of-the-art realtime memory controllers using synthetic experiments as well as a practical use-case from multimedia systems.","PeriodicalId":320300,"journal":{"name":"21st IEEE Real-Time and Embedded Technology and Applications Symposium","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129786686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Jfair: a scheduling algorithm to stabilize control applications
Pub Date: 2015-04-13 | DOI: 10.1109/RTAS.2015.7108417
A. Aminifar, P. Eles, Zebo Peng
Control applications are among the core applications in cyber-physical and embedded real-time systems, and jitter is typically an important factor for them. This paper investigates whether it is possible to guarantee a certain amount of jitter for a given set of applications on a shared platform. The effect of jitter on the stability of control applications and its relation to latency are discussed. The importance arises from the fact that it is considerably easier to manage the constant part of the delay (known as latency), while coping with the varying part of the delay (known as jitter) is more involved. The proposed solution guarantees certain jitter limits while not leading to overly pessimistic latency values. The results are later used in a design optimization problem to minimize the utilized resources.
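One common way to formalize the constant and varying parts of the delay is sketched below; conventions differ, and these definitions are illustrative rather than necessarily the exact ones used in the paper.

```latex
% Illustrative definitions (conventions vary; not necessarily the paper's):
% d_k is the sensing-to-actuation delay of the k-th job of a control task.
\[
  L \;=\; \min_k d_k, \qquad
  J \;=\; \max_k d_k \;-\; \min_k d_k, \qquad
  d_k \in [L,\; L + J] \quad \forall k .
\]
% The constant part L (latency) can be compensated for in the controller design,
% whereas the varying part J (jitter) must stay within the bound for which
% closed-loop stability has been verified.
```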
{"title":"Jfair: a scheduling algorithm to stabilize control applications","authors":"A. Aminifar, P. Eles, Zebo Peng","doi":"10.1109/RTAS.2015.7108417","DOIUrl":"https://doi.org/10.1109/RTAS.2015.7108417","url":null,"abstract":"Control applications are considered to be among the core applications in cyber-physical and embedded realtime systems, for which jitter is typically an important factor. This paper investigates whether it is possible to guarantee certain amount of jitter for a given set of applications on a shared platform. The effect of jitter on the stability of control applications and its relation with the latency will be discussed. The importance arises from the fact that it is considerably easier to manage the constant part of the delay (known as latency), while the process of coping with the varying part of the delay (known as jitter) is more involved. The proposed solution guarantees certain jitter limits, and at the same time does not lead to overly pessimistic latency values. The results are later used in a design optimization problem to minimize the resource utilized.","PeriodicalId":320300,"journal":{"name":"21st IEEE Real-Time and Embedded Technology and Applications Symposium","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114241809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}