Deterministic Memory Abstraction and Supporting Multicore System Architecture
Pub Date: 2017-07-17, DOI: 10.4230/LIPIcs.ECRTS.2018.1
F. Farshchi, P. K. Valsan, R. Mancuso, H. Yun
Poor time predictability of multicore processors has been a long-standing challenge in the real-time systems community. In this paper, we make the case that a fundamental problem preventing efficient and predictable real-time computing on multicore platforms is the lack of a proper memory abstraction to express memory criticality, which cuts across the layers of the system: the application, the OS, and the hardware. We therefore propose a new holistic resource management approach driven by a new memory abstraction, which we call Deterministic Memory. The key characteristic of deterministic memory is that the platform - the OS and hardware - guarantees small and tightly bounded worst-case memory access timing. In contrast, we refer to the conventional memory abstraction as best-effort memory, for which only highly pessimistic worst-case bounds can be achieved. We propose to use both abstractions together to achieve high time predictability without significantly sacrificing performance. We present deterministic memory-aware OS and architecture designs, including an OS-level page allocator and hardware-level cache and DRAM controller designs. We implement the proposed OS and architecture extensions in Linux and the gem5 simulator. Our evaluation results, using a set of synthetic and real-world benchmarks, demonstrate the feasibility and effectiveness of our approach.
{"title":"Deterministic Memory Abstraction and Supporting Multicore System Architecture","authors":"F. Farshchi, P. K. Valsan, R. Mancuso, H. Yun","doi":"10.4230/LIPIcs.ECRTS.2018.1","DOIUrl":"https://doi.org/10.4230/LIPIcs.ECRTS.2018.1","url":null,"abstract":"Poor time predictability of multicore processors has been a long-standing challenge in the real-time systems community. In this paper, we make a case that a fundamental problem that prevents efficient and predictable real-time computing on multicore is the lack of a proper memory abstraction to express memory criticality, which cuts across various layers of the system: the application, OS, and hardware. We, therefore, propose a new holistic resource management approach driven by a new memory abstraction, which we call Deterministic Memory. The key characteristic of deterministic memory is that the platform - the OS and hardware - guarantees small and tightly bounded worst-case memory access timing. In contrast, we call the conventional memory abstraction as best-effort memory in which only highly pessimistic worst-case bounds can be achieved. We propose to utilize both abstractions to achieve high time predictability but without significantly sacrificing performance. We present deterministic memory-aware OS and architecture designs, including OS-level page allocator, hardware-level cache, and DRAM controller designs. We implement the proposed OS and architecture extensions on Linux and gem5 simulator. Our evaluation results, using a set of synthetic and real-world benchmarks, demonstrate the feasibility and effectiveness of our approach.","PeriodicalId":191379,"journal":{"name":"Euromicro Conference on Real-Time Systems","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124578841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cache-Conscious Offline Real-Time Task Scheduling for Multi-Core Processors
Pub Date: 2017-06-27, DOI: 10.4230/LIPIcs.ECRTS.2017.14
V. Nguyen, D. Hardy, I. Puaut
Most schedulability analysis techniques for multi-core architectures assume a single Worst-Case Execution Time (WCET) per task, valid in all execution conditions. This assumption is too pessimistic for parallel applications running on multi-core architectures with local instruction or data caches, for which the WCET of a task depends on the cache contents at the beginning of its execution, which in turn depend on the task that was executed before the task under study. In this paper, we propose two scheduling techniques for multi-core architectures equipped with local instruction and data caches. Both techniques schedule a parallel application modeled as a task graph and generate a static, partitioned, non-preemptive schedule. We propose an optimal method, using an Integer Linear Programming (ILP) formulation, as well as a heuristic method based on list scheduling. Experimental results show that by taking into account the effect of private caches on tasks' WCETs, the length of the generated schedules is significantly reduced compared to schedules generated by cache-unaware scheduling methods. The observed schedule length reduction on streaming applications is 11% on average for the optimal method and 9% on average for the heuristic method.
{"title":"Cache-Conscious Offline Real-Time Task Scheduling for Multi-Core Processors","authors":"V. Nguyen, D. Hardy, I. Puaut","doi":"10.4230/LIPIcs.ECRTS.2017.14","DOIUrl":"https://doi.org/10.4230/LIPIcs.ECRTS.2017.14","url":null,"abstract":"Most schedulability analysis techniques for multi-core architectures assume a single Worst-Case Execution Time (WCET) per task, which is valid in all execution conditions. This assumption is too pessimistic for parallel applications running on multi-core architectures with local instruction or data caches, for which the WCET of a task depends on the cache contents at the beginning of its execution, itself depending on the task that was executed before the task under study. \u0000 \u0000In this paper, we propose two scheduling techniques for multi-core architectures equipped with local instruction and data caches. The two techniques schedule a parallel application modeled as a task graph, and generate a static partitioned non-preemptive schedule. We propose an optimal method, using an Integer Linear Programming (ILP) formulation, as well as a heuristic method based on list scheduling. Experimental results show that by taking into account the effect of private caches on tasks' WCETs, the length of generated schedules is significantly reduced as compared to schedules generated by cache-unaware scheduling methods. The observed schedule length reduction on streaming applications is 11% on average for the optimal method and 9% on average for the heuristic method.","PeriodicalId":191379,"journal":{"name":"Euromicro Conference on Real-Time Systems","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114955721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Budgeting Under-Specified Tasks for Weakly-Hard Real-Time Systems
Pub Date: 2017-06-27, DOI: 10.4230/LIPIcs.ECRTS.2017.17
Zain Alabedin Haj Hammadeh, Sophie Quinton, Marco Panunzio, R. Henia, L. Rioux, R. Ernst
In this paper, we present an extension of slack analysis for budgeting in the design of weakly-hard real-time systems. During design, it often happens that some parts of a task set are fully specified while other parameters, e.g. those of recovery or monitoring tasks, will become available only much later. In such cases, slack analysis can help anticipate how these missing parameters may influence the behavior of the whole system, so that a resource budget can be allocated to them. In many application contexts it is, however, sufficient to budget these tasks so as to preserve weakly-hard rather than hard guarantees. We thus present an extension of slack analysis for deriving task budgets for systems with hard and weakly-hard requirements. This work is motivated by and validated on a realistic case study inspired by industrial practice.
1998 ACM Subject Classification: B.8.2 Performance Analysis and Design Aids
{"title":"Budgeting Under-Specified Tasks for Weakly-Hard Real-Time Systems","authors":"Zain Alabedin Haj Hammadeh, Sophie Quinton, Marco Panunzio, R. Henia, L. Rioux, R. Ernst","doi":"10.4230/LIPIcs.ECRTS.2017.17","DOIUrl":"https://doi.org/10.4230/LIPIcs.ECRTS.2017.17","url":null,"abstract":"In this paper, we present an extension of slack analysis for budgeting in the design of weakly-hard real-time systems. During design, it often happens that some parts of a task set are fully specified while other parameters, e.g. regarding recovery or monitoring tasks, will be available only much later. In such cases, slack analysis can help anticipate how these missing parameters can influence the behavior of the whole system so that a resource budget can be allocated to them. It is, however, sufficient in many application contexts to budget these tasks in order to preserve weakly-hard rather than hard guarantees. We thus present an extension of slack analysis for deriving task budgets for systems with hard and weakly-hard requirements. This work is motivated by and validated on a realistic case study inspired by industrial practice. 1998 ACM Subject Classification B.8.2 Performance Analysis and Design Aids","PeriodicalId":191379,"journal":{"name":"Euromicro Conference on Real-Time Systems","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127290014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Linux Real-Time Packet Scheduler for Reliable Static SDN Routing
Pub Date: 2017-06-01, DOI: 10.4230/LIPIcs.ECRTS.2017.25
Tao Qian, F. Mueller, Yufeng Xin
In a distributed computing environment, guaranteeing the hard deadlines of real-time messages is essential to ensure the schedulability of real-time tasks. Since the capacity of the shared transmission resources is limited, e.g., the buffer size on network devices, designing an effective and feasible resource sharing policy that accounts for both the demand of real-time packet transmissions and the limitations of the resources is a challenge. We address this challenge with two cooperating mechanisms. First, we design a static routing algorithm that finds forwarding paths for packets so as to guarantee their hard deadlines. The routing algorithm employs a validation-based backtracking procedure that derives the demand of a set of real-time packets on each shared network device and checks whether this demand can be met on that device. Second, we design a packet scheduler that runs on network devices and transmits messages according to our routing requirements. We implement these mechanisms on virtual software-defined network (SDN) switches and evaluate them on real hardware in a local cluster, demonstrating the feasibility and effectiveness of our routing algorithm and packet scheduler.
{"title":"A Linux Real-Time Packet Scheduler for Reliable Static SDN Routing","authors":"Tao Qian, F. Mueller, Yufeng Xin","doi":"10.4230/LIPIcs.ECRTS.2017.25","DOIUrl":"https://doi.org/10.4230/LIPIcs.ECRTS.2017.25","url":null,"abstract":"In a distributed computing environment, guaranteeing the hard deadline for real-time messages is essential to ensure schedulability of real-time tasks. Since capabilities of the shared resources for transmission are limited, e.g., the buffer size is limited on network devices, it becomes a challenge to design an effective and feasible resource sharing policy based on both the demand of real-time packet transmissions and the limitation of resource capabilities. We address this challenge in two cooperative mechanisms. First, we design a static routing algorithm to find forwarding paths for packets to guarantee their hard deadlines. The routing algorithm employs a validation-based backtracking procedure capable of deriving the demand of a set of real-time packets on each shared network device, and it checks whether this demand can be met on the device. Second, we design a packet scheduler that runs on network devices to transmit messages according to our routing requirements. We implement these mechanisms on virtual software-defined network (SDN) switches and evaluate them on real hardware in a local cluster to demonstrate the feasibility and effectiveness of our routing algorithm and packet scheduler.","PeriodicalId":191379,"journal":{"name":"Euromicro Conference on Real-Time Systems","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122813491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Applying Real-Time Scheduling Theory to the Synchronous Data Flow Model of Computation
Pub Date: 2017-06-01, DOI: 10.4230/LIPIcs.ECRTS.2017.8
Abhishek Singh, Pontus Ekberg, Sanjoy Baruah
Schedulability analysis techniques that are well understood within the real-time scheduling community are applied to the analysis of recurrent real-time workloads modeled using the synchronous data-flow graph (SDFG) model. An enhancement to the standard SDFG model is proposed that permits the specification of a real-time latency constraint between a specified input and a specified output of an SDFG. A technique is derived for transforming such an enhanced SDFG into a collection of traditional 3-parameter sporadic tasks, thereby allowing systems of SDFG tasks to be analyzed using the methods and algorithms previously developed within the real-time scheduling community for sporadic task systems. The applicability of this approach is illustrated by applying prior results from real-time scheduling theory to construct an exact preemptive uniprocessor schedulability test for collections of recurrent processes that are each represented using the enhanced SDFG model.
{"title":"Applying Real-Time Scheduling Theory to the Synchronous Data Flow Model of Computation","authors":"Abhishek Singh, Pontus Ekberg, Sanjoy Baruah","doi":"10.4230/LIPIcs.ECRTS.2017.8","DOIUrl":"https://doi.org/10.4230/LIPIcs.ECRTS.2017.8","url":null,"abstract":"Schedulability analysis techniques that are well understood within the real-time scheduling community are applied to the analysis of recurrent real-time workloads that are modeled using the synchronous data-flow graph (SDFG) model. An enhancement to the standard SDFG model is proposed, that permits the specification of a real-time latency constraint between a specified input and a specified output of an SDFG. A technique is derived for transforming such an enhanced SDFG to a collection of traditional 3-parameter sporadic tasks, thereby allowing for the analysis of systems of SDFG tasks using the methods and algorithms that have previously been developed within the real-time scheduling community for the analysis of systems of such sporadic tasks. The applicability of this approach is illustrated by applying prior results from real-time scheduling theory to construct an exact preemptive uniprocessor schedulability test for collections of recurrent processes that are each represented using the enhanced SDFG model.","PeriodicalId":191379,"journal":{"name":"Euromicro Conference on Real-Time Systems","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123818326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
WCET Derivation under Single Core Equivalence with Explicit Memory Budget Assignment
Pub Date: 2017-06-01, DOI: 10.4230/LIPIcs.ECRTS.2017.3
R. Mancuso, R. Pellizzoni, Neriman Tokcan, M. Caccamo
In the last decade there has been a steady uptrend in the popularity of embedded multi-core platforms. This represents a turning point in the theory and implementation of real-time systems. From a real-time standpoint, however, the extensive sharing of hardware resources (e.g. caches, the DRAM subsystem, I/O channels) represents a major source of unpredictability. Budget-based memory regulation (throttling) has been extensively studied to enforce a strict partitioning of the DRAM subsystem's bandwidth. The common approach to analyzing a task under memory bandwidth regulation is to consider the budget of the core where the task is executing and to assume the worst case about the remaining cores' budgets. In this work, we propose a novel analysis strategy to derive the WCET of a task under memory bandwidth regulation that takes into account the exact distribution of memory budgets to cores. In this sense, the proposed analysis is a generalization of approaches that consider (i) an even budget distribution across cores, and (ii) an uneven but unknown (except for the core under analysis) budget assignment. By exploiting this additional information, we show that it is possible to derive a more accurate WCET estimation. Our evaluations highlight that the proposed technique can reduce overestimation by 30% on average, and by up to 60%, compared to the state of the art.
{"title":"WCET Derivation under Single Core Equivalence with Explicit Memory Budget Assignment","authors":"R. Mancuso, R. Pellizzoni, Neriman Tokcan, M. Caccamo","doi":"10.4230/LIPIcs.ECRTS.2017.3","DOIUrl":"https://doi.org/10.4230/LIPIcs.ECRTS.2017.3","url":null,"abstract":"In the last decade there has been a steady uptrend in the popularity of embedded multi-core platforms. This represents a turning point in the theory and implementation of real-time systems. From a real-time standpoint, however, the extensive sharing of hardware resources (e.g. caches, DRAM subsystem, I/O channels) represents a major source of unpredictability. Budget-based memory regulation (throttling) has been extensively studied to enforce a strict partitioning of the DRAM subsystem’s bandwidth. The common approach to analyze a task under memory bandwidth regulation is to consider the budget of the core where the task is executing, and assume the worst-case about the remaining cores' budgets. \u0000 \u0000In this work, we propose a novel analysis strategy to derive the WCET of a task under memory bandwidth regulation that takes into account the exact distribution of memory budgets to cores. In this sense, the proposed analysis represents a generalization of approaches that consider (i) even budget distribution across cores; and (ii) uneven but unknown (except for the core under analysis) budget assignment. By exploiting the additional piece of information, we show that it is possible to derive a more accurate WCET estimation. Our evaluations highlight that the proposed technique can reduce overestimation by 30% in average, and up to 60%, compared to the state of the art.","PeriodicalId":191379,"journal":{"name":"Euromicro Conference on Real-Time Systems","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128881474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal Dataflow Scheduling on a Heterogeneous Multiprocessor With Reduced Response Time Bounds
Pub Date: 2017-06-01, DOI: 10.4230/LIPIcs.ECRTS.2017.15
Zheng Dong, Cong Liu, A. Gatherer, L. McFearin, P. Yan, James H. Anderson
Heterogeneous computing platforms with multiple types of computing resources have been widely used in many industrial systems to process dataflow tasks with pre-defined affinities of tasks to subgroups of resources. For many dataflow workloads with soft real-time requirements, guaranteeing fast and bounded response times is often the objective. This paper presents a new set of analysis techniques showing that a classical real-time scheduler, namely earliest-deadline-first (EDF), is able to support dataflow tasks scheduled on such heterogeneous platforms with provably bounded response times while incurring no resource capacity loss, thus proving EDF to be an optimal solution for this scheduling problem. Experiments using synthetic workloads with widely varied parameters also demonstrate that the magnitude of the response time bounds yielded under the proposed analysis is reasonably small under all scenarios. Compared to state-of-the-art soft real-time analysis techniques, our test yields a 68% reduction in response time bounds on average. This work demonstrates the potential of applying EDF in practical industrial systems containing dataflow-based workloads for which guaranteed bounded response times are desired.
{"title":"Optimal Dataflow Scheduling on a Heterogeneous Multiprocessor With Reduced Response Time Bounds","authors":"Zheng Dong, Cong Liu, A. Gatherer, L. McFearin, P. Yan, James H. Anderson","doi":"10.4230/LIPIcs.ECRTS.2017.15","DOIUrl":"https://doi.org/10.4230/LIPIcs.ECRTS.2017.15","url":null,"abstract":"Heterogeneous computing platforms with multiple types of computing resources have been widely used in many industrial systems to process dataflow tasks with pre-defined affinity of tasks to subgroups of resources. For many dataflow workloads with soft real-time requirements, guaranteeing fast and bounded response times is often the objective. This paper presents a new set of analysis techniques showing that a classical real-time scheduler, namely earliest-deadline first (EDF), is able to support dataflow tasks scheduled on such heterogeneous platforms with provably bounded response times while incurring no resource capacity loss, thus proving EDF to be an optimal solution for this scheduling problem. Experiments using synthetic workloads with widely varied parameters also demonstrate that the magnitude of the response time bounds yielded under the proposed analysis is reasonably small under all scenarios. Compared to the state-of-the-art soft real-time analysis techniques, our test yields a 68% reduction on response time bounds on average. This work demonstrates the potential of applying EDF into practical industrial systems containing dataflow-based workloads that desire guaranteed bounded response times.","PeriodicalId":191379,"journal":{"name":"Euromicro Conference on Real-Time Systems","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121420683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving the Quality-of-Service for Scheduling Mixed-Criticality Systems on Multiprocessors
Pub Date: 2017-05-31, DOI: 10.4230/LIPIcs.ECRTS.2017.19
R. Pathan
The traditional Vestal model of Mixed-Criticality (MC) systems was recently extended to the Imprecise Mixed-Criticality (IMC) task model, which guarantees some minimum level of (degraded) service to the low-criticality tasks even after the system switches to high-criticality behavior. This paper extends the IMC task model by associating specific Quality-of-Service (QoS) values with the low-criticality tasks and proposes a fluid-based scheduling algorithm, called MCFQ, for this task model. The MCFQ algorithm allows some low-criticality tasks to provide full service even during high-criticality behavior, so that the QoS of the overall system is increased. To the best of our knowledge, MCFQ is the first algorithm for IMC task sets that considers a multiprocessor platform and QoS values. We extend the recently proposed MC-Fluid and MCF fluid-based multiprocessor scheduling algorithms to the IMC task model; empirical results show that the MCFQ algorithm can significantly improve the QoS of the system in comparison to both MC-Fluid and MCF. In addition, the schedulability performance of MCFQ is very close to that of the optimal MC-Fluid algorithm. Finally, we prove that the MCFQ algorithm has a speedup bound of 4/3, which is optimal for IMC tasks.
{"title":"Improving the Quality-of-Service for Scheduling Mixed-Criticality Systems on Multiprocessors","authors":"R. Pathan","doi":"10.4230/LIPIcs.ECRTS.2017.19","DOIUrl":"https://doi.org/10.4230/LIPIcs.ECRTS.2017.19","url":null,"abstract":"The traditional Vestal's model of Mixed-Criticality (MC) systems was recently extended to Imprecise Mixed-Critical task model (IMC) to guarantee some minimum level of (degraded) service to the low-critical tasks even after the system switches to the high-critical behavior. This paper extends the IMC task model by associating specific Quality-of-Service (QoS) values with the low-critical tasks and proposes a fluid-based scheduling algorithm, called MCFQ, for such task model. The MCFQ algorithm allows some low-critical tasks to provide full service even during the high-critical behavior so that the QoS of the overall system is increased. To the best of our knowledge MCFQ is the first algorithm for IMC task sets considering multiprocessor platform and QoS values. \u0000 \u0000 \u0000By extending the recently proposed MC-Fluid and MCF fluid-based multiprocessor scheduling algorithms for IMC task model, empirical results show that MCFQ algorithm can significantly improve the QoS of the system in comparison to that of both MC-Fluid and MCF. In addition, the schedulability performance of MCFQ is very close to the optimal MC-Fluid algorithm. Finally, we prove that the MCFQ algorithm has a speedup bound of 4/3, which is optimal for IMC tasks.","PeriodicalId":191379,"journal":{"name":"Euromicro Conference on Real-Time Systems","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134530409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Replica-Aware Co-Scheduling for Mixed-Criticality
Pub Date: 2017-05-31, DOI: 10.4230/LIPIcs.ECRTS.2017.20
Eberle A. Rambo, R. Ernst
Cross-layer fault-tolerance solutions are the key to effectively and efficiently increasing reliability in future safety-critical real-time systems. Replicated software execution with hardware support for error detection is a cross-layer approach that exploits future many-core platforms to increase reliability without resorting to redundancy in hardware. The performance of such systems, however, strongly depends on the scheduler. Standard schedulers, such as Partitioned Strict Priority Preemptive (SPP) and Time-Division Multiplexing (TDM)-based ones, although widely employed, provide poor performance in the face of replicated execution. In this paper, we propose replica-aware co-scheduling for mixed-criticality systems. Experimental results show schedulability improvements of more than 1.5x compared to TDM and 6.9x compared to SPP.
1998 ACM Subject Classification: C.4 Performance of Systems
{"title":"Replica-Aware Co-Scheduling for Mixed-Criticality","authors":"Eberle A. Rambo, R. Ernst","doi":"10.4230/LIPIcs.ECRTS.2017.20","DOIUrl":"https://doi.org/10.4230/LIPIcs.ECRTS.2017.20","url":null,"abstract":"Cross-layer fault-tolerance solutions are the key to effectively and efficiently increase the reliability in future safety-critical real-time systems. Replicated software execution with hardware support for error detection is a cross-layer approach that exploits future many-core platforms to increase reliability without resorting to redundancy in hardware. The performance of such systems, however, strongly depends on the scheduler. Standard schedulers, such as Partitioned Strict Priority Preemptive (SPP) and Time-Division Multiplexing (TDM)-based ones, although widely employed, provide poor performance in face of replicated execution. In this paper, we propose the replica-aware co-scheduling for mixed-critical systems. Experimental results show schedulability improvements of more than 1.5x when compared to TDM and 6.9x when compared to SPP. 1998 ACM Subject Classification C.4 Performance of Systems","PeriodicalId":191379,"journal":{"name":"Euromicro Conference on Real-Time Systems","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125707910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
QoS, Consumer Electronics, and Real-Time: Challenges and Opportunities
Pub Date: 2002-06-19, DOI: 10.1109/ECRTS.2002.10005
Elisabeth F. M. Steffens
{"title":"QoS, Consumer Electronics, and Real-Time: Challenges and Opportunities","authors":"Elisabeth F. M. Steffens","doi":"10.1109/ECRTS.2002.10005","DOIUrl":"https://doi.org/10.1109/ECRTS.2002.10005","url":null,"abstract":"","PeriodicalId":191379,"journal":{"name":"Euromicro Conference on Real-Time Systems","volume":"185 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116095230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}