Work-in-Progress: Joint Network and Computing Resource Scheduling for Wireless Networked Control Systems
Peng Wu, Chenchen Fu, Minming Li, Yingchao Zhao, C. Xue, Song Han
Real-time task scheduling for wireless networked control systems (WNCSs) provides quality-of-service guarantees. This paper introduces a new model for joint network and computing resource scheduling (JNCRS) in real-time WNCSs. The end-to-end real-time task model captures the strict execution order of the sensing, computing, and actuating segments in the control loop of a WNCS. The general JNCRS problem is proved to be NP-hard. After dividing the JNCRS problem into four subproblems, we propose a polynomial-time optimal algorithm for the first subproblem, where each segment has unit execution time, by checking the intervals with 100% network resource utilization and modifying the task deadlines. To solve the second subproblem, where the computing segment requires more than one unit of execution time, we define new timing parameters for each network segment that take the scheduling of the computing segments into account. We then propose a polynomial-time optimal algorithm that checks the intervals whose network resource utilization is at least 100% and modifies the timing parameters of the tasks based on these intervals.
{"title":"Work-in-Progress: Joint Network and Computing Resource Scheduling for Wireless Networked Control Systems","authors":"Peng Wu, Chenchen Fu, Minming Li, Yingchao Zhao, C. Xue, Song Han","doi":"10.1109/RTSS.2018.00035","DOIUrl":"https://doi.org/10.1109/RTSS.2018.00035","url":null,"abstract":"Real-time task scheduling for wireless networked control systems provides guarantees for the quality of service. This paper introduces a new model for joint network and computing resource scheduling (JNCRS) in real-time wireless networked control systems. This new end-to-end real-time task model considers a strict execution order of segments including the sensing, the computing and the actuating segment based on the control loop of WNCSs. The general JNCRS problem is proved to be a NP-hard problem. After dividing the JNCRS problem into four subproblems, we propose a polynomial-time optimal algorithm to solve the first subproblem where each segment has unit execution time, by checking the intervals with 100% network resource utilization and modify the deadlines of tasks. To solve the second subproblem where the computing segment is larger than one unit execution time, we define the new timing parameters of each network segment by taking into account the scheduling of the computing segments. We propose a polynomial-time optimal algorithm to check the intervals with the network resource utilization larger than or equal to 100% and modify the timing parameters of tasks based on these intervals.","PeriodicalId":294784,"journal":{"name":"2018 IEEE Real-Time Systems Symposium (RTSS)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121167635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Memory Feasibility Analysis of Parallel Tasks Running on Scratchpad-Based Architectures
Daniel Casini, Alessandro Biondi, Geoffrey Nelissen, G. Buttazzo
This work proposes solutions for bounding the worst-case memory space requirement for parallel tasks running on multicore platforms with scratchpad memories. It introduces a feasibility test that verifies whether memories are large enough to contain the maximum memory backlog that may be generated by the system. Both closed-form bounds and more accurate algorithmic techniques are proposed. It is shown how one can use max-plus algebra and solutions to the max-flow cut problem to efficiently solve the memory feasibility problem. Experimental results are presented to evaluate the efficiency of the proposed feasibility analysis techniques on synthetic workload and state-of-the-art benchmarks.
{"title":"Memory Feasibility Analysis of Parallel Tasks Running on Scratchpad-Based Architectures","authors":"Daniel Casini, Alessandro Biondi, Geoffrey Nelissen, G. Buttazzo","doi":"10.1109/RTSS.2018.00047","DOIUrl":"https://doi.org/10.1109/RTSS.2018.00047","url":null,"abstract":"This work proposes solutions for bounding the worst-case memory space requirement for parallel tasks running on multicore platforms with scratchpad memories. It introduces a feasibility test that verifies whether memories are large enough to contain the maximum memory backlog that may be generated by the system. Both closed-form bounds and more accurate algorithmic techniques are proposed. It is shown how one can use max-plus algebra and solutions to the max-flow cut problem to efficiently solve the memory feasibility problem. Experimental results are presented to evaluate the efficiency of the proposed feasibility analysis techniques on synthetic workload and state-of-the-art benchmarks.","PeriodicalId":294784,"journal":{"name":"2018 IEEE Real-Time Systems Symposium (RTSS)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123717387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shedding the Shackles of Time-Division Multiplexing
F. Hebbache, M. Jan, F. Brandner, L. Pautet
Multi-core architectures pose many challenges in real-time systems, which arise from contention between concurrent accesses to shared memory. Among the available memory arbitration policies, Time-Division Multiplexing (TDM) ensures predictable behavior by providing bounded access latencies and guaranteed bandwidth to each task, independently of the other tasks. To do so, TDM guarantees exclusive access to the shared memory within a fixed time window. TDM, however, provides low resource utilization, as it is non-work-conserving. It is also very inefficient for resources with highly variable latencies, such as shared DRAM: the constant length of a TDM slot must cover the worst case and is hence highly pessimistic, causing underutilization of the memory. To address these limitations, we present dynamic arbitration schemes based on TDM. Instead of arbitrating at the level of TDM slots, our approach operates at the granularity of clock cycles by exploiting slack time accumulated from preceding requests. This allows the arbiter to reorder memory requests, exploit the actual access latencies of requests, and thus improve memory utilization. We demonstrate that our policies are analyzable, as they preserve the guarantees of TDM in the worst case, while our experiments show improved memory utilization on average.
2018 IEEE Real-Time Systems Symposium (RTSS). DOI: 10.1109/RTSS.2018.00059
On the Off-Chip Memory Latency of Real-Time Systems: Is DDR DRAM Really the Best Option?
Mohamed Hassan
Predictable execution time upon accessing shared memories in multi-core real-time systems is a stringent requirement. A plethora of existing works focus on analyzing Double Data Rate Dynamic Random Access Memories (DDR DRAMs) or on redesigning them to provide predictable memory behavior. In this paper, we show that DDR DRAMs by construction suffer from inherent limitations to achieving such predictability. These limitations lead to 1) highly variable access latencies that fluctuate with factors such as access patterns and the memory state left by previous accesses, and 2) overly pessimistic latency bounds. As a result, DDR DRAMs can be ill-suited for some real-time systems that mandate strictly predictable performance with tight timing constraints. Targeting these systems, we promote an alternative off-chip memory solution based on the emerging Reduced Latency DRAM (RLDRAM) protocol and propose a predictable memory controller (RLDC) managing accesses to this memory. Compared with state-of-the-art predictable DDR controllers, the proposed solution provides up to 11× lower timing variability and a 6.4× reduction in worst-case memory latency.
2018 IEEE Real-Time Systems Symposium (RTSS). DOI: 10.1109/RTSS.2018.00062
TDMH-MAC: Real-Time and Multi-hop in the Same Wireless MAC
F. Terraneo, P. Polidori, A. Leva, W. Fornaciari
Supporting real-time communication over wireless sensor networks (WSNs) is a tough challenge, due to packet collisions and the non-determinism of common channel access schemes such as CSMA/CA. Real-time WSN communication is even more problematic in the general case of multi-hop mesh networks; for this reason, many real-time WSN solutions are limited to simple topologies, such as star networks. We propose a real-time multi-hop WSN MAC protocol built atop the IEEE 802.15.4 physical layer. Relying on precise clock synchronization and constructive interference-based flooding, the proposed MAC builds a centralized TDMA schedule that supports multi-hop mesh networks. The real-time multi-hop communication model is connection-oriented, uses guaranteed time slots, and enables point-to-point communication, also with redundant paths. The protocol has been implemented in simulation using OMNeT++, and its performance has been verified in a real-world deployment using Wandstem WSN nodes.
2018 IEEE Real-Time Systems Symposium (RTSS). DOI: 10.1109/RTSS.2018.00044
Analysis of Dynamic Memory Bandwidth Regulation in Multi-core Real-Time Systems
Ankit Agrawal, R. Mancuso, R. Pellizzoni, G. Fohler
One of the primary sources of unpredictability in modern multi-core embedded systems is contention over shared memory resources, such as caches, interconnects, and DRAM. Despite significant achievements in the design and analysis of multi-core systems, there is a need for a theoretical framework that can be used to reason about the worst-case behavior of real-time workloads when both processors and memory resources are subject to scheduling decisions. In this paper, we focus on the dynamic allocation of main memory bandwidth. In particular, we study how to determine the worst-case response time of tasks spanning a sequence of time intervals, each with a different bandwidth-to-core assignment. We show that the response-time computation can be reduced to a maximization problem over the assignment of memory requests to the different time intervals, and we provide an efficient way to solve this problem. As a case study, we then demonstrate how the proposed analysis can be used to improve the schedulability of Integrated Modular Avionics systems in the presence of memory-intensive workloads.
{"title":"Analysis of Dynamic Memory Bandwidth Regulation in Multi-core Real-Time Systems","authors":"Ankit Agrawal, R. Mancuso, R. Pellizzoni, G. Fohler","doi":"10.1109/RTSS.2018.00040","DOIUrl":"https://doi.org/10.1109/RTSS.2018.00040","url":null,"abstract":"One of the primary sources of unpredictability in modern multi-core embedded systems is contention over shared memory resources, such as caches, interconnects, and DRAM. Despite significant achievements in the design and analysis of multi-core systems, there is a need for a theoretical framework that can be used to reason on the worst-case behavior of real-time workload when both processors and memory resources are subject to scheduling decisions. In this paper, we focus our attention on dynamic allocation of main memory bandwidth. In particular, we study how to determine the worst-case response time of tasks spanning through a sequence of time intervals, each with a different bandwidth-to-core assignment. We show that the response time computation can be reduced to a maximization problem over assignment of memory requests to different time intervals, and we provide an efficient way to solve such problem. As a case study, we then demonstrate how our proposed analysis can be used to improve the schedulability of Integrated Modular Avionics systems in the presence of memory-intensive workload.","PeriodicalId":294784,"journal":{"name":"2018 IEEE Real-Time Systems Symposium (RTSS)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125013181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dependency Graph Approach for Multiprocessor Real-Time Synchronization
Jian-Jia Chen, G. V. D. Brüggen, Junjie Shi, Niklas Ueter
Over the years, many multiprocessor locking protocols have been designed and analyzed. However, the performance of these protocols depends heavily on how the tasks are partitioned and prioritized and on how the resources are shared locally and globally. This paper answers a few fundamental questions that arise when real-time tasks share resources in multiprocessor systems. We explore the fundamental difficulty of the multiprocessor synchronization problem and show that a very simplified version of it is NP-hard in the strong sense, regardless of the number of processors and the underlying scheduling paradigm. Therefore, allowing preemption or migration does not reduce the computational complexity. On the positive side, we develop a dependency-graph approach that is specifically useful for frame-based real-time tasks, i.e., when all tasks have the same period and always release their jobs at the same time. We present a series of algorithms with speedup factors between 2 and 3 under semi-partitioned scheduling. We further explore methodologies for, and tradeoffs between, preemptive and non-preemptive scheduling algorithms, as well as partitioned and semi-partitioned scheduling algorithms. Our approach is extended to periodic tasks under certain conditions.
{"title":"Dependency Graph Approach for Multiprocessor Real-Time Synchronization","authors":"Jian-Jia Chen, G. V. D. Brüggen, Junjie Shi, Niklas Ueter","doi":"10.1109/RTSS.2018.00057","DOIUrl":"https://doi.org/10.1109/RTSS.2018.00057","url":null,"abstract":"Over the years, many multiprocessor locking protocols have been designed and analyzed. However, the performance of these protocols highly depends on how the tasks are partitioned and prioritized, and how the resources are shared locally and globally. This paper answers a few fundamental questions when real-time tasks share resources in multiprocessor systems. We explore the fundamental difficulty of the multiprocessor synchronization problem and show that a very simplified version of this problem is NP-hard in the strong sense regardless of the number of processors and the underlying scheduling paradigm. Therefore, the allowance of preemption or migration does not reduce the computational complexity. On the positive side, we develop a dependency-graph approach that is specifically useful for frame-based real-time tasks, i.e., when all tasks have the same period and release their jobs always at the same time. We present a series of algorithms with speedup factors between 2 and 3 under semi-partitioned scheduling. We further explore methodologies for and tradeoffs between preemptive and non-preemptive scheduling algorithms, and partitioned and semi-partitioned scheduling algorithms. Our approach is extended to periodic tasks under certain conditions.","PeriodicalId":294784,"journal":{"name":"2018 IEEE Real-Time Systems Symposium (RTSS)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125180142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Work-in-Progress: Response Time Bounds for Typed DAG Parallel Tasks on Heterogeneous Multi-cores
Meiling Han, Nan Guan, Jinghao Sun, Qingqiang He, Qingxu Deng, Weichen Liu
Heterogeneous multi-cores utilize the strengths of different architectures for executing particular types of workload, and usually offer higher performance and energy efficiency. In this paper, we study the worst-case response time (WCRT) analysis of typed scheduling of parallel DAG tasks on heterogeneous multi-cores, where the workload of each vertex in the DAG is only allowed to execute on a particular type of core. The only known WCRT bound for this problem is grossly pessimistic and suffers from the non-self-sustainability problem. We propose two new WCRT bounds. The first has the same time complexity as the existing bound but is more precise and solves its non-self-sustainability problem. The second explores more detailed task graph structure information to greatly improve precision, but is computationally more expensive. We prove that computing the second bound is strongly NP-hard if the number of types in the system is a variable, and we develop an efficient algorithm with polynomial time complexity when the number of types is a constant. Experiments with randomly generated workloads show that the proposed new methods are significantly more precise than the existing bound while having good scalability.
{"title":"Work-in-Progress: Response Time Bounds for Typed DAG Parallel Tasks on Heterogeneous Multi-cores","authors":"Meiling Han, Nan Guan, Jinghao Sun, Qingqiang He, Qingxu Deng, Weichen Liu","doi":"10.1109/RTSS.2018.00028","DOIUrl":"https://doi.org/10.1109/RTSS.2018.00028","url":null,"abstract":"Heterogenerous multi-cores utilize the strength of different architectures for executing particular types of workload, and usually offer higher performance and energy efficiency. In this paper, we study the worst-case response time (WCRT) analysis of typed scheduling of parallel DAG tasks on heterogeneous multi-cores, where the workload of each vertex in the DAG is only allowed to execute on a particular type of cores. The only known WCRT bound for this problem is grossly pessimistic and suffers the non-self-sustainability problem. In this paper, we propose two new WCRT bounds. The first new bound has the same time complexity as the existing bound, but is more precise and solves its non-self-sustainability problem. The second new bound explores more detailed task graph structure information to greatly improve the precision, but is computationally more expensive. We prove that the problem of computing the second bound is strongly NP-hard if the number of types in the system is a variable, and develop an efficient algorithm which has polynomial time complexity if the number of types is a constant. Experiments with randomly generated workload show that our proposed new methods are significantly more precise than the existing bound while having good scalability.","PeriodicalId":294784,"journal":{"name":"2018 IEEE Real-Time Systems Symposium (RTSS)","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124197433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal Implementation of Simulink Models on Multicore Architectures with Partitioned Fixed Priority Scheduling
Shamit Bansal, Yecheng Zhao, Haibo Zeng, Kehua Yang
Model-based design using the Simulink modeling formalism and its associated toolchain has gained popularity in the development of real-time embedded systems. However, current research on software synthesis for Simulink models has a critical gap: providing a deterministic, semantics-preserving implementation on multicore architectures with partitioned fixed-priority scheduling. In this paper, we consider a semantics-preservation mechanism that combines (1) the RT blocks from Simulink and (2) task offset assignment to separate the time windows in which communicating tasks access shared buffers. We study the software synthesis problem of optimizing control performance by judiciously assigning task offsets, task priorities, and task communication mechanisms. We develop a problem-specific exact algorithm that uses an abstraction layer to hide the complexity of timing analysis. Experimental results show that it may run a few orders of magnitude faster than a direct integer linear programming formulation.
{"title":"Optimal Implementation of Simulink Models on Multicore Architectures with Partitioned Fixed Priority Scheduling","authors":"Shamit Bansal, Yecheng Zhao, Haibo Zeng, Kehua Yang","doi":"10.1109/RTSS.2018.00041","DOIUrl":"https://doi.org/10.1109/RTSS.2018.00041","url":null,"abstract":"Model-based design using the Simulink modeling formalism and associated toolchain has gained popularity in the development of real-time embedded systems. However, the current research on software synthesis for Simulink models has a critical gap for providing a deterministic, semantics-preserving implementation on multicore architectures with partitioned fixed-priority scheduling. In this paper, we consider a semantics-preservation mechanism that combines (1) the RT blocks from Simulink, and (2) task offset assignment to separate the time windows to access shared buffers by communicating tasks. We study the software synthesis problem that optimizes control performance by judiciously assigning task offsets, task priorities, and task communication mechanisms. We develop a problem-specific exact algorithm that uses an abstraction layer to hide the complexity of timing analysis. Experimental results show that it may run a few orders of magnitude faster than a direct formulation in integer linear programming.","PeriodicalId":294784,"journal":{"name":"2018 IEEE Real-Time Systems Symposium (RTSS)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126372024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Improved Speedup Factor for Sporadic Tasks with Constrained Deadlines Under Dynamic Priority Scheduling
Xin Han, Liang Zhao, Zhishan Guo, Xingwu Liu
Schedulability is a fundamental problem in real-time scheduling, but it has to be approximated due to its intrinsic computational hardness. For partitioned-EDF, the most popular algorithm for deciding schedulability on multiprocessor platforms, the speedup factor is challenging to analyze and is far from being determined. Partitioned-EDF was first proposed in 2005 by Baruah and Fisher [1], and was shown to have a speedup factor of at most 3 - 1/m, meaning that if a set of sporadic tasks is feasible on m processors of speed one, partitioned-EDF will always succeed on m processors of speed 3 - 1/m. In 2011, this upper bound was improved to 2.6322 - 1/m by Chen and Chakraborty [2], and no further improvement has appeared since then. In this paper, we develop a novel method to discretize and regularize sporadic tasks, which enables us to improve, in the case of constrained deadlines, the speedup factor of partitioned-EDF to 2.5556 - 1/m, very close to the asymptotic lower bound of 2.5 given in [2].
2018 IEEE Real-Time Systems Symposium (RTSS). DOI: 10.1109/RTSS.2018.00058