Kostiantyn Berezovskyi, K. Bletsas, Björn Andersson
Graphics processors were originally developed for rendering graphics but have recently evolved towards being an architecture for general-purpose computations. They are also expected to become important parts of embedded systems hardware -- not just for graphics. However, this necessitates the development of appropriate timing analysis techniques, because those developed for CPU scheduling are not applicable: we are not interested in how long it takes for any given GPU thread to complete, but rather how long it takes for all of them to complete. We therefore develop a simple method for finding an upper bound on the makespan of a group of GPU threads executing the same program and competing for the resources of a single streaming multiprocessor (whose architecture is based on NVIDIA Fermi, with some simplifying assumptions). We then build upon this method to formulate the derivation of the exact worst-case makespan (and corresponding schedule) as an optimization problem. Addressing the issue of tractability, we also present a technique for efficiently computing a safe estimate of the worst-case makespan with minimal pessimism, for use when finding an exact value would take too long.
Kostiantyn Berezovskyi, K. Bletsas, Björn Andersson, "Makespan Computation for GPU Threads Running on a Single Streaming Multiprocessor," 2012 24th Euromicro Conference on Real-Time Systems, July 11, 2012. doi:10.1109/ECRTS.2012.16
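The paper's exact analysis is formulated as an optimization problem; purely as an illustrative toy (not the authors' method), a crude but safe makespan upper bound can be obtained by assuming warps never overlap latency with computation, so each scheduler pipeline serially runs its share of warps at their worst-case cost. All parameter names below are hypothetical.

```python
from math import ceil

def makespan_upper_bound(n_warps: int, n_pipelines: int, wcet_per_warp: int) -> int:
    """Crude safe bound: with round-robin assignment, some pipeline
    receives ceil(n_warps / n_pipelines) warps and, assuming no latency
    hiding at all, executes them back to back at worst-case cost."""
    return ceil(n_warps / n_pipelines) * wcet_per_warp

# Example: 48 warps on 2 warp schedulers, 100 cycles worst case each.
bound = makespan_upper_bound(48, 2, 100)
```

Such a bound is what the paper would call pessimistic; its contribution is precisely to tighten estimates like this one.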
Mayank Shekhar, Abhik Sarkar, H. Ramaprasad, F. Mueller
As real-time embedded systems integrate more and more functionality, they are demanding increasing amounts of computational power that can only be met by deploying multicore architectures. The use of multicore architectures with on-chip memory hierarchies and shared communication infrastructure in the context of real-time systems poses several challenges for task scheduling. In this paper, we present a predictable semi-partitioned strategy for scheduling a set of independent hard-real-time tasks on homogeneous multicore platforms using cache locking and locked cache migration. Semi-partitioned scheduling strategies form a middle ground between the two extreme approaches, namely global and partitioned scheduling. By making most tasks non-migrating (partitioned), runtime migration overhead is minimized. On the other hand, by allowing some tasks to migrate among cores, schedulability of task sets may be improved. Simulation results demonstrate the effectiveness of our approach in improving task set schedulability over purely partitioned approaches while maintaining real-time predictability of migrating tasks. In our simulations, we achieve an average increase in utilization of 37.31% and an average increase in density of 81.36% compared to purely partitioned task allocation.
Mayank Shekhar, Abhik Sarkar, H. Ramaprasad, F. Mueller, "Semi-Partitioned Hard-Real-Time Scheduling under Locked Cache Migration in Multicore Systems," 2012 24th Euromicro Conference on Real-Time Systems, July 11, 2012. doi:10.1109/ECRTS.2012.27
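The purely partitioned baseline that the abstract compares against statically assigns every task to one core. A minimal first-fit-by-utilization sketch of such a baseline (hypothetical, not the paper's allocator) makes the comparison concrete: a task set that first-fit cannot place would need either migration or rejection.

```python
def first_fit_partition(utils, n_cores, cap=1.0):
    """Assign each task (given by its utilization) to the first core
    whose total load stays within `cap`.  Returns per-core loads, or
    None if some task fits on no core (unschedulable by pure
    partitioning; a semi-partitioned scheme might still place it by
    allowing it to migrate)."""
    loads = [0.0] * n_cores
    for u in utils:
        for c in range(n_cores):
            if loads[c] + u <= cap + 1e-9:
                loads[c] += u
                break
        else:
            return None  # no core can host this task whole
    return loads
```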
Static cache analysis is an indispensable part of static timing analysis, which is employed to verify the timing behaviour of programs in safety-critical real-time systems. State-of-the-art cache analyses classify memory references as `always hit', `always miss', or `unknown'. To do so, they rely on a preceding address analysis that tries to determine the referenced addresses. If a referenced address is not determined precisely, however, those cache analyses cannot predict this reference as hit or miss. On top of that, information about other cache contents is lost upon such references. We present a novel approach to static cache analysis that alleviates the dependency on precise address analysis. Instead of having to argue about concrete addresses, we only need to argue about relations between referenced addresses, e.g. `accesses same memory block' or `maps to different cache set'. Such relations can be determined by congruence analyses, without precise knowledge about the actual addresses. The subsequent cache analysis then only relies on relations to infer cache information and to classify references. One advantage of this approach is that hits can be predicted for references with imprecisely determined addresses, even if there is no information about accessed addresses. In particular, this enables the prediction of hits for references whose addresses depend on an unknown stack pointer or even depend on the program input. Relational cache analysis is always at least as precise as the corresponding state-of-the-art cache analysis. Furthermore, we demonstrate significant improvements for three classes of program constructs.
S. Hahn, Daniel Grund, "Relational Cache Analysis for Static Timing Analysis," 2012 24th Euromicro Conference on Real-Time Systems, July 11, 2012. doi:10.1109/ECRTS.2012.14
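One kind of relation such an analysis can exploit is easy to state concretely (a sketch of the idea, not the paper's framework): two stack accesses at offsets k1 and k2 from an unknown stack pointer provably map to the same cache set whenever their offset difference is a multiple of the way size, independent of the actual pointer value. The cache geometry below is an assumed example.

```python
def same_set_guaranteed(k1: int, k2: int, line_size: int = 64, n_sets: int = 64) -> bool:
    """True iff sp+k1 and sp+k2 map to the same cache set for EVERY
    value of the unknown base sp.  Adding a multiple of
    n_sets * line_size shifts the block number by a multiple of n_sets,
    leaving the set index (block mod n_sets) unchanged."""
    way_size = n_sets * line_size
    return (k1 - k2) % way_size == 0

def brute_check(k1, k2, line_size=64, n_sets=64):
    """Exhaustively confirm the guarantee over many base addresses."""
    def set_of(a):
        return (a // line_size) % n_sets
    return all(set_of(sp + k1) == set_of(sp + k2)
               for sp in range(0, 3 * n_sets * line_size, 4))
```

Note the asymmetry: a False result means only that the relation cannot be guaranteed (e.g. offsets 8 and 12 may straddle a line boundary for some sp), not that the accesses conflict.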
This paper presents the first real-time multiprocessor locking protocol that supports fine-grained nested resource requests. This locking protocol relies on a novel technique for ordering the satisfaction of resource requests to ensure a bounded duration of priority inversions for nested requests. This technique can be applied on partitioned, clustered, and globally scheduled systems in which waiting is realized by either spinning or suspending. Furthermore, this technique can be used to construct fine-grained nested locking protocols that are efficient under spin-based, suspension-oblivious or suspension-aware analysis of priority inversions. Locking protocols built upon this technique perform no worse than coarse-grained locking mechanisms, while allowing for increased parallelism in the average case (and, depending upon the task set, better worst-case performance).
Bryan C. Ward, James H. Anderson, "Supporting Nested Locking in Multiprocessor Real-Time Systems," 2012 24th Euromicro Conference on Real-Time Systems, July 11, 2012. doi:10.1109/ECRTS.2012.17
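Fine-grained nesting is delicate because unordered acquisition can deadlock. A classic safeguard, far simpler than the protocol in the paper and shown here only as background, is to acquire nested resources in a fixed global order, which rules out circular wait among tasks that follow the same discipline. The resource names are hypothetical.

```python
import threading

locks = {name: threading.Lock() for name in ("A", "B", "C")}
ORDER = {"A": 0, "B": 1, "C": 2}  # fixed global acquisition order

def acquire_nested(names):
    """Acquire the requested resource locks in the global order.
    Returns the ordered list so the caller can release them later."""
    held = sorted(names, key=ORDER.__getitem__)
    for n in held:
        locks[n].acquire()
    return held

def release_nested(held):
    """Release in reverse acquisition order."""
    for n in reversed(held):
        locks[n].release()
```

Ordered acquisition prevents deadlock but can still cause long priority inversions, which is precisely the problem the paper's request-ordering technique addresses.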
GPGPUs (General-Purpose Graphics Processing Units) provide massive computational power. However, applying GPGPU technology to real-time computing is challenging due to the non-preemptive nature of GPGPUs. In particular, a job running on a GPGPU, as well as a data copy between the GPGPU and the CPU, is non-preemptive. As a result, a high priority job arriving in the middle of a low priority job execution or memory copy suffers from priority inversion. To address the problem, we present a new lightweight approach to supporting preemptive memory copies and job executions in GPGPUs. Moreover, in our approach, a GPGPU job and memory copy between a GPGPU and the hosting CPU are run concurrently to enhance the responsiveness. To show the feasibility of our approach, we have implemented a prototype system for preemptive job executions and data copies in a GPGPU. The experimental results show that our approach can bound the response times in a reliable manner. In addition, the response time of our approach is significantly shorter than those of the unmodified GPGPU runtime system that supports no preemption and an advanced GPGPU model designed to support prioritization and performance isolation via preemptive data copies.
Can Basaran, K. Kang, "Supporting Preemptive Task Executions and Memory Copies in GPGPUs," 2012 24th Euromicro Conference on Real-Time Systems, July 11, 2012. doi:10.1109/ECRTS.2012.15
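The core idea behind making a long copy preemptible is to split it into chunks and poll for higher-priority work between chunks, so a waiting request is delayed by at most one chunk. A host-side sketch in plain Python (illustrative only; the paper applies this to GPU DMA transfers):

```python
def chunked_copy(dst: bytearray, src: bytes, chunk: int, should_yield) -> int:
    """Copy src into dst in `chunk`-byte pieces, polling should_yield()
    before each piece.  Returns the number of bytes copied, which is
    less than len(src) if the copy yielded to higher-priority work."""
    copied = 0
    while copied < len(src):
        if should_yield():
            return copied  # preemption point between chunks
        n = min(chunk, len(src) - copied)
        dst[copied:copied + n] = src[copied:copied + n]
        copied += n
    return copied
```

The chunk size trades throughput (per-chunk overhead) against worst-case blocking of a high-priority requester.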
L. Palopoli, D. Fontanelli, Nicola Manica, Luca Abeni
The application of a resource reservation scheduler to soft real-time systems requires effective means to compute the probability of a deadline miss given a particular choice for the scheduling parameters. This is a challenging research problem, for which only numeric solutions, complex and difficult to manage, are currently available. In this paper, we adopt an analytical approach. By using an approximate and conservative model for the evolution of a periodic task scheduled through a reservation, we construct a closed-form lower bound for the probability of a deadline miss. Our experiments reveal that the bound remains reasonably close to the experimental probability for many real-time applications of interest.
L. Palopoli, D. Fontanelli, Nicola Manica, Luca Abeni, "An Analytical Bound for Probabilistic Deadlines," 2012 24th Euromicro Conference on Real-Time Systems, July 11, 2012. doi:10.1109/ECRTS.2012.19
Worst-case backlog evaluation is key to avoiding under- or over-sizing of output-port buffers for store-and-forward switches. Typically, the dimensioning of switches in the context of avionics is at least as important as the upper bounding of the end-to-end delays. This paper presents a new method based on the Trajectory approach for backlog evaluation of output ports of AFDX switches. On an industrial AFDX configuration, this new method leads to an average buffer size reduction of 10% compared to the existing Network Calculus approach.
Henri Bauer, Jean-Luc Scharbarg, C. Fraboul, "Worst-Case Backlog Evaluation of Avionics Switched Ethernet Networks with the Trajectory Approach," 2012 24th Euromicro Conference on Real-Time Systems, July 11, 2012. doi:10.1109/ECRTS.2012.12
Real-time systems are often modeled as a collection of tasks, describing the structure of the processor's workload. In the literature, task-models of different expressiveness have been developed, ranging from the traditional periodic task model to highly expressive graph-based models. For dynamic priority schedulers, it has been shown that the schedulability problem can be solved efficiently, even for graph-based models. However, the situation is less clear for the case of static priority schedulers. It has been believed that the problem can be solved in pseudo-polynomial time for the generalized multiframe model (GMF). The GMF model constitutes a compromise in expressiveness by allowing cycling through a static list of behaviors, but disallowing branching. Further, the problem complexity for more expressive models has been unknown so far. In this paper, we show that previous results claiming that a precise and efficient test exists are wrong, giving a counterexample. We prove that the schedulability problem for GMF models (and thus also all more expressive models) using static priority schedulers is in fact coNP-hard in the strong sense. Our result thus establishes the fundamental hardness of analyzing static priority real-time scheduling, in contrast to its dynamic priority counterpart of pseudo-polynomial complexity.
Martin Stigge, W. Yi, "Hardness Results for Static Priority Real-Time Scheduling," 2012 24th Euromicro Conference on Real-Time Systems, July 11, 2012. doi:10.1109/ECRTS.2012.13
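The GMF model discussed above can be made concrete: a task is a static cyclic list of frames, each with an execution time and a minimum separation to the next release, and a request-bound function maximizes the work released in an interval over all starting frames. A sketch of this standard GMF machinery (not the paper's hardness construction; frame values are made up):

```python
def gmf_request_bound(frames, t):
    """frames: list of (wcet, min_separation) pairs, cycled in order.
    Returns the maximum total execution time a GMF task can release in
    a half-open interval of length t, maximized over the frame at
    which the interval starts.  Assumes positive separations."""
    k = len(frames)
    best = 0
    for start in range(k):             # try every starting frame
        released, elapsed, i = 0, 0, start
        while elapsed < t:             # a release occurs at `elapsed`
            released += frames[i][0]
            elapsed += frames[i][1]
            i = (i + 1) % k
        best = max(best, released)
    return best
```

Allowing the interval to start at any frame is exactly the "cycling without branching" expressiveness the abstract describes; branching task graphs generalize this further.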
We examine the problem of computing the worst-case first-to-first information propagation delay through a sequence of fixed-priority periodic tasks with different periods. This propagation delay is the span of time from the moment information becomes available until the first time the final task in the sequence produces an output that uses this (or more recent) input. We consider task systems in which all tasks are initially ready for execution, and the periods are harmonically related. We give efficient algorithms for computing this delay for the special cases in which the task priorities in the sequence are either monotonically decreasing or monotonically increasing. We then show how to combine these algorithms to compute an upper bound for the case in which priorities are ordered arbitrarily.
Rodney R. Howell, "Computing First-to-First Propagation Delays through Sequences of Fixed-Priority Periodic Tasks," 2012 24th Euromicro Conference on Real-Time Systems, July 11, 2012. doi:10.1109/ECRTS.2012.26
The advent of multicore processors has attracted many safety-critical systems, e.g., automotive and avionics, to consider integrating multiple functionalities on a single, powerful computing platform. Such integration leads to hosting functionalities with different criticality levels on the same platform. The design of such ``mixed-criticality'' systems is often subject to certification from one or more certification authorities. Coming up with an effective scheduling policy and its analysis that can guarantee certification of the system at each criticality level, while maximizing the utilization of the processors, is the focus of the research presented in this paper. In this paper, the global, fixed-priority scheduling of a set of sporadic, mixed-criticality tasks on multiprocessors is considered. A sufficient schedulability test based on response time analysis of the proposed algorithm is derived. One of the useful features of the proposed test is that it can be used for systems with more than two criticality levels. In addition, the test can be used to find ``effective'' fixed-priority ordering of the mixed-criticality tasks based on Audsley's approach.
R. Pathan, "Schedulability Analysis of Mixed-Criticality Systems on Multiprocessors," 2012 24th Euromicro Conference on Real-Time Systems, July 11, 2012. doi:10.1109/ECRTS.2012.29
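Response-time analysis of the kind this test builds on is easiest to see in its classical uniprocessor form, shown here only as background (the paper's global, multi-criticality analysis is considerably more involved): the response time of task i is the least fixed point of R = C_i + sum over higher-priority tasks j of ceil(R / T_j) * C_j.

```python
from math import ceil

def response_time(C, T, i, limit=10**6):
    """Classical fixed-priority response-time iteration for task i on
    a uniprocessor, with tasks indexed in decreasing priority order:
    iterate R <- C[i] + sum_{j<i} ceil(R / T[j]) * C[j] to a fixed
    point.  Returns None if R exceeds `limit` (divergence)."""
    R = C[i]
    while R <= limit:
        nxt = C[i] + sum(ceil(R / T[j]) * C[j] for j in range(i))
        if nxt == R:
            return R
        R = nxt
    return None
```

Mixed-criticality analyses refine the interference term per criticality level; Audsley's approach then searches priority orderings by testing lowest-priority feasibility one task at a time.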