Real-time systems used in media processing and transmission produce bursty workloads with highly variable execution and transmission times. To avoid the drawbacks of using the worst-case approach with these workloads, this paper uses a variation of the usual real-time task model in which the WCET is replaced by a discrete statistical distribution. Using this approach, tasks are characterized by their processing time over a sampling period. One could expect that increasing the sampling period would, in principle, smooth the workload variability, so that the proposed analysis would provide more deterministic long-term results. However, we have surprisingly observed that this variability does not decrease with the sampling period: workloads are bursty on many time scales. This property is known as self-similarity and is measured using the Hurst parameter. This paper studies how to properly model and analyze self-similar task sets, showing the influence of the Hurst parameter on the schedulability analysis. It shows, through an analytical model and simulations, that this parameter may have a very negative impact on system performance. As a conclusion, it can be stated that this factor should be taken into account for statistical analysis of real-time systems, since simplistic workload models can lead to inaccurate results. It also shows that the negative effect of this parameter can be bounded using scheduling policies based on the bandwidth isolation principle.
{"title":"Analysis of Self-Similar Workload on Real-Time Systems","authors":"Enrique Hernández-Orallo, Joan Vila i Carbó","doi":"10.1109/RTAS.2010.13","journal":"2010 16th IEEE Real-Time and Embedded Technology and Applications Symposium","publicationDate":"2010-04-12"}
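The Hurst parameter used in the abstract above can be estimated from a sampled workload trace in several standard ways; one of the simplest is the aggregated-variance method. The sketch below is illustrative (the function name, block sizes, and trace are invented here, not taken from the paper): it fits the slope of log Var(X^(m)) against log m, which for a self-similar series behaves like 2H - 2.

```python
import numpy as np

def hurst_aggregated_variance(x, block_sizes):
    """Estimate the Hurst parameter H of the series x with the
    aggregated-variance method: for a self-similar process,
    Var(X^(m)) ~ m^(2H-2), where X^(m) is the series of block means."""
    log_m, log_v = [], []
    for m in block_sizes:
        n = len(x) // m
        if n < 2:
            continue
        agg = x[:n * m].reshape(n, m).mean(axis=1)  # non-overlapping block means
        log_m.append(np.log(m))
        log_v.append(np.log(agg.var()))
    slope, _ = np.polyfit(log_m, log_v, 1)  # slope of the log-log fit = 2H - 2
    return 1.0 + slope / 2.0
```

For an uncorrelated (non-bursty) workload the estimate is close to H = 0.5; values approaching 1 indicate burstiness on many time scales, the regime the paper studies.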
P. Vicaire, Zhiheng Xie, Enamul Hoque, J. Stankovic
This paper describes the design and implementation of a pervasive computing framework named Physicalnet. Essentially, Physicalnet is a generic paradigm for managing and programming worldwide distributed heterogeneous sensor and actuator resources in a multi-user, multi-network environment. Using a four-tier lightweight service-oriented architecture, Physicalnet enables global uniform access to heterogeneous resources and decouples applications from particular resources, locations, and networks. Through a negotiator module, it allows a large number of applications to execute concurrently on the same resources and to span multiple physical networks and logical administrative domains. By providing a fine-grained, use-based access-rights control and conflict-resolution mechanism, Physicalnet not only ensures that owners retain total control over sharing and protecting their resources, but also dramatically increases the number of applications that can execute concurrently on the devices. Furthermore, Physicalnet supports dynamic location-aware resource mobility, application run-time reconfigurability, and on-the-fly access-rights specification. To quantify the performance, we evaluate Physicalnet based on memory usage, the number of concurrent applications, and dynamic responsiveness. The results show that Physicalnet achieves excellent performance with low overheads.
{"title":"Physicalnet: A Generic Framework for Managing and Programming Across Pervasive Computing Networks","authors":"P. Vicaire, Zhiheng Xie, Enamul Hoque, J. Stankovic","doi":"10.1109/RTAS.2010.17","journal":"2010 16th IEEE Real-Time and Embedded Technology and Applications Symposium","publicationDate":"2010-04-12"}
The management of tasks is an essential requirement in most real-time and embedded systems, but invariably leads to unwanted CPU overheads. This paper is concerned with task management in real-time and embedded systems employing the Earliest Deadline First (EDF) scheduling algorithm. Currently, the best known techniques to manage EDF scheduling lead to overheads with complexity O(log n), where n is the number of recurring (periodic/sporadic) tasks. In this paper it is shown that if both the ready and waiting queues are represented by either (i) timing and indexed deadline wheels or (ii) digital search trees, then all scheduling decisions may be made in time proportional to the logarithm of the largest time representation required by the system, pm. In cases where pm is relatively small, for example in some embedded systems, extremely efficient task management may then be achieved. Experimental results are then presented, showing that on an ARM7 microcontroller, when the number of tasks is comparatively large for such a platform (> 250), the worst-case scheduling overheads remain effectively constant and below 20 µs. The results indicate that the techniques improve on previous methods, and also suggest that there is little discernible difference between the overheads incurred by fixed- and dynamic-priority schedulers in a given system.
{"title":"Improved Task Management Techniques for Enforcing EDF Scheduling on Recurring Tasks","authors":"M. Short","doi":"10.1109/RTAS.2010.22","journal":"2010 16th IEEE Real-Time and Embedded Technology and Applications Symposium","publicationDate":"2010-04-12"}
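The indexed-deadline-wheel idea in the abstract above can be illustrated with a toy structure: tasks are binned by absolute deadline modulo the wheel size pm, and a bitmap records the non-empty bins, so the earliest-deadline bin is located with bit operations whose cost depends on pm rather than on the number of tasks. This is only a sketch of that principle (the class and method names are invented here, it only shows lookup, and it is not the paper's implementation):

```python
class DeadlineWheel:
    """Toy indexed deadline wheel: bins tasks by absolute deadline mod pm.
    Assumes all pending deadlines lie within [now, now + pm)."""

    def __init__(self, pm):
        self.pm = pm
        self.bins = [[] for _ in range(pm)]
        self.bitmap = 0            # bit d set  =>  bin d is non-empty
        self.now = 0               # current time

    def insert(self, task, deadline):
        d = deadline % self.pm
        self.bins[d].append(task)
        self.bitmap |= 1 << d

    def earliest(self):
        """Task with the earliest deadline, or None if the wheel is empty."""
        if not self.bitmap:
            return None
        s = self.now % self.pm
        mask = (1 << self.pm) - 1
        # rotate the bitmap so the bin for the current time is bit 0
        rot = ((self.bitmap >> s) | (self.bitmap << (self.pm - s))) & mask
        offset = (rot & -rot).bit_length() - 1   # index of the lowest set bit
        return self.bins[(s + offset) % self.pm][0]
```

The lookup cost is governed by operations on a pm-bit word, matching the abstract's point that scheduling-decision time scales with the time representation rather than with the task count.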
Modern computing systems have adopted multicore architectures and multiprocessor systems-on-chip (MPSoCs) to accommodate the increasing demand for computation power. However, performance boosting is constrained by shared resources, such as buses, main memory, and DMA. This paper analyzes the worst-case completion (response) time of real-time tasks when time division multiple access (TDMA) policies are applied for resource arbitration. Real-time tasks execute periodically on a processing element and consist of sequential superblocks. A superblock is characterized by its accesses to a shared resource and its computation time. We explore three models of accessing shared resources: (1) the dedicated access model, in which accesses happen only at the beginning and the end of a superblock; (2) the general access model, in which accesses can happen at any time during the execution of a superblock; and (3) the hybrid access model, which combines the dedicated and general access models. We present a framework to analyze the worst-case completion time of real-time tasks (superblocks) under these three access models for a given TDMA arbiter. We compare the timing analysis of the three proposed models on a real-world application.
{"title":"Timing Analysis for TDMA Arbitration in Resource Sharing Systems","authors":"A. Schranzhofer, Jian-Jia Chen, L. Thiele","doi":"10.1109/RTAS.2010.24","journal":"2010 16th IEEE Real-Time and Embedded Technology and Applications Symposium","publicationDate":"2010-04-12"}
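To make the role of the TDMA arbiter concrete, the following sketch brute-forces the worst-case response time of a single resource access over all arrival phases. It is a hypothetical illustration, not the paper's framework: it assumes the task owns one slot of length sigma per TDMA cycle of length C, integer time, and non-preemptable accesses of length e <= sigma.

```python
def finish_time(t, C, sigma, e):
    """Completion time of a non-preemptable access of length e issued at
    integer time t, when the task owns the slot [k*C, k*C + sigma) in
    every TDMA cycle of length C."""
    pos = t % C
    if pos + e <= sigma:               # the access fits in the current slot
        return t + e
    return (t // C + 1) * C + e        # otherwise wait for the next slot

def wc_response(C, sigma, e):
    """Worst-case response time over all arrival phases within one cycle."""
    return max(finish_time(t, C, sigma, e) - t for t in range(C))
```

The worst case occurs when the access arrives just after the point where it can no longer finish in the current slot, so the response time approaches (C - sigma) + 2e; an analysis over sequences of such accesses inside superblocks is what the paper's framework provides.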
This paper presents the design, and analyzes the performance, of a reliable communication scheme for a traffic control system built upon a wireless process control protocol, aiming to enhance the robustness and timeliness of safety-critical control applications. By exploiting slot-based predictable access and the grid topology of urban road networks, the proposed scheme establishes one primary and one secondary route from the controller to each node and allocates the time slots accordingly. This is facilitated by a split-merge operation that makes a sender node sense the primary channel (the channel to the primary receiver) and take the secondary channel only if the first one is not free in a single slot, while making the receiver first listen to the primary sender and then switch to the secondary sender. Our scheme also finds the path with the lowest error rate by modeling the split-merge operation as a single virtual link and applying a shortest-path algorithm. The experimental results show that the proposed scheme greatly enhances the transmission success ratio for the grid-style traffic control network and that the improvement scales up with the network size. In addition, the routing scheme can find paths that improve the delivery ratio of control messages compared with the traditional grid routing scheme.
{"title":"Design of a Reliable Communication System for Grid-Style Traffic Light Networks","authors":"Junghoon Lee, Song Han, A. Mok","doi":"10.1109/RTAS.2010.37","journal":"2010 16th IEEE Real-Time and Embedded Technology and Applications Symposium","publicationDate":"2010-04-12"}
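Finding the path with the lowest error rate, as the abstract describes, reduces to a standard shortest-path computation once each link error probability p is mapped to the additive weight -log(1 - p): minimizing the weight sum then maximizes the end-to-end delivery probability. The sketch below is a generic Dijkstra formulation of that reduction (the graph encoding and the bidirectional-link assumption are illustrative, not the paper's routing scheme, which additionally models split-merge pairs as virtual links):

```python
import math
import heapq

def most_reliable_path(edges, src, dst):
    """edges: dict mapping (u, v) to the link error probability in [0, 1).
    Returns (end_to_end_success_probability, path). Assumes links are
    bidirectional and that dst is reachable from src."""
    graph = {}
    for (u, v), p in edges.items():
        w = -math.log(1.0 - p)               # additive reliability weight
        graph.setdefault(u, []).append((v, w))
        graph.setdefault(v, []).append((u, w))
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, math.inf):        # stale queue entry
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:                       # walk predecessors back to src
        node = prev[node]
        path.append(node)
    path.reverse()
    return math.exp(-dist[dst]), path
```

For example, two 10%-loss hops (success 0.9 * 0.9 = 0.81) beat one direct 30%-loss hop (success 0.7), which the log-weight formulation captures automatically.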
A combination of a scratchpad and a scratchpad memory management unit (SMMU) has been proposed as a way to implement fast and time-predictable memory access operations in programs that use dynamic data structures. A memory access operation is time-predictable if its execution time is known or bounded -- this is important within a hard real-time task so that the worst-case execution time (WCET) can be determined. However, the requirement for time-predictability does not remove the conventional requirement for efficiency: operations must be serviced as quickly as possible under worst-case conditions. This paper studies the capabilities of the SMMU when applied to a number of benchmark programs. A new allocation algorithm is proposed to dynamically manage the scratchpad space. In many cases, the SMMU vastly reduces the number of accesses to dynamic data structures stored in external memory along the worst-case execution path (WCEP). Across all the benchmarks, an average of 47% of accesses are rerouted to the scratchpad, with nearly 100% for some programs. In previous scratchpad-based work, time-predictability could only be assured for these operations by using external memory. The paper also examines situations in which the SMMU does not perform so well, and discusses how these could be addressed.
{"title":"Studying the Applicability of the Scratchpad Memory Management Unit","authors":"J. Whitham, N. Audsley","doi":"10.1109/RTAS.2010.21","journal":"2010 16th IEEE Real-Time and Embedded Technology and Applications Symposium","publicationDate":"2010-04-12"}
Recent research in compositional real-time systems has focused on the determination of a component's real-time interface parameters. An important objective in interface-parameter determination is minimizing the bandwidth allocated to each component of the system while simultaneously guaranteeing component schedulability. With this goal in mind, in this paper we develop a fully-polynomial-time approximation scheme (FPTAS) for allocating bandwidth to sporadic task systems scheduled with fixed priorities (e.g., deadline monotonic, rate monotonic) upon an Explicit-Deadline Periodic (EDP) resource. Our parametric algorithm takes the task system and an accuracy parameter ε > 0 as input, and returns a bandwidth that is guaranteed to be at most a factor (1 + ε) times the minimum bandwidth required to successfully schedule the task system. Through simulations over synthetically generated task systems, we observe a significant decrease in runtime and a small relative error when comparing our proposed algorithm with the exact algorithm and the sufficient algorithm.
{"title":"Approximate Bandwidth Allocation for Fixed-Priority-Scheduled Periodic Resources","authors":"Farhana Dewan, N. Fisher","doi":"10.1109/RTAS.2010.28","journal":"2010 16th IEEE Real-Time and Embedded Technology and Applications Symposium","publicationDate":"2010-04-12"}
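The (1 + ε) guarantee described in the abstract above can be illustrated independently of the schedulability test itself: given any monotone feasibility predicate over bandwidths, a geometric search narrows the feasible/infeasible boundary to within a factor (1 + ε). This is a generic sketch of that guarantee only (the function name and toy predicate are invented here); it is not the paper's FPTAS, whose polynomial bound comes from approximating the fixed-priority schedulability test itself:

```python
def approx_min_bandwidth(feasible, eps, lo=1e-3, hi=1.0):
    """Return a bandwidth at most (1 + eps) times the minimum feasible one,
    given a monotone feasibility predicate feasible(bw).
    Assumes feasible(lo) is False; returns None if even hi is infeasible."""
    if not feasible(hi):
        return None
    while hi > lo * (1.0 + eps):
        mid = (lo * hi) ** 0.5         # geometric midpoint of [lo, hi]
        if feasible(mid):
            hi = mid                   # hi always stays feasible
        else:
            lo = mid                   # lo always stays infeasible
    return hi
```

On exit, the true minimum lies in (lo, hi] and hi <= lo * (1 + eps), so the returned bandwidth over-allocates by at most the requested factor.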
Application of runtime monitoring to maintain the health of an embedded real-time software system requires that anomalous behavior be detected within a bounded time while preserving the temporal guarantees of the underlying system. Existing results can compute bounds on the detection latency of runtime monitors that are realized as a deferrable server running at the highest priority. In this paper, we generalize those results to allow monitors to run at an arbitrary priority. We also present an analysis of queue length in predictable runtime monitoring, which allows one to compute an upper bound on queue length. When implementing predictable runtime monitoring, system engineers are presented with several challenges in configuring the parameters of monitor servers. To address those challenges, we explore the tradeoffs among key server parameters and make recommendations about how best to select those parameters to achieve system monitoring objectives.
{"title":"Selecting Server Parameters for Predictable Runtime Monitoring","authors":"H. Zhu, S. Goddard, Matthew B. Dwyer","doi":"10.1109/RTAS.2010.18","journal":"2010 16th IEEE Real-Time and Embedded Technology and Applications Symposium","publicationDate":"2010-04-12"}
Many mission-critical applications such as military surveillance, human health monitoring, and obstacle detection in autonomous vehicles impose stringent requirements for event detection accuracy and demand long system lifetimes. Through quantitative study, we show that traditional approaches to event detection have difficulty meeting such requirements. Specifically, they cannot explore the detection capability of a deployed system and choose the right sensors, homogeneous or heterogeneous, to meet user-specified detection accuracy. They also cannot dynamically adapt the detection capability to runtime observations to save energy. Therefore, we are motivated to propose Watchdog, a modality-agnostic event detection framework that clusters the right sensors to meet user-specified detection accuracy during runtime while significantly reducing energy consumption. Through evaluation with vehicle detection trace data and a building traffic monitoring testbed of IRIS motes, we demonstrate the superior performance of Watchdog over existing solutions in terms of meeting user-specified detection accuracy and energy savings.
{"title":"Watchdog: Confident Event Detection in Heterogeneous Sensor Networks","authors":"Matthew Keally, Gang Zhou, G. Xing","doi":"10.1109/RTAS.2010.15","journal":"2010 16th IEEE Real-Time and Embedded Technology and Applications Symposium","publicationDate":"2010-04-12"}
J. Balasubramanian, A. Gokhale, A. Dubey, F. Wolf, Chenyang Lu, C. Gill, D. Schmidt
Developing large-scale distributed real-time and embedded (DRE) systems is hard, in part due to the complex deployment and configuration issues involved in satisfying multiple quality-of-service (QoS) properties, such as real-timeliness and fault tolerance. This paper makes three contributions to the study of deployment and configuration middleware for DRE systems that must satisfy multiple QoS properties. First, it describes a novel task allocation algorithm for passively replicated DRE systems that meets their real-time and fault-tolerance QoS properties while consuming significantly fewer resources. Second, it presents the design of a strategizable allocation engine that enables application developers to evaluate different allocation algorithms. Third, it presents the design of a middleware-agnostic configuration framework that uses the allocation decisions to deploy application components/replicas and automatically configure the underlying middleware on the chosen nodes. These contributions are realized in the DeCoRAM (Deployment and Configuration Reasoning and Analysis via Modeling) middleware. Empirical results on a distributed testbed demonstrate DeCoRAM's ability to handle multiple failures and provide efficient and predictable real-time performance.
{"title":"Middleware for Resource-Aware Deployment and Configuration of Fault-Tolerant Real-time Systems","authors":"J. Balasubramanian, A. Gokhale, A. Dubey, F. Wolf, Chenyang Lu, C. Gill, D. Schmidt","doi":"10.1109/RTAS.2010.30","journal":"2010 16th IEEE Real-Time and Embedded Technology and Applications Symposium","publicationDate":"2010-04-01"}