Stretch: exploiting service level degradation for energy management in mixed-criticality systems
Amir Taherin, M. Salehi, A. Ejlali
Pub Date: 2015-10-01 | DOI: 10.1109/RTEST.2015.7369846
Mixed-criticality systems have emerged from industrial interest in integrating functionalities of differing importance onto a common, shared computing platform. Low energy consumption is vital in mixed-criticality systems because of their ever-increasing computation requirements and the fact that they are mostly battery powered. When high-criticality tasks overrun in such systems, low-criticality tasks can be either dropped or degraded to ensure the timeliness of the high-criticality tasks. We propose a novel energy management method, called Stretch, which lowers the energy consumption of mixed-criticality systems at the cost of degrading the service level of low-criticality tasks. Stretch extends both the execution time and the period of tasks while preserving their utilization. The period extension degrades the tasks' service level, and Stretch exploits it for energy management. Experiments show that Stretch provides 14% energy savings compared to the state of the art with only 5% service-level degradation of low-criticality tasks. The energy savings can be increased to 74% at the cost of 100% service-level degradation of low-criticality tasks.
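As a rough illustration of the utilization-preserving stretching described above, the sketch below (Python, with an assumed cubic power model and made-up task parameters, not the authors' formulation) scales a task's execution time and period by the same factor k, checks that utilization is unchanged, and estimates the relative energy when the frequency is lowered to f_max/k.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    wcet: float    # execution time at maximum frequency
    period: float  # activation period

def stretch(task: Task, k: float) -> Task:
    """Scale both execution time and period by k >= 1, so U = C/T is preserved."""
    assert k >= 1.0
    return Task(wcet=task.wcet * k, period=task.period * k)

def utilization(task: Task) -> float:
    return task.wcet / task.period

def relative_energy_per_time(k: float) -> float:
    """Energy per unit time relative to no stretching, assuming the frequency is
    lowered to f_max/k and dynamic power scales roughly with f^3 (voltage tracks
    frequency); the busy fraction (utilization) is unchanged by stretching."""
    return (1.0 / k) ** 3

if __name__ == "__main__":
    t = Task(wcet=2.0, period=10.0)
    s = stretch(t, k=2.0)
    assert abs(utilization(t) - utilization(s)) < 1e-9   # load seen by the scheduler unchanged
    print(f"jobs per time unit: {1 / t.period:.2f} -> {1 / s.period:.2f} (service degraded)")
    print(f"relative energy per time unit: {relative_energy_per_time(2.0):.3f}")
```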
{"title":"Stretch: exploiting service level degradation for energy management in mixed-criticality systems","authors":"Amir Taherin, M. Salehi, A. Ejlali","doi":"10.1109/RTEST.2015.7369846","DOIUrl":"https://doi.org/10.1109/RTEST.2015.7369846","url":null,"abstract":"Mixed-criticality systems are introduced due to industrial interest to integrate different types of functionalities with varying importance into a common and shared computing platform. Low-energy consumption is vital in mixed-criticality systems due to their ever-increasing computation requirements and the fact that they are mostly supplied with batteries. In case when high-criticality tasks overrun in such systems, low-criticality tasks can be whether ignored or degraded to assure high-criticality tasks timeliness. We propose a novel energy management method (called Stretch), which lowers the energy consumption of mixed-criticality systems with the cost of degrading service level of low-criticality tasks. Our Stretch method extends both execution time and period of tasks while preserving their utilization. This leads to degrading the task's service level due to a period extension that is exploited by Stretch for energy management. Experiments show that Stretch provides 14% energy savings compared to the state-of-the-art with only 5% service level degradation in low-criticality tasks. The energy savings can be increased to 74% with the cost of 100% service level degradation in low-criticality tasks.","PeriodicalId":376270,"journal":{"name":"2015 CSI Symposium on Real-Time and Embedded Systems and Technologies (RTEST)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116359067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Thermal management of FPGA-based embedded systems at operating system level
Tayyebeh Hashamdar, Hamid Noori
Pub Date: 2015-10-01 | DOI: 10.1109/RTEST.2015.7369852
Field-Programmable Gate Arrays (FPGAs) are well-known platforms for implementing embedded systems owing to their configurability. The high operating temperature of FPGAs has recently become a serious issue because of their growing logic density, clock frequency, and complexity. In this work we propose, implement, and evaluate an embedded system with a thermal-aware operating system on a Virtex-5 FPGA. The system measures the device temperature using the System Monitor IP core configured in the operating system and keeps the temperature from violating a threshold by using the operating system's task-suspension feature. A resident task in the operating system regularly checks the device temperature and, when needed, performs thermal management by suspending the other active tasks for a specified time slot. If this time slot is chosen correctly, the method degrades performance by only 7% while the temperature threshold is never violated.
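The monitoring loop described above can be pictured roughly as follows. This Python sketch uses a fake sensor and stand-in suspend/resume handles (the real implementation reads the Virtex-5 System Monitor through the operating system and uses the RTOS task-suspension API); the threshold, slot length, and check period are assumed values.

```python
import random
import time

THRESHOLD_C = 75.0     # temperature threshold that must not be violated (assumed value)
SUSPEND_SLOT_S = 0.5   # length of the suspension time slot (assumed value)
CHECK_PERIOD_S = 0.1   # period of the resident monitoring task (assumed value)

class FakeSystemMonitor:
    """Stand-in for the on-die temperature sensor exposed by the System Monitor IP."""
    def __init__(self):
        self.temp_c = 60.0
    def read(self, cooling=False):
        delta = random.uniform(-2.0, -0.5) if cooling else random.uniform(-0.5, 1.5)
        self.temp_c += delta
        return self.temp_c

class TaskHandle:
    """Stand-in for an RTOS task handle with suspend/resume support."""
    def __init__(self, name):
        self.name = name
        self.suspended = False
    def suspend(self):
        self.suspended = True
    def resume(self):
        self.suspended = False

def thermal_monitor(sensor, tasks, iterations=20):
    """Resident task: check the temperature every period and throttle if needed."""
    for _ in range(iterations):
        if sensor.read() > THRESHOLD_C:
            for t in tasks:
                t.suspend()               # stop the heat-producing work
            time.sleep(SUSPEND_SLOT_S)    # idle for one time slot
            sensor.read(cooling=True)     # device cools while tasks are suspended
            for t in tasks:
                t.resume()
        time.sleep(CHECK_PERIOD_S)

if __name__ == "__main__":
    thermal_monitor(FakeSystemMonitor(), [TaskHandle("app0"), TaskHandle("app1")])
```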
{"title":"Thermal management of FPGA-based embedded systems at operating system level","authors":"Tayyebeh Hashamdar, Hamid Noori","doi":"10.1109/RTEST.2015.7369852","DOIUrl":"https://doi.org/10.1109/RTEST.2015.7369852","url":null,"abstract":"Field Programmable Gate Arrays (FPGAs) are well-known platforms for implementing embedded systems due to configurability. Recently, high temperature of FPGAs is becoming a serious issue due to their higher logic density, clock frequency, and complexity. In this work we propose, implement, and evaluate an embedded system with a thermal aware operating system on the virtex-5 FPGA. It measures the temperature of the device using the system monitor IP core configured in the operating system and manages the temperature, not to violate threshold, using the task suspension feature of the operating system. A resident task in the operating system regularly checks the temperature of the device and does thermal management if needed by suspending other active tasks for a specified time slot. If this time slot is correctly chosen, the method degrades performance only 7 percent while the temperature threshold is not violated.","PeriodicalId":376270,"journal":{"name":"2015 CSI Symposium on Real-Time and Embedded Systems and Technologies (RTEST)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126210707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A partial task replication algorithm for fault-tolerant FPGA-based soft-multiprocessors
Masoume Zabihi, Hamed Farbeh, S. Miremadi
Pub Date: 2015-10-01 | DOI: 10.1109/RTEST.2015.7369842
FPGA-based multiprocessors, referred to as soft multiprocessors, are increasingly used in embedded systems owing to the appealing features of SRAM-based FPGAs. More than 95% of the area of such FPGAs is occupied by the SRAM cells that hold the configuration bits. These SRAM cells are highly vulnerable to soft errors, threatening the reliability of the system. This paper proposes a fault-tolerant method to detect and correct errors in the configuration bits. The main idea of this method is to analyze the scheduled task graph and select a subset of tasks to be replicated on multiple processors, based on the utilization of the processors in different execution phases. To this end, 1) errors are detected by re-executing a subset of tasks on multiple processors and comparing their outputs; 2) errors are corrected by re-downloading the fault-free bitstream; and 3) execution is recovered from the last correct checkpoint. To evaluate the proposed method, FPGAs containing four and eight processors running randomly generated task graphs are evaluated. The simulation results show that the performance overhead of the proposed method for four and eight processors is 20% and 15%, respectively, whereas the corresponding overheads of the lockstep method are about 90% and 45%. Moreover, the area overhead of the proposed method is zero.
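A hedged sketch of the selection step described above: tasks are greedily replicated on another processor only if that processor has enough idle time in the task's execution phase, and errors are detected by comparing the outputs of the two copies. The data structures and the greedy policy below are illustrative assumptions, not the paper's exact algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class ScheduledTask:
    name: str
    phase: int        # execution phase in the static schedule
    length: float     # execution time
    processor: int    # processor the task is originally mapped to

@dataclass
class Processor:
    idle: dict = field(default_factory=dict)   # phase -> remaining idle time

def select_replicas(tasks, processors):
    """Greedily replicate each task on some other processor that still has enough
    idle time in the task's phase; returns {task name: replica processor id}."""
    replicas = {}
    for t in sorted(tasks, key=lambda t: -t.length):        # longest tasks first
        for p_id, proc in processors.items():
            if p_id == t.processor:
                continue                                    # never replicate onto itself
            if proc.idle.get(t.phase, 0.0) >= t.length:
                proc.idle[t.phase] -= t.length              # reserve the idle slot
                replicas[t.name] = p_id
                break
    return replicas

def detect_error(primary_output, replica_output) -> bool:
    """Output comparison; a mismatch would trigger bitstream re-download and
    rollback to the last correct checkpoint (not modeled here)."""
    return primary_output != replica_output

if __name__ == "__main__":
    tasks = [ScheduledTask("t0", 0, 3.0, 0),
             ScheduledTask("t1", 0, 2.0, 1),
             ScheduledTask("t2", 1, 4.0, 0)]
    procs = {0: Processor({0: 1.0, 1: 0.0}), 1: Processor({0: 4.0, 1: 5.0})}
    print(select_replicas(tasks, procs))    # {'t2': 1, 't0': 1}; t1 cannot be replicated
```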
{"title":"A partial task replication algorithm for fault- tolerant FPGA-based soft-multiprocessors","authors":"Masoume Zabihi, Hamed Farbeh, S. Miremadi","doi":"10.1109/RTEST.2015.7369842","DOIUrl":"https://doi.org/10.1109/RTEST.2015.7369842","url":null,"abstract":"FPGA-based multiprocessors, referred as softmultiprocessors, have an increasing use in embedded systems due to appealing SRAM features. More than 95% of such FPGAs are occupied by SRAM cells constructing the configuration bits. These SRAM cells are highly vulnerable to soft errors threatening the reliability of the system. This paper proposes a fault-tolerant method to detect and correct errors in the configuration bits. The main of this method is to analyze the scheduled task graph and select a subset of tasks to be replicated in multiple processors based on the utilization of the processors in different execution phases. To this end, 1) errors are detected by re-executing a subset of tasks in multiple processors and comparing their output; 2) errors are corrected by re-downloading the fault-free bitstream; 3) errors are recovered from correct checkpoints. To evaluate the proposed method, a FPGA containing four and eight processors running randomly generated task graphs is evaluated. The simulation results show that the performance overhead of the proposed method for four and eight processors is 20% and 15%, respectively. These values for lockstep method are about 90% and 45%, respectively. Moreover, the area overhead of the proposed method is zero.","PeriodicalId":376270,"journal":{"name":"2015 CSI Symposium on Real-Time and Embedded Systems and Technologies (RTEST)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121160300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fault-tolerant architecture and CAD algorithm for field-programmable pin-constrained digital microfluidic biochips
Alireza Abdoli, A. Jahanian
Pub Date: 2015-10-01 | DOI: 10.1109/RTEST.2015.7369844
The advent of digital microfluidic embedded biochips has revolutionized the execution of laboratory procedures. Digital microfluidic biochips provide general-purpose assay execution along with several advantages over traditional benchtop chemistry procedures, including automation, miniaturization, and lower cost. However, these embedded systems are vulnerable to various types of faults that can adversely affect the integrity of the assay outcome. This paper addresses the fault tolerance of field-programmable pin-constrained digital microfluidic biochips from several aspects, evaluating the effects of faulty mixing modules, faulty Storage/Split/Detection (SSD) modules, and faulty regions within routing paths. The simulation results show that, with faulty mixing modules, operation times are retained, and a 5% advantage in routing times contributes a 1% improvement in total bioassay execution time; considering the overheads incurred by faulty mixing modules, there is no overhead in operation times and a 20% overhead in routing times, which in turn imposes a 2% overhead on total bioassay execution time. With faulty SSD modules, the operation time remains the same, but a 19% advantage in routing times improves the total bioassay execution time by 2%; regarding the overheads incurred by faulty SSD modules, despite a 4% overhead in routing times there is no overhead in total bioassay execution time.
{"title":"Fault-tolerant architecture and CAD algorithm for field-programmable pin-constrained digital microfluidic biochips","authors":"Alireza Abdoli, A. Jahanian","doi":"10.1109/RTEST.2015.7369844","DOIUrl":"https://doi.org/10.1109/RTEST.2015.7369844","url":null,"abstract":"Advent of digital microfluidic embedded biochips has revolutionized accomplishment of laboratory procedures. Digital microfluidic biochips provide general-purpose assay execution along with several advantages compared with traditional benchtop chemistry procedures; advantages of these modern devices encompass automation, miniaturization and lower costs. However these embedded systems are vulnerable to various types of faults which can adversely affect the integrity of assay execution outcome. This paper addresses fault tolerance of field-programmable pin-constrained digital microfluidic biochips from various aspects; evaluating effects of faulty mix modules, faulty Storage / Split / Detection (SSD) modules and faulty regions within routing paths. The simulation results show that in case of faulty mixing modules the operation times were retained however the 5 % advantage in routing times contributes to 1 % improvement of total bioassay execution time; considering overheads incurred by faulty mixing modules, the results show no overhead in operation times and 20 % overhead in routing times which in turn incur 2 % overhead on total bioassay execution time. In case of faulty SSD modules the operation time remains the same however as a result of 19 % advantage in routing times the total bioassay execution time shows 2 % improvement; regarding the overheads incurred by faulty SSD modules it is observed that despite the 4 % overhead in routing times there is no overhead with the total bioassay execution time.","PeriodicalId":376270,"journal":{"name":"2015 CSI Symposium on Real-Time and Embedded Systems and Technologies (RTEST)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121096161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HDL based simulation framework for a DPA secured embedded system
Danial Kamran, A. Marjovi, A. Fanian, M. Safayani
Pub Date: 2015-10-01 | DOI: 10.1109/RTEST.2015.7369841
Side-channel analysis (SCA) attacks remain a serious threat to the security of embedded systems. Because every SCA attack or countermeasure must be implemented before it can be evaluated, much of the time and cost of this process goes into providing and calibrating high-resolution measurement tools and implementing the proposed design on an ASIC or target platform. In this paper, we introduce a novel simulation platform for evaluating power-based SCA attacks and countermeasures. We use Synopsys power analysis tools to simulate a processor and mount a successful Differential Power Analysis (DPA) attack on it. We then focus on simulating a common countermeasure against DPA attacks called Random Delay Insertion (RDI) and simulate a resistant processor that uses this policy. Next, we show how the proposed framework helps extract the power characteristics of the simulated processor and perform power-analysis-based reverse engineering on it. We use this approach to detect DPA-related assembly instructions being executed on the processor and perform a DPA attack on the RDI-secured processor. Experiments were carried out on a simulated PicoBlaze processor.
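To give a flavor of the attack class such a framework evaluates, the following self-contained sketch recovers a secret byte from simulated single-sample power traces by correlating predicted and measured leakage (a simplified correlation-based variant rather than the classic single-bit difference of means). The leakage model, noise level, and all names are illustrative assumptions; the paper's flow instead drives Synopsys power analysis on HDL.

```python
import numpy as np

rng = np.random.default_rng(1)
SECRET_KEY = 0x5A          # the byte the attack should recover
N_TRACES = 2000

def hamming_weight(x: np.ndarray) -> np.ndarray:
    return np.unpackbits(x[:, None], axis=1).sum(axis=1)

# Simulated measurement campaign: one power sample per "encryption", leaking the
# Hamming weight of (plaintext XOR key) plus Gaussian noise.
plaintexts = rng.integers(0, 256, size=N_TRACES, dtype=np.uint8)
traces = hamming_weight(plaintexts ^ SECRET_KEY) + rng.normal(0.0, 1.0, N_TRACES)

def recover_key(plaintexts: np.ndarray, traces: np.ndarray) -> int:
    """Return the key guess whose predicted leakage correlates best with the traces."""
    scores = []
    for guess in range(256):
        predicted = hamming_weight(plaintexts ^ np.uint8(guess))
        scores.append(np.corrcoef(predicted, traces)[0, 1])
    return int(np.argmax(scores))        # the correct key gives the highest correlation

if __name__ == "__main__":
    print(hex(recover_key(plaintexts, traces)))   # prints 0x5a
```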
{"title":"HDL based simulation framework for a DPA secured embedded system","authors":"Danial Kamran, A. Marjovi, A. Fanian, M. Safayani","doi":"10.1109/RTEST.2015.7369841","DOIUrl":"https://doi.org/10.1109/RTEST.2015.7369841","url":null,"abstract":"Side Channel Analysis (SCA) are still harmful threats against security of embedded systems. Due to the fact that every kind of SCA attack or countermeasure against it needs to be implemented before evaluation, a huge amount of time and cost of this process is paid for providing high resolution measurement tools, calibrating them and also implementation of proposed design on ASIC or target platform. In this paper, we have introduced a novel simulation platform for evaluation of power based SCA attacks and countermeasures. We have used Synopsys power analysis tools in order to simulate a processor and implement a successful Differential Power Analysis (DPA) attack on it. Then we focused on the simulation of a common countermeasure against DPA attacks called Random Delay Insertion (RDI). We simulated a resistant processor that uses this policy. In the next step we showed how the proposed framework can help to extract power characteristics of the simulated processor and implement power analysis based reverse engineering on it. We used this approach in order to detect DPA related assembly instructions being executed on the processor and performed a DPA attack on the RDI secured processor. Experiments were carried out on a Pico-blaze simulated processor.","PeriodicalId":376270,"journal":{"name":"2015 CSI Symposium on Real-Time and Embedded Systems and Technologies (RTEST)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128801627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Offline replication and online energy management for hard real-time multicore systems
Farimah Poursafaei, Sepideh Safari, Mohsen Ansari, M. Salehi, A. Ejlali
Pub Date: 2015-10-01 | DOI: 10.1109/RTEST.2015.7369847
For real-time embedded systems, energy consumption and reliability are two major design concerns. We consider the problem of minimizing the energy consumption of a set of periodic real-time applications running on a multi-core system while satisfying given reliability targets. Multi-core platforms are well suited to task replication as a means of achieving given reliability targets; however, careless task replication may lead to significant energy overhead. Therefore, to provide a given reliability level with reduced energy overhead, the level of replication as well as the voltage and frequency assigned to each task must be determined carefully. The goal of this paper is to find the replication level, voltage and frequency assignment, and core allocation for each task at design time so as to achieve a given reliability level while minimizing energy consumption. At run time, we also detect tasks that have finished correctly and cancel the execution of their replicas to achieve further energy savings. We evaluated the effectiveness of our scheme through extensive simulations. The results show that our scheme provides up to 50% (on average 47%) energy savings while satisfying a broad range of reliability targets.
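The online portion of such a scheme can be pictured with the following sketch: when one copy of a task completes and passes its correctness check, its not-yet-started replicas are cancelled so their energy is never spent. The event handling and bookkeeping below are hypothetical stand-ins for the authors' runtime mechanism; the offline decisions (replication level, voltage/frequency, core allocation) are assumed to have been made already.

```python
from dataclasses import dataclass, field

@dataclass
class Job:
    task: str
    core: int
    energy: float              # energy this copy would consume if executed
    finished: bool = False
    cancelled: bool = False

@dataclass
class RuntimeManager:
    replicas: dict = field(default_factory=dict)   # task name -> list of Job copies
    saved_energy: float = 0.0

    def register(self, task: str, copies: list):
        self.replicas[task] = copies

    def on_completion(self, job: Job, passed_check: bool):
        """Called when one copy finishes; cancel its siblings if it was correct."""
        job.finished = True
        if not passed_check:
            return                                   # keep replicas for fault tolerance
        for sibling in self.replicas[job.task]:
            if sibling is not job and not sibling.finished and not sibling.cancelled:
                sibling.cancelled = True
                self.saved_energy += sibling.energy  # this replica never executes

if __name__ == "__main__":
    mgr = RuntimeManager()
    copies = [Job("t1", core=0, energy=5.0), Job("t1", core=1, energy=5.0)]
    mgr.register("t1", copies)
    mgr.on_completion(copies[0], passed_check=True)
    print(mgr.saved_energy)    # 5.0: the second copy was cancelled before running
```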
{"title":"Offline replication and online energy management for hard real-time multicore systems","authors":"Farimah Poursafaei, Sepideh Safari, Mohsen Ansari, M. Salehi, A. Ejlali","doi":"10.1109/RTEST.2015.7369847","DOIUrl":"https://doi.org/10.1109/RTEST.2015.7369847","url":null,"abstract":"For real-time embedded systems, energy consumption and reliability are two major design concerns. We consider the problem of minimizing the energy consumption of a set of periodic real-time applications when running on a multi-core system while satisfying given reliability targets. Multi-core platforms provide a good capability for task replication in order to achieve given reliability targets. However, careless task replication may lead to significant energy overhead. Therefore, to provide a given reliability level with a reduced energy overhead, the level of replication and also the voltage and frequency assigned to each task should be determined cautiously. The goal of this paper is to find the level of replication, voltage and frequency assignment, and core allocation for each task at design time, in order to achieve a given reliability level while minimizing the energy consumption. Also, at run-time, we find the tasks that have finished correctly and cancel the execution of their replicas to achieve even more energy saving. We evaluated the effectiveness of our scheme through extensive simulations. The results show that our scheme provides up to 50% (in average by 47%) energy saving while satisfying a broad range of reliability targets.","PeriodicalId":376270,"journal":{"name":"2015 CSI Symposium on Real-Time and Embedded Systems and Technologies (RTEST)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132422203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Response-time minimization in soft real-time systems with temperature-affected reliability constraint
Ahad Mozafari Fard, M. Ghasemi, M. Kargahi
Pub Date: 2015-10-01 | DOI: 10.1109/RTEST.2015.7369850
With the continuous shrinking of technology feature sizes, chip temperatures, and consequently the vulnerability to temperature-induced errors, have increased. To control these issues, temperature and reliability constraints are imposed, which confines performance. This paper proposes a proactive approach that uses thermal throttling to guarantee the failure rate of running tasks while minimizing the corresponding response times. Jobs are executed according to an as-soon-as-possible (ASAP) policy, and the processor temperature is controlled based on the vulnerability factor of the running task. The optimality of the method under a first-come first-served (FCFS) task scheduling policy is also proven. Simulation results reveal that the proposed method reduces the job miss ratio and response times by at least 17% and 16% on average, respectively.
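One way to picture how a task's vulnerability could translate into a throttling decision is sketched below: assuming the commonly used exponential fault-rate-versus-temperature model lambda(T) = lambda0 * 10^((T - T0)/delta) and approximating a job's failure probability as lambda(T) * C, a per-task temperature cap follows from its failure-rate budget. The model choice and all constants are illustrative assumptions, not taken from the paper.

```python
import math

def temperature_cap(target_failure_prob: float, exec_time_s: float,
                    lambda0: float = 1e-9, t0_c: float = 45.0,
                    delta_c: float = 10.0) -> float:
    """Highest steady temperature (Celsius) at which lambda(T) * C stays within
    the per-job failure-probability budget, under the assumed exponential model."""
    return t0_c + delta_c * math.log10(target_failure_prob / (lambda0 * exec_time_s))

if __name__ == "__main__":
    # A more vulnerable job (longer execution, tighter budget) gets a lower cap
    # and is therefore throttled earlier than a less vulnerable one.
    print(round(temperature_cap(1e-7, exec_time_s=0.10), 1))   # 75.0
    print(round(temperature_cap(1e-6, exec_time_s=0.01), 1))   # 95.0
```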
Evaluating the complexity and impacts of attacks on cyber-physical systems
Hamed Orojloo, M. A. Azgomi
Pub Date: 2015-10-01 | DOI: 10.1109/RTEST.2015.7369840
In this paper, a new method for the quantitative evaluation of the security of cyber-physical systems (CPSs) is proposed. The proposed method models the different classes of adversarial attacks against CPSs, including cross-domain attacks, i.e., cyber-to-cyber and cyber-to-physical attacks. It also takes the secondary consequences of attacks on CPSs into consideration. The intrusion process of attackers is modeled using attack graphs, and the consequences of attacks are estimated using a process model. The security attributes and the special parameters involved in the security analysis of CPSs are identified and considered. The quantitative evaluation is carried out in terms of the probability of attacks, the time-to-shutdown of the system, and security risks. The proposed model is validated through a case study in which it is applied to a boiling-water power plant and suitable security measures are estimated.
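The flavor of such a quantitative evaluation can be illustrated with a toy attack graph: each cyber or physical step carries a success probability and an expected effort time, and each path from the entry point to the physical impact yields an overall attack probability and a time-to-shutdown. The graph, numbers, and independence assumption below are illustrative only, not the paper's model.

```python
# edges: (source, target) -> (step success probability, expected effort in hours)
EDGES = {
    ("internet", "hmi"): (0.6, 4.0),    # compromise an operator workstation
    ("internet", "vpn"): (0.3, 10.0),   # exploit the remote-access gateway
    ("hmi", "plc"):      (0.5, 6.0),    # pivot from the HMI to the controller
    ("vpn", "plc"):      (0.7, 3.0),
    ("plc", "shutdown"): (0.8, 1.0),    # cyber-to-physical step
}

def paths(src, dst, prefix=()):
    """Enumerate simple paths through the attack graph."""
    prefix = prefix + (src,)
    if src == dst:
        yield prefix
        return
    for (a, b) in EDGES:
        if a == src and b not in prefix:
            yield from paths(b, dst, prefix)

def score(path):
    """Overall success probability and time-to-shutdown of one attack path."""
    prob, hours = 1.0, 0.0
    for step in zip(path, path[1:]):
        p, t = EDGES[step]
        prob *= p          # steps assumed independent
        hours += t
    return prob, hours

if __name__ == "__main__":
    for path in paths("internet", "shutdown"):
        prob, hours = score(path)
        print(" -> ".join(path), f" P={prob:.2f}  time-to-shutdown={hours:.0f}h")
```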
{"title":"Evaluating the complexity and impacts of attacks on cyber-physical systems","authors":"Hamed Orojloo, M. A. Azgomi","doi":"10.1109/RTEST.2015.7369840","DOIUrl":"https://doi.org/10.1109/RTEST.2015.7369840","url":null,"abstract":"In this paper, a new method for quantitative evaluation of the security of cyber-physical systems (CPSs) is proposed. The proposed method models the different classes of adversarial attacks against CPSs, including cross-domain attacks, i.e., cyber-to-cyber and cyber-to-physical attacks. It also takes the secondary consequences of attacks on CPSs into consideration. The intrusion process of attackers has been modeled using attack graph and the consequence estimation process of the attack has been investigated using process model. The security attributes and the special parameters involved in the security analysis of CPSs, have been identified and considered. The quantitative evaluation has been done using the probability of attacks, time-to-shutdown of the system and security risks. The validation phase of the proposed model is performed as a case study by applying it to a boiling water power plant and estimating the suitable security measures.","PeriodicalId":376270,"journal":{"name":"2015 CSI Symposium on Real-Time and Embedded Systems and Technologies (RTEST)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127393509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A memory-centric approach to enable timing-predictability within embedded many-core accelerators
P. Burgio, A. Marongiu, P. Valente, M. Bertogna
Pub Date: 2015-10-01 | DOI: 10.1109/RTEST.2015.7369851
There is increasing interest among real-time systems architects in multi- and many-core accelerated platforms. The main obstacle to the adoption of such devices in industrial settings is the difficulty of tightly estimating the multiple interferences that may arise among the parallel components of the system, in particular concurrent accesses to shared memory and communication resources. Existing worst-case execution time analyses are extremely pessimistic, especially when applied to systems composed of hundreds to thousands of cores, which significantly limits the potential for adopting these platforms in real-time systems. In this paper, we study how the Predictable Execution Model (PREM), a memory-aware approach to enabling timing predictability in real-time systems, can be successfully adopted on multi- and many-core heterogeneous platforms. Using a state-of-the-art multi-core platform as a testbed, we validate that it is possible to obtain an order-of-magnitude improvement in the WCET bounds of parallel applications if data movements are adequately orchestrated in accordance with PREM. We identify which system parameters most affect the significant performance opportunities offered by this approach, both on average and in the worst case, taking a first step towards predictable many-core systems.
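To make the PREM idea concrete, the toy scheduler below splits each task interval into a memory phase (prefetch into local memory) and a compute phase that runs entirely from local memory, and serializes only the memory phases on the shared interconnect while compute phases overlap freely. It illustrates the model under simplifying assumptions and is not the authors' runtime.

```python
from dataclasses import dataclass

@dataclass
class PremInterval:
    core: int
    mem_time: float      # length of the memory (prefetch/writeback) phase
    comp_time: float     # length of the local-compute phase

def schedule(intervals):
    """Return (core, mem_start, comp_start, finish) with memory phases serialized."""
    bus_free = 0.0                      # shared memory is used by one core at a time
    core_free = {}
    timeline = []
    for iv in intervals:
        mem_start = max(bus_free, core_free.get(iv.core, 0.0))
        comp_start = mem_start + iv.mem_time
        finish = comp_start + iv.comp_time
        bus_free = comp_start           # the bus is released once the prefetch is done
        core_free[iv.core] = finish
        timeline.append((iv.core, mem_start, comp_start, finish))
    return timeline

if __name__ == "__main__":
    ivs = [PremInterval(0, 1.0, 4.0), PremInterval(1, 1.0, 4.0), PremInterval(2, 1.0, 4.0)]
    for row in schedule(ivs):
        print("core %d: mem @%.1f, compute @%.1f, done @%.1f" % row)
```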
{"title":"A memory-centric approach to enable timing-predictability within embedded many-core accelerators","authors":"P. Burgio, A. Marongiu, P. Valente, M. Bertogna","doi":"10.1109/RTEST.2015.7369851","DOIUrl":"https://doi.org/10.1109/RTEST.2015.7369851","url":null,"abstract":"There is an increasing interest among real-time systems architects for multi- and many-core accelerated platforms. The main obstacle towards the adoption of such devices within industrial settings is related to the difficulties in tightly estimating the multiple interferences that may arise among the parallel components of the system. This in particular concerns concurrent accesses to shared memory and communication resources. Existing worst-case execution time analyses are extremely pessimistic, especially when adopted for systems composed of hundreds-tothousands of cores. This significantly limits the potential for the adoption of these platforms in real-time systems. In this paper, we study how the predictable execution model (PREM), a memory-aware approach to enable timing-predictability in realtime systems, can be successfully adopted on multi- and manycore heterogeneous platforms. Using a state-of-the-art multi-core platform as a testbed, we validate that it is possible to obtain an order-of-magnitude improvement in the WCET bounds of parallel applications, if data movements are adequately orchestrated in accordance with PREM. We identify which system parameters mostly affect the tremendous performance opportunities offered by this approach, both on average and in the worst case, moving the first step towards predictable many-core systems.","PeriodicalId":376270,"journal":{"name":"2015 CSI Symposium on Real-Time and Embedded Systems and Technologies (RTEST)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132088947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Energy-efficient scheduling for stability-guaranteed embedded control systems
Alireza S. Abyaneh, M. Kargahi
Pub Date: 2015-10-01 | DOI: 10.1109/RTEST.2015.7369848
Stability, which depends heavily on controller delays, is the main measure of performance in embedded control systems. With the increased demand for resources in such systems, energy consumption has become an important issue, especially in systems with limited energy sources such as batteries. Accordingly, in addition to the traditional temporal requirements of these systems, stability and economical energy usage are further demands on the design of embedded control systems. Dynamic voltage and frequency scaling (DVFS) is commonly used to address the latter demand; however, since this technique increases controller delay and jitter, it may negatively impact system stability. This paper addresses the problem of control-task priority assignment as well as task-specific processor voltage/frequency assignment such that stability is guaranteed and energy consumption is reduced. The proposed idea accounts for the variability of task execution times and increases the processor frequency only when a task's execution time exceeds some threshold. Experimental results show the energy efficiency of the proposed method for embedded control systems.
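A worked sketch of the threshold idea, under the assumption of a single low-to-maximum frequency switch per job (parameter values are made up): the switch point is chosen so that even a worst-case job still meets its deadline, while typical jobs finish entirely at the low frequency.

```python
def switch_point_cycles(wcet_cycles: float, deadline_s: float,
                        f_low_hz: float, f_max_hz: float) -> float:
    """Largest number of cycles that may run at f_low such that executing the
    remaining (wcet - c) cycles at f_max still meets the deadline."""
    c = (deadline_s - wcet_cycles / f_max_hz) / (1.0 / f_low_hz - 1.0 / f_max_hz)
    return max(0.0, min(c, wcet_cycles))

def response_time(actual_cycles: float, switch_c: float,
                  f_low_hz: float, f_max_hz: float) -> float:
    """Job latency under the two-speed policy."""
    if actual_cycles <= switch_c:
        return actual_cycles / f_low_hz
    return switch_c / f_low_hz + (actual_cycles - switch_c) / f_max_hz

if __name__ == "__main__":
    W, D, f_lo, f_hi = 12e6, 0.02, 500e6, 1000e6    # 12 Mcycles WCET, 20 ms deadline
    c = switch_point_cycles(W, D, f_lo, f_hi)
    print(f"switch after {c / 1e6:.1f} Mcycles")                      # 8.0 Mcycles
    print("worst-case latency:", response_time(W, c, f_lo, f_hi))     # 0.02 s, deadline met
    print("typical-job latency:", response_time(3e6, c, f_lo, f_hi))  # ran at f_low only
```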
{"title":"Energy-efficient scheduling for stability-guaranteed embedded control systems","authors":"Alireza S. Abyaneh, M. Kargahi","doi":"10.1109/RTEST.2015.7369848","DOIUrl":"https://doi.org/10.1109/RTEST.2015.7369848","url":null,"abstract":"Stability, which is heavily dependent on the controller delays, is the main measure of performance in embedded control systems. With the increased demand for resources in such systems, energy consumption has been converted to an important issue, especially in systems with limited energy sources like batteries. Accordingly, in addition to the traditional temporal requirements in these systems, stability and economic energy usage are further demands for the design of embedded control systems. For the latter demand, dynamic voltage and frequency scaling (DVFS) is too usual, however, as this technique increases the controller delay and jitter, it may negatively impact the system stability. This paper addresses the problem of control task priority assignment as well as task-specific processor voltage/ frequency assignment such that the stability be guaranteed and the energy consumption be reduced. The proposed idea considers the task execution-time variability and increases the processor frequency only when the task execution-time exceeds some threshold. Experimental results show energy-efficiency of the proposed method for embedded control systems.","PeriodicalId":376270,"journal":{"name":"2015 CSI Symposium on Real-Time and Embedded Systems and Technologies (RTEST)","volume":"167 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132330596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}