DVFS Virtualization for Energy Minimization of Mixed-Criticality Dual-OS Platforms
Pub Date: 2022-08-01 · DOI: 10.1109/RTCSA55878.2022.00020 · Pages: 128-137
Takumi Komori, Yutaka Masuda, T. Ishihara
A dual-OS platform can efficiently implement emerging mixed-criticality systems by consolidating a real-time OS (RTOS) and a general-purpose OS (GPOS). Although the dual-OS platform is attracting increasing attention, it often sacrifices energy efficiency in the GPOS in order to guarantee real-time responses of the RTOS. This paper proposes an energy minimization method called DVFS virtualization, which allows multiple DVFS policies to run concurrently, one dedicated to the RTOS and one to the GPOS. An experimental evaluation using a commercial processor showed that the proposed hardware can change the supply voltage within 500 ns and, in the best case, reduces the energy consumption of typical applications by 60% compared to conventional dual-OS platforms.
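The abstract leaves the virtualization mechanism implicit; the sketch below is a hypothetical behavioral model of the core idea, per-guest DVFS policies whose operating points the platform restores on every world switch, so the GPOS policy can never hold the RTOS at a low-power point. All class names, operating points, and the print-based "apply" are illustrative assumptions, not the paper's hardware interface.

```python
# Illustrative model of per-OS DVFS virtualization (not the paper's RTL).
# Each guest OS runs its own DVFS policy; the arbiter restores that OS's
# operating point whenever the hypervisor switches worlds.

from dataclasses import dataclass

@dataclass
class OperatingPoint:
    freq_mhz: int
    voltage_mv: int

class VirtualDVFS:
    """Keeps one operating point per guest and applies it on world switch."""

    def __init__(self):
        self.points = {"rtos": OperatingPoint(400, 1100),
                       "gpos": OperatingPoint(100, 800)}
        self.running = "gpos"

    def request(self, guest: str, point: OperatingPoint) -> None:
        # A guest's DVFS policy updates only its own operating point.
        self.points[guest] = point
        if guest == self.running:
            self._apply(point)

    def world_switch(self, guest: str) -> None:
        # The RTOS always resumes at its own point, so GPOS energy
        # management never delays real-time work.
        self.running = guest
        self._apply(self.points[guest])

    def _apply(self, point: OperatingPoint) -> None:
        print(f"set {point.voltage_mv} mV / {point.freq_mhz} MHz")

vdvfs = VirtualDVFS()
vdvfs.request("gpos", OperatingPoint(50, 700))   # GPOS drops to low power
vdvfs.world_switch("rtos")                       # RTOS event arrives
```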
{"title":"DVFS Virtualization for Energy Minimization of Mixed-Criticality Dual-OS Platforms","authors":"Takumi Komori, Yutaka Masuda, T. Ishihara","doi":"10.1109/RTCSA55878.2022.00020","DOIUrl":"https://doi.org/10.1109/RTCSA55878.2022.00020","url":null,"abstract":"A dual-OS platform can efficiently implement emerging mixed-criticality systems by consolidating a real-time OS (RTOS) and a general-purpose OS (GPOS). Although the dual-OS platform attracts increasing attention, it often suffers from energy inefficiency in the GPOS for guaranteeing real-time responses of the RTOS. This paper proposes an energy minimization method called DVFS virtualization, which allows running multiple DVFS policies dedicated to the RTOS and GPOS, respectively. The experimental evaluation using a commercial processor showed that the proposed hardware could change the supply voltage within 500 ns and reduce the energy consumption of typical applications by 60 % in the best case compared to conventional dual-OS platforms.","PeriodicalId":38446,"journal":{"name":"International Journal of Embedded and Real-Time Communication Systems (IJERTCS)","volume":"13 1","pages":"128-137"},"PeriodicalIF":0.7,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86663196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Agnostic Hardware-Accelerated Operating System for Low-End IoT
Pub Date: 2022-08-01 · DOI: 10.1109/RTCSA55878.2022.00009 · Pages: 21-30
Miguel Silva, T. Gomes, S. Pinto
There is increasing pressure to optimize low-end Internet of Things (IoT) devices. An ever-growing number of requirements and constraints pushes towards maximizing performance and real-time behavior while simultaneously minimizing power consumption, form factor, and memory footprint. This has motivated the adoption of Field-Programmable Gate Array (FPGA) technology to accelerate computation-intensive workloads in hardware. However, despite the ongoing trend of migrating application-level tasks to hardware, the offload of system software such as operating system (OS) services has so far received little attention. This paper presents CHAMELIOT, a framework for FPGA-based IoT platforms that provides OS-agnostic hardware acceleration of OS services by leveraging RISC-V technology. CHAMELIOT allows developers to run unmodified applications on a set of well-established IoT OSes; currently, the framework supports RIOT, Zephyr, and FreeRTOS. The evaluation showed that latency and determinism can be enhanced by up to 10x while the system's performance can be increased to nearly 200%. CHAMELIOT will be open-sourced.
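As a rough illustration of what "agnostic" acceleration could look like, the following hypothetical sketch models a dispatch layer in which per-OS bindings funnel service calls into a common interface that is served in hardware when the FPGA implements the service and in software otherwise. The service IDs, classes, and fallbacks are invented for illustration; CHAMELIOT's actual interface is not described in the abstract.

```python
# Hypothetical sketch of an OS-agnostic service dispatch layer in the
# spirit of CHAMELIOT: one API behind RIOT/Zephyr/FreeRTOS bindings,
# with each service served in hardware when the fabric implements it.

SCHED_PICK = 0      # service IDs (illustrative)
QUEUE_PUSH = 1

class HwServiceTable:
    """Models memory-mapped service registers on the FPGA fabric."""
    def __init__(self, implemented):
        self.implemented = set(implemented)

    def call(self, service, *args):
        print(f"HW service {service} args={args}")
        return 0

class AgnosticKernelShim:
    def __init__(self, hw, sw_fallbacks):
        self.hw = hw
        self.sw = sw_fallbacks

    def service(self, sid, *args):
        # Unmodified guest-OS code lands here via per-OS bindings;
        # hardware is used transparently when available.
        if sid in self.hw.implemented:
            return self.hw.call(sid, *args)
        return self.sw[sid](*args)

shim = AgnosticKernelShim(HwServiceTable({SCHED_PICK}),
                          {QUEUE_PUSH: lambda q, item: q.append(item)})
shim.service(SCHED_PICK)          # accelerated in fabric
shim.service(QUEUE_PUSH, [], 42)  # software fallback
```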
{"title":"Agnostic Hardware-Accelerated Operating System for Low-End IoT","authors":"Miguel Silva, T. Gomes, S. Pinto","doi":"10.1109/RTCSA55878.2022.00009","DOIUrl":"https://doi.org/10.1109/RTCSA55878.2022.00009","url":null,"abstract":"There is increasing pressure to optimize Internet of things (IoT) low-end devices. The ever-growing number of requirements and constraints is pushing towards maximizing performance and real-time, but simultaneously minimizing power consumption, form factor, and memory footprint. This has motivated the adoption of Field-Programmable Gate Array (FPGA) technology to accelerate computing-intensive workloads in hardware. However, and despite the ongoing trend of migrating application-level tasks to hardware, recently, the offload of system software such as operating system (OS) services has received little attention. This paper presents CHAMELIOT, a framework for FPGA-based IoT platforms that provides agnostic hardware acceleration to OS services by leveraging RISC-V technology. CHAMELIOT allows for developers to run unmodified applications in a set of well-established IoT OSes. Currently, the framework has support for RIOT, Zephyr, and FreeRTOS. The evaluation showed that latency and determinism can be enhanced up to 10x while the system’s performance can be increased to nearly 200%. CHAMELIOT will be open-sourced.","PeriodicalId":38446,"journal":{"name":"International Journal of Embedded and Real-Time Communication Systems (IJERTCS)","volume":"284 1 1","pages":"21-30"},"PeriodicalIF":0.7,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72900717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Controlling High-Performance Platform Uncertainties with Timing Diversity
Pub Date: 2022-08-01 · DOI: 10.1109/RTCSA55878.2022.00029 · Pages: 212-219
Robin Hapka, Anika Christmann, Rolf Ernst
Autonomous mobile systems combine high performance requirements with safety criticality. High-performance hardware/software architectures, however, exhibit far more complex runtime behavior than traditional microcontroller architectures. Such architectures challenge traditional worst-case design, which assumes a formally analyzable, or at least deterministic, worst-case response time (WCRT) that can be reasonably bounded. In practice, they exhibit rare but substantial worst-case outliers, caused not only by the application itself but also by the many dynamic influences of the software architecture and platform control. Probabilistic methods can capture such outliers, but they are only effective if the outlier probability is sufficiently low and if the methods cover dynamic platform timing. As its main contribution, this paper exploits platform-induced timing variability rather than trying to mitigate it. Assuming the dual modular redundancy (DMR) implementation typically deployed in safety-critical systems, it introduces the concept of Timing Diversity, where rare outliers in one of the two channels are masked by the other channel with sufficiently high probability. The paper uses a convolutional neural network (CNN) example in different parameter settings, running on a Linux-operated multi-core platform with typical dynamic control, to investigate the proposed concept. The experiments demonstrate that Timing Diversity can lead to substantially higher reliability; alternatively, the approach permits a reduction of the system WCRT at the same reliability level.
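The probabilistic intuition behind Timing Diversity is that, if worst-case outliers of the two DMR channels are roughly independent, the probability that both channels miss a deadline is about the square of the single-channel miss probability. A minimal Monte Carlo sketch, assuming an invented heavy-tailed latency distribution, makes this concrete:

```python
# Monte Carlo sketch of Timing Diversity on a DMR pair (assumes
# independent outliers; the distributions are illustrative, not measured).
import random

def channel_latency():
    base = random.gauss(10.0, 1.0)          # nominal execution time (ms)
    if random.random() < 0.01:              # rare platform-induced outlier
        base += random.expovariate(0.1)
    return base

deadline, n = 20.0, 200_000
single_miss = sum(channel_latency() > deadline for _ in range(n)) / n
# The pair's result is available when the faster channel finishes, so an
# outlier in one channel is masked unless the other channel also misses.
dmr_miss = sum(min(channel_latency(), channel_latency()) > deadline
               for _ in range(n)) / n
print(f"single channel miss rate: {single_miss:.5f}")
print(f"timing-diverse DMR miss:  {dmr_miss:.7f}  (~p^2 if independent)")
```

With these made-up parameters the single channel misses a few times per thousand runs, while the pair misses orders of magnitude less often, which is the effect the paper exploits.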
{"title":"Controlling High-Performance Platform Uncertainties with Timing Diversity","authors":"Robin Hapka, Anika Christmann, Rolf Ernst","doi":"10.1109/RTCSA55878.2022.00029","DOIUrl":"https://doi.org/10.1109/RTCSA55878.2022.00029","url":null,"abstract":"Autonomous mobile systems combine high performance requirements with safety criticality. High performance hardware/software architectures, however, expose a far more complex runtime behavior than traditional microcontroller architectures. Such high-performance architectures challenge traditional worst-case design that assumes a formally analyzable or at least deterministic worst-case response time (WCRT) that can be reasonably bounded. However, such architectures expose rare but substantial worst-case outliers, which are not only caused by the application itself, but also by the many dynamic influences of software architecture and platform control. Probabilistic methods can capture such outliers, but are only effective, if the outlier probability is sufficiently low and if the methods cover dynamic platform timing. As a main contribution, this paper exploits platform induced timing variety rather than trying to mitigate it. Assuming the typical redundant dual modular redundancy (DMR) implementation that is deployed in safety-critical systems, it introduces the concept of Timing Diversity, where rare outliers in one of the two channels are masked by the other channel with a sufficiently high probability. The paper uses a convolutional neural network (CNN) example in different parameter settings running on Linux operated multi-core platform with typical dynamic control to investigate the proposed concept. The experiments demonstrate the potential of Timing Diversity in leading to substantially higher reliability. Alternatively, the approach permits a reduction of the system WCRT at the same reliability level.","PeriodicalId":38446,"journal":{"name":"International Journal of Embedded and Real-Time Communication Systems (IJERTCS)","volume":"34 1","pages":"212-219"},"PeriodicalIF":0.7,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73377142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
QoS-MAN: A Novel QoS Mapping Algorithm for TSN-5G Flows
Pub Date: 2022-08-01 · DOI: 10.1109/RTCSA55878.2022.00030 · Pages: 220-227
Zenepe Satka, M. Ashjaei, H. Fotouhi, M. Daneshtalab, Mikael Sjödin, S. Mubeen
Integrating wired Ethernet networks, such as Time-Sensitive Networks (TSN), with a 5G cellular network requires a flow management technique that efficiently maps TSN traffic to 5G Quality-of-Service (QoS) flows. 3GPP Release 16 provides a set of predefined QoS characteristics, such as priority level, packet delay budget, and maximum data burst volume, which can be used for the 5G QoS flows. Within this context, mapping TSN traffic flows to 5G QoS flows in an integrated TSN-5G network is of paramount importance, as the mapping can significantly impact the end-to-end QoS of the integrated network. In this paper, we present a novel and efficient algorithm for mapping different TSN traffic flows to 5G QoS flows. To the best of our knowledge, this is the first QoS-aware mapping algorithm based on the application constraints used to exchange flows between the TSN and 5G network domains. We evaluate the proposed mapping algorithm on synthetic scenarios with random sets of constraints on deadline, jitter, bandwidth, and packet loss rate. The evaluation results show that the proposed mapping algorithm can fulfill over 90% of the applications' constraints.
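A minimal sketch of constraint-driven mapping in this spirit: each TSN flow's constraints are checked against a table of 5G QoS profiles and a feasible profile is selected. The profile values below are placeholders, not 3GPP Release 16 figures, and the selection heuristic is an assumption rather than the QoS-MAN algorithm itself.

```python
# Hedged sketch of TSN-to-5G flow mapping driven by application
# constraints; profile numbers and the heuristic are illustrative.

from dataclasses import dataclass

@dataclass
class QosProfile:            # simplified 5G QoS flow characteristics
    five_qi: int
    priority: int
    delay_budget_ms: float
    loss_rate: float

@dataclass
class TsnFlow:               # application constraints of a TSN stream
    name: str
    deadline_ms: float
    max_loss: float

PROFILES = [QosProfile(82, 19, 10, 1e-4),
            QosProfile(83, 22, 20, 1e-3),
            QosProfile(9, 90, 300, 1e-6)]

def map_flow(flow: TsnFlow):
    # Pick the feasible profile with the loosest delay budget so tighter
    # profiles stay available for more demanding flows.
    feasible = [p for p in PROFILES
                if p.delay_budget_ms <= flow.deadline_ms
                and p.loss_rate <= flow.max_loss]
    return max(feasible, key=lambda p: p.delay_budget_ms, default=None)

print(map_flow(TsnFlow("control", deadline_ms=15, max_loss=1e-3)))
```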
{"title":"QoS-MAN: A Novel QoS Mapping Algorithm for TSN-5G Flows","authors":"Zenepe Satka, M. Ashjaei, H. Fotouhi, M. Daneshtalab, Mikael Sjödin, S. Mubeen","doi":"10.1109/RTCSA55878.2022.00030","DOIUrl":"https://doi.org/10.1109/RTCSA55878.2022.00030","url":null,"abstract":"Integrating wired Ethernet networks, such as Time-Sensitive Networks (TSN), to 5G cellular network requires a flow management technique to efficiently map TSN traffic to 5G Quality-of-Service (QoS) flows. The 3GPP Release 16 provides a set of predefined QoS characteristics, such as priority level, packet delay budget, and maximum data burst volume, which can be used for the 5G QoS flows. Within this context, mapping TSN traffic flows to 5G QoS flows in an integrated TSN-5G network is of paramount importance as the mapping can significantly impact on the end-to-end QoS in the integrated network. In this paper, we present a novel and efficient mapping algorithm to map different TSN traffic flows to 5G QoS flows. To the best of our knowledge, this is the first QoS-aware mapping algorithm based on the application constraints used to exchange flows between TSN and 5G network domains. We evaluate the proposed mapping algorithm on synthetic scenarios with random sets of constraints on deadline, jitter, bandwidth, and packet loss rate. The evaluation results show that the proposed mapping algorithm can fulfill over 90% of the applications’ constraints.","PeriodicalId":38446,"journal":{"name":"International Journal of Embedded and Real-Time Communication Systems (IJERTCS)","volume":"10 1","pages":"220-227"},"PeriodicalIF":0.7,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86916946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using Trace Data for Run-Time Optimization of Parallel Execution in Real-Time Multi-Core Systems
Pub Date: 2022-08-01 · DOI: 10.1109/RTCSA55878.2022.00031 · Pages: 228-234
Florian Schade, T. Sandmann, J. Becker, Henrik Theiling
In recent years, multi-core processors have become more and more common in embedded systems, offering higher performance than single-core processors and thereby enabling both computationally intensive embedded applications and the space-, weight-, and energy-efficient integration of software components. However, real-time applications, for which meeting certain deadlines must be guaranteed, do not profit as much from this transition. This is mainly due to interference between the processing cores of commercial off-the-shelf multi-core processors at shared resources, which hampers the predictability of task execution times. An effective way to avoid this is to run the critical tasks exclusively on one core while pausing execution on all other cores. This, however, reduces overall system efficiency, since parallel execution potential remains unused. In this work, we present a novel approach to managing shared and exclusive execution in such systems. By online observation of the critical task's progress via the on-chip trace infrastructure, we reduce the time of exclusive execution whenever it is safely possible and thereby increase overall system efficiency. Using trace information allows for early detection of parallelization potential and requires no modifications to the critical application, which helps avoid re-certification of the critical application. We present an implementation on a heterogeneous multi-processor system-on-chip using a state-of-the-art hypervisor for critical systems and evaluate its performance. Our results indicate that a performance gain of 37% to 41% over approaches focused on exclusive execution can be reached in low-interference situations.
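The mechanism can be pictured as a progress monitor driven by trace events: each checkpoint of the critical task is compared against its worst-case budget, and the remaining slack decides whether the other cores may run. The checkpoint budgets and safety margin below are assumed values for illustration, not the paper's implementation.

```python
# Illustrative progress monitor: trace events report checkpoints of the
# critical task; when observed progress leaves enough slack versus the
# worst-case budget, the non-critical cores are released.

WCRT_BUDGET_US = {1: 200, 2: 450, 3: 800}   # worst-case time to checkpoint
SAFETY_MARGIN_US = 150                       # bound on residual interference

class ExecutionManager:
    def __init__(self):
        self.parallel_enabled = False

    def on_trace_event(self, checkpoint: int, timestamp_us: float):
        slack = WCRT_BUDGET_US[checkpoint] - timestamp_us
        # Enable the other cores only while the remaining slack can absorb
        # worst-case interference caused by parallel execution.
        self.parallel_enabled = slack >= SAFETY_MARGIN_US
        print(f"cp{checkpoint}: slack={slack:.0f}us "
              f"parallel={'on' if self.parallel_enabled else 'off'}")

mgr = ExecutionManager()
mgr.on_trace_event(1, 40)    # far ahead of budget -> run in parallel
mgr.on_trace_event(2, 420)   # slack shrinks -> back to exclusive mode
```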
{"title":"Using Trace Data for Run-Time Optimization of Parallel Execution in Real-Time Multi-Core Systems","authors":"Florian Schade, T. Sandmann, J. Becker, Henrik Theiling","doi":"10.1109/RTCSA55878.2022.00031","DOIUrl":"https://doi.org/10.1109/RTCSA55878.2022.00031","url":null,"abstract":"In recent years, multi-core processors are becoming more and more common in embedded systems, offering higher performance than single-core processors and thereby enabling both computationally intensive embedded applications as well as the space-, weight-, and energy-efficient integration of software components. However, real-time applications, for which meeting certain deadlines must be guaranteed, do not profit as much from this transition. This is mainly due to interference between the processing cores of commercial-off-the-shelf multi-core processors at shared resources, hampering the predictability of task execution times. An effective approach to avoid this is running the critical tasks exclusively on one core while pausing execution on all other cores. This, however, reduces the overall system efficiency since parallel execution potential remains unused. In this work we present a novel approach to managing shared and exclusive execution in such systems. By on-line observation of the critical task progress via the on-chip trace infrastructure, we reduce the time of exclusive execution when it is safely possible and thereby increase the overall system efficiency. Using trace information allows for early detection of parallelization potential and does not require modifications to the critical application, which helps avoiding re-certification of the critical application. We present an implementation on a heterogeneous multi-processor system-on-chip using a state-of-the-art hypervisor for critical systems and evaluate its performance. Our results indicate that a performance gain of 37 % to 41 % over approaches focused on exclusive execution can be reached in low-interference situations.","PeriodicalId":38446,"journal":{"name":"International Journal of Embedded and Real-Time Communication Systems (IJERTCS)","volume":"53 9 1","pages":"228-234"},"PeriodicalIF":0.7,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82834694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design Methodology for Deep Out-of-Distribution Detectors in Real-Time Cyber-Physical Systems
Pub Date: 2022-07-29 · DOI: 10.1109/RTCSA55878.2022.00025 · Pages: 180-185
Michael Yuhas, Daniel Jun Xian Ng, A. Easwaran
When machine learning (ML) models are supplied with data outside their training distribution, they are more likely to make inaccurate predictions; in a cyber-physical system (CPS), this could lead to catastrophic system failure. To mitigate this risk, an out-of-distribution (OOD) detector can run in parallel with an ML model and flag inputs that could lead to undesirable outcomes. Although OOD detectors have been well studied in terms of accuracy, there has been less focus on deployment to resource-constrained CPSs. In this study, a design methodology is proposed to tune deep OOD detectors to meet the accuracy and response-time requirements of embedded applications. The methodology uses genetic algorithms to optimize the detector's preprocessing pipeline and selects a quantization method that balances robustness and response time. It also identifies several candidate task graphs under the Robot Operating System (ROS) for deployment of the selected design. The methodology is demonstrated on two variational-autoencoder-based OOD detectors from the literature on two embedded platforms. Insights into the trade-offs that occur during the design process are provided, and it is shown that this design methodology can lead to a drastic reduction in response time relative to an unoptimized OOD detector while maintaining comparable accuracy.
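A toy version of the genetic-algorithm step, with a synthetic accuracy/latency model standing in for profiling a real detector on the target, might look as follows; the tuned parameter (an input downscaling factor) and all constants are assumptions, not the paper's search space.

```python
# Toy genetic-algorithm loop over one preprocessing parameter, echoing
# the methodology at a much smaller scale; the accuracy and latency
# models are synthetic stand-ins for on-target profiling.
import random

def latency_ms(downscale):           # smaller inputs run faster (assumed)
    return 40.0 / downscale

def auroc(downscale):                # aggressive downscaling hurts accuracy
    return max(0.5, 0.95 - 0.04 * (downscale - 1))

BUDGET_MS = 20.0

def fitness(d):
    # Hard response-time requirement: configurations over budget score 0.
    return auroc(d) if latency_ms(d) <= BUDGET_MS else 0.0

pop = [random.randint(1, 8) for _ in range(10)]
for _ in range(30):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:4]
    # Elitism plus mutation: keep the best and perturb copies of them.
    pop = parents + [max(1, p + random.choice((-1, 1)))
                     for p in parents for _ in range(2)][:6]

best = max(pop, key=fitness)
print(best, latency_ms(best), auroc(best))   # converges near downscale=2
```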
{"title":"Design Methodology for Deep Out-of-Distribution Detectors in Real-Time Cyber-Physical Systems","authors":"Michael Yuhas, Daniel Jun Xian Ng, A. Easwaran","doi":"10.1109/RTCSA55878.2022.00025","DOIUrl":"https://doi.org/10.1109/RTCSA55878.2022.00025","url":null,"abstract":"When machine learning (ML) models are supplied with data outside their training distribution, they are more likely to make inaccurate predictions; in a cyber-physical system (CPS), this could lead to catastrophic system failure. To mitigate this risk, an out-of-distribution (OOD) detector can run in parallel with an ML model and flag inputs that could lead to undesirable outcomes. Although OOD detectors have been well studied in terms of accuracy, there has been less focus on deployment to resource constrained CPSs. In this study, a design methodology is proposed to tune deep OOD detectors to meet the accuracy and response time requirements of embedded applications. The methodology uses genetic algorithms to optimize the detector’s preprocessing pipeline and selects a quantization method that balances robustness and response time. It also identifies several candidate task graphs under the Robot Operating System (ROS) for deployment of the selected design. The methodology is demonstrated on two variational autoencoder based OOD detectors from the literature on two embedded platforms. Insights into the trade-offs that occur during the design process are provided, and it is shown that this design methodology can lead to a drastic reduction in response time in relation to an unoptimized OOD detector while maintaining comparable accuracy.","PeriodicalId":38446,"journal":{"name":"International Journal of Embedded and Real-Time Communication Systems (IJERTCS)","volume":"108 1","pages":"180-185"},"PeriodicalIF":0.7,"publicationDate":"2022-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81164615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GCFI: A High Accurate Compiler-based Fault Injection for Transient Hardware Faults
Pub Date: 2022-05-30 · DOI: 10.1109/rtest56034.2022.9850187 · Pages: 1-8
Hussien Al-haj Ahmad, Yasser Sedaghat
Recently, with increasing system complexity and advanced technology scaling, there is a pressing need for accurate fault injection (FI) techniques in the reliability evaluation of safety-critical systems against transient hardware faults such as soft errors. Since compiler-based FI techniques operate on high-level intermediate representation (IR) code, their accuracy is insufficient to assess the resilience of safety-critical systems against soft errors. Binary-level FI techniques can provide high accuracy, but error propagation analysis is challenging due to missing program structures. This paper proposes an accurate GCC-based FI technique called GCFI to assess the resilience of software against soft errors. GCFI operates in the back-end of the GCC compiler and instruments the very low-level IR code through a compiler extension. GCFI performs instrumentation only once, right after the completion of the optimization passes, ensuring a one-to-one correspondence of the IR code with the assembly code. The effectiveness of GCFI is evaluated by employing it in numerous FI experiments on different benchmarks compiled for the x86 and ARM architectures. We compare the results with high-level and binary-level software FI techniques to demonstrate the accuracy of GCFI. The results show that GCFI can assess the resilience of programs against soft errors with high accuracy, similar to binary-level FI.
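For intuition, a fault-injection campaign of this kind boils down to flipping one bit at one dynamic program point and comparing the outcome against a golden run. The Python harness below models only that general idea, not GCFI's GCC back-end instrumentation; the workload and fault space are invented.

```python
# Minimal software fault-injection harness illustrating single-bit
# transient faults (a behavioral model, not GCFI's compiler extension).
import random

def program(inject_at=None, bit=0):
    """Tiny computation; a fault flips one accumulator bit at one step."""
    acc = 0
    for step, value in enumerate([3, 5, 7, 11]):
        acc += value
        if step == inject_at:
            acc ^= 1 << bit          # transient single-bit upset
    return acc & 0x7                 # only the low bits reach the output

golden = program()                   # fault-free reference run
outcomes = {"masked": 0, "sdc": 0}   # sdc = silent data corruption
for _ in range(1000):
    result = program(inject_at=random.randrange(4), bit=random.randrange(8))
    outcomes["masked" if result == golden else "sdc"] += 1
print(golden, outcomes)
```

Flips in the discarded high bits are masked while low-bit flips corrupt the output, which is exactly the masked-versus-SDC classification such campaigns report.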
{"title":"GCFI: A High Accurate Compiler-based Fault Injection for Transient Hardware Faults","authors":"Hussien Al-haj Ahmad, Yasser Sedaghat","doi":"10.1109/rtest56034.2022.9850187","DOIUrl":"https://doi.org/10.1109/rtest56034.2022.9850187","url":null,"abstract":"Recently, with increasing system complexity and advanced technology scaling, there is a severe need for accurate fault injection (FI) techniques in the reliability evaluation of safety-critical systems against transient hardware faults, like soft errors. Since compiler-based FI techniques operate at a high intermediate representation (IR) code, their accuracy is insufficient to assess the resilience of safety-critical systems against soft errors. Although binary-level FI techniques can provide high accuracy, error propagation analysis is challenging due to missing program structures. This paper proposes an accurate GCC compiler-based FI technique called (GCFI) to assess the resilience of software against soft errors. GCFI operates at the back-end of the GCC compiler and instruments the very low-level IR code through a compiler extension. GCFI only performs instrumentation once right after the completion of optimization passes, assuring one-to-one correspondence of IR code with assembly code. The effectiveness of GCFI is evaluated by employing it to conduct many FI experiments on different benchmarks compiled for x86 and ARM architectures. We compare the results with high-level and binary-level software FI techniques to demonstrate the accuracy of GCFI. The results show that GCFI can assess the resilience of programs against soft errors with high accuracy similar to binary-level FI.","PeriodicalId":38446,"journal":{"name":"International Journal of Embedded and Real-Time Communication Systems (IJERTCS)","volume":"10 1","pages":"1-8"},"PeriodicalIF":0.7,"publicationDate":"2022-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73683549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PEARL: Power and Delay-Aware Learning-based Routing Policy for IoT Applications
Pub Date: 2022-05-30 · DOI: 10.1109/rtest56034.2022.9849862 · Pages: 1-8
Sahar Rezagholi Lalani, Bardia Safaei, A. H. Hosseini Monazzah, A. Ejlali
Routing between IoT nodes is an important challenge due to its impact on link/node metrics such as power consumption, reliability, and latency. Because of the low-power and lossy nature of IoT environments, the amount of consumed power and the ratio of delivered packets play an important role in the overall performance of the system. Meanwhile, in some IoT applications, e.g., remote health-care monitoring systems, other factors such as end-to-end (E2E) latency are crucial. The standardized routing mechanism for IoT networks (RPL) tries to optimize these parameters via routing policies specified in its Objective Function (OF). The original version of this protocol and many of its existing extensions are not well suited for dynamic IoT networks. In the past few years, reinforcement learning methods have been widely applied to dynamic systems in which agents have no prior knowledge of their surrounding environment. These techniques provide a predictive model based on the interaction between an agent and its environment to reach a near-optimal solution, for instance for packet transmission and delivery in unstable IoT networks. Accordingly, this paper introduces PEARL, a machine-learning-based routing policy for IoT networks that is both delay-aware and power-efficient. PEARL employs a novel routing policy based on the q-learning algorithm, which uses the one-hop E2E delay as its main path-selection metric to determine the rewards of the algorithm and to improve the E2E delay and the consumed power simultaneously in terms of the Power-Delay Product (PDP). According to an extensive set of experiments conducted in the Cooja simulator, in addition to improving reliability in terms of Packet Delivery Ratio (PDR), PEARL improves the E2E delay and PDP metrics of the network by up to 61% and 72%, respectively, compared to the state-of-the-art.
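A minimal sketch of the q-learning core, assuming invented learning constants and a noisy per-packet delay model: each candidate parent is an action, the reward is the negative measured one-hop delay, and the greedy choice converges to the low-delay parent. This illustrates the update rule only, not PEARL's full OF.

```python
# Sketch of q-learning parent selection in the spirit of PEARL; the
# delay model and learning constants are illustrative assumptions.
import random

ALPHA, GAMMA, EPSILON = 0.3, 0.5, 0.1
parents = {"A": 12.0, "B": 30.0, "C": 18.0}   # true mean one-hop delay (ms)
Q = {p: 0.0 for p in parents}

def measured_delay(p):
    return random.gauss(parents[p], 2.0)       # noisy per-packet delay

for _ in range(500):
    # Epsilon-greedy: mostly forward via the best-known parent,
    # occasionally explore another candidate.
    p = (random.choice(list(Q)) if random.random() < EPSILON
         else max(Q, key=Q.get))
    reward = -measured_delay(p)                # lower delay, higher reward
    Q[p] += ALPHA * (reward + GAMMA * max(Q.values()) - Q[p])

print(max(Q, key=Q.get))   # converges to the low-delay parent "A"
```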
{"title":"PEARL: Power and Delay-Aware Learning-based Routing Policy for IoT Applications","authors":"Sahar Rezagholi Lalani, Bardia Safaei, A. H. Hosseini Monazzah, A. Ejlali","doi":"10.1109/rtest56034.2022.9849862","DOIUrl":"https://doi.org/10.1109/rtest56034.2022.9849862","url":null,"abstract":"Routing between the IoT nodes has been considered an important challenge, due to its impact on different link/node metrics, including power consumption, reliability, and latency. Due to the low-power and lossy nature of IoT environments, the amount of consumed power, and the ratio of delivered packets plays an important role in the overall performance of the system. Meanwhile, in some IoT applications, e.g., remote health-care monitoring systems, other factors such as End-to-End (E2E) latency is significantly crucial. The standardized routing mechanism for IoT networks (RPL) tries to optimize these parameters via specified routing policies in its Objective Function (OF). The original version of this protocol, and many of its existing extensions are not well-suited for dynamic IoT networks. In the past few years, reinforcement learning methods have significantly involved in dynamic systems, where agents have no acknowledgment about their surrounding environment. These techniques provide a predictive model based on the interaction between an agent and its environment to reach a semi-optimized solution; For instance, the matter of packet transmission, and their delivery in unstable IoT networks. Accordingly, this paper introduces PEARL; a machine-learning based routing policy for IoT networks, which is both, delay-aware, and power-efficient. PEARL employs a novel routing policy based on the q-learning algorithm, which uses the one-hop E2E delay as its main path selection metric to determine the rewards of the algorithm, and to improve the E2E delay, and consumed power simultaneously in terms of Power-Delay-Product (PDP). According to an extensive set of experiments conducted in the Cooja simulator, in addition to improving reliability in the network in terms of Packet Delivery Ratio (PDR), PEARL has improved the amount of E2E delay, and PDP metrics in the network by up to 61% and 72%, against the state-of-the-art, respectively.","PeriodicalId":38446,"journal":{"name":"International Journal of Embedded and Real-Time Communication Systems (IJERTCS)","volume":"16 1","pages":"1-8"},"PeriodicalIF":0.7,"publicationDate":"2022-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89624265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RTEST 2022 Article Index
Pub Date: 2022-05-30 · DOI: 10.1109/rtest56034.2022.9849980
{"title":"RTEST 2022 Article Index","authors":"","doi":"10.1109/rtest56034.2022.9849980","DOIUrl":"https://doi.org/10.1109/rtest56034.2022.9849980","url":null,"abstract":"","PeriodicalId":38446,"journal":{"name":"International Journal of Embedded and Real-Time Communication Systems (IJERTCS)","volume":"4 1","pages":""},"PeriodicalIF":0.7,"publicationDate":"2022-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90786762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On system models and schedulability analysis for basic single-rate cyclic executives
Pub Date: 2022-05-30 · DOI: 10.1109/rtest56034.2022.9850123 · Pages: 1-8
R. J. Bril
Despite their industrial relevance, contemporary textbooks pay little attention to cyclic executives, if any. Analysis techniques for these executives are therefore needed. In this paper, we consider the impact of the real-time system model on the schedulability analysis of basic single-rate cyclic executives. Next to the basic real-time scheduling model $\mathcal{M}^{\text{B}}$ presented in [1], two other models are considered in this paper: a first refined model $\mathcal{M}^{\text{R}}$ that takes the notion of observable event [2] into account, and a second model $\mathcal{M}^{\text{P}}$ that in addition considers the single-path code paradigm [3]. Whereas the exact schedulability analysis for $\mathcal{M}^{\text{B}}$ turns out to be pessimistic when applied to the refined model $\mathcal{M}^{\text{R}}$, the analysis turns out to be optimistic for the second model $\mathcal{M}^{\text{P}}$.
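For a basic single-rate cyclic executive, in which every task runs exactly once per frame, the schedulability test reduces to checking that the summed worst-case execution times fit into the frame; a refinement can be approximated by charging an extra overhead term. The sketch below is a minimal illustration under those assumptions; it does not reproduce the paper's analysis of $\mathcal{M}^{\text{B}}$, $\mathcal{M}^{\text{R}}$, or $\mathcal{M}^{\text{P}}$, and all numbers are invented.

```python
# Hedged sketch of a basic single-rate cyclic executive test: every task
# runs once per frame, so the frame is schedulable when the summed WCETs
# fit in the period; overhead_ms loosely models refinements such as the
# delay between the timer interrupt and the observable event handling.

def schedulable(wcets_ms, frame_ms, overhead_ms=0.0):
    return sum(wcets_ms) + overhead_ms <= frame_ms

tasks = [1.5, 2.0, 0.75]                                  # WCETs (ms)
print(schedulable(tasks, frame_ms=5.0))                   # basic: True
print(schedulable(tasks, frame_ms=5.0, overhead_ms=1.0))  # refined: False
```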
{"title":"On system models and schedulability analysis for basic single-rate cyclic executives","authors":"R. J. Bril","doi":"10.1109/rtest56034.2022.9850123","DOIUrl":"https://doi.org/10.1109/rtest56034.2022.9850123","url":null,"abstract":"Despite their industrial relevance, contemporary textbooks do not pay much attention to cyclic executives, if at all. Analysis techniques for these executives are therefore needed.In this paper, we consider the impact of a real-time system model on the schedulability analysis of basic single-rate cyclic executives. Next to the basic real-time scheduling model ${mathcal{M}^{text{B}}}$, presented in [1], two other models are considered in this paper, a first refined model ${mathcal{M}^{text{R}}}$ that takes the notion of observable event [2] into account and a second model ${mathcal{M}^{text{P}}}$ that in addition also considers the single-path code paradigm [3]. Whereas the exact schedulability analysis for ${mathcal{M}^{text{B}}}$ turns out to be pessimistic when applied for the refined model ${mathcal{M}^{text{R}}}$, the analysis turns out to be optimistic for the second model ${mathcal{M}^{text{P}}}$.","PeriodicalId":38446,"journal":{"name":"International Journal of Embedded and Real-Time Communication Systems (IJERTCS)","volume":"24 1","pages":"1-8"},"PeriodicalIF":0.7,"publicationDate":"2022-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79703450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}