A Generic Coq Proof of Typical Worst-Case Analysis
Pascal Fradet, Maxime Lesourd, J. Monin, Sophie Quinton
This paper presents a generic proof of Typical Worst-Case Analysis (TWCA), an analysis technique for weakly-hard real-time uniprocessor systems. TWCA was originally introduced for systems with fixed-priority preemptive (FPP) schedulers and has since been extended to fixed-priority non-preemptive (FPNP) and earliest-deadline-first (EDF) schedulers. Our generic analysis is based on an abstract model that characterizes the exact properties needed to make TWCA applicable to any system model. Our results are formalized and checked using the Coq proof assistant along with the Prosa schedulability analysis library. Our experience with formalizing real-time systems analyses shows that this is not only a way to increase confidence in the claimed results: the discipline required to obtain machine-checked proofs helps in understanding the exact assumptions required by a given analysis, its key intermediate steps, and how the analysis can be generalized.
{"title":"A Generic Coq Proof of Typical Worst-Case Analysis","authors":"Pascal Fradet, Maxime Lesourd, J. Monin, Sophie Quinton","doi":"10.1109/RTSS.2018.00039","DOIUrl":"https://doi.org/10.1109/RTSS.2018.00039","url":null,"abstract":"This paper presents a generic proof of Typical Worst-Case Analysis (TWCA), an analysis technique for weakly-hard real-time uniprocessor systems. TWCA was originally introduced for systems with fixed priority preemptive (FPP) schedulers and has since been extended to fixed-priority nonpreemptive (FPNP) and earliest-deadline-first (EDF) schedulers. Our generic analysis is based on an abstract model that characterizes the exact properties needed to make TWCA applicable to any system model. Our results are formalized and checked using the Coq proof assistant along with the Prosa schedulability analysis library. Our experience with formalizing real-time systems analyses shows that this is not only a way to increase confidence in our claimed results: The discipline required to obtain machine checked proofs helps understanding the exact assumptions required by a given analysis, its key intermediate steps and how this analysis can be generalized.","PeriodicalId":294784,"journal":{"name":"2018 IEEE Real-Time Systems Symposium (RTSS)","volume":"164 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127355236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Work-in-Progress: Extending Buffer-Aware Worst-Case Timing Analysis of Wormhole NoCs
Frederic Giroudot, A. Mifdaoui
Worst-case timing analysis of Networks-on-Chip (NoCs) is a crucial aspect of designing safe real-time systems based on manycore architectures. In this paper, we present potential extensions of our previously published buffer-aware worst-case timing analysis approach to cope with bursty traffic such as real-time audio and video streams. A first promising lead is to improve the algorithm that analyzes backpressure patterns so that it captures the consecutive-packet queueing effect while keeping the information about the dependencies between flows. Furthermore, the improved algorithm may also decrease the inherent complexity of computing the indirect blocking latency due to backpressure.
{"title":"Work-in-Progress: Extending Buffer-Aware Worst-Case Timing Analysis of Wormhole NoCs","authors":"Frederic Giroudot, A. Mifdaoui","doi":"10.1109/RTSS.2018.00032","DOIUrl":"https://doi.org/10.1109/RTSS.2018.00032","url":null,"abstract":"Worst-case timing analysis of Networks-on-Chip (NoCs) is a crucial aspect to design safe real-time systems based on manycore architectures. In this paper, we present some potential extensions of our previously-published buffer-aware worst-case timing analysis approach to cope with bursty traffic such as real-time audio and video streams. A first promising lead is to improve the algorithm analyzing backpressure patterns to capture consecutive-packet queueing effect while keeping the information about the dependencies between flows. Furthermore, the improved algorithm may also decrease the inherent complexity of computing the indirect blocking latency due to backpressure.","PeriodicalId":294784,"journal":{"name":"2018 IEEE Real-Time Systems Symposium (RTSS)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131772354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NoCo: ILP-Based Worst-Case Contention Estimation for Mesh Real-Time Manycores
Jordi Cardona, Carles Hernández, E. Mezzetti, J. Abella, F. Cazorla
Manycores can provide the computational power demanded by functionally advanced critical applications in domains such as automotive and avionics. In manycores, a network-on-chip (NoC) provides access to shared caches and memories and hence concentrates most of the contention that tasks suffer, with effects on the worst-case contention delay (WCD) of packets and on tasks' WCET. While several proposals minimize the impact of individual NoC parameters on WCD, e.g. mapping and routing, there are strong dependences among these parameters. Hence, finding optimal NoC configurations requires optimizing all parameters simultaneously, which is a multidimensional optimization problem. In this paper we propose NoCo, a novel approach that combines ILP and stochastic optimization to find NoC configurations in terms of packet routing, application mapping, and arbitration weight allocation. Our results show that NoCo improves on techniques that optimize only a subset of NoC parameters.
{"title":"NoCo: ILP-Based Worst-Case Contention Estimation for Mesh Real-Time Manycores","authors":"Jordi Cardona, Carles Hernández, E. Mezzetti, J. Abella, F. Cazorla","doi":"10.1109/RTSS.2018.00043","DOIUrl":"https://doi.org/10.1109/RTSS.2018.00043","url":null,"abstract":"Manycores are capable of providing the computational demands required by functionally-advanced critical applications in domains such as automotive and avionics. In manycores a network-on-chip (NoC) provides access to shared caches and memories and hence concentrates most of the contention that tasks suffer, with effects on the worst-case contention delay (WCD) of packets and tasks' WCET. While several proposals minimize the impact of individual NoC parameters on WCD, e.g. mapping and routing, there are strong dependences among these NoC parameters. Hence, finding the optimal NoC configurations requires optimizing all parameters simultaneously, which represents a multidimensional optimization problem. In this paper we propose NoCo, a novel approach that combines ILP and stochastic optimization to find NoC configurations in terms of packet routing, application mapping, and arbitration weight allocation. Our results show that NoCo improves other techniques that optimize a subset of NoC parameters.","PeriodicalId":294784,"journal":{"name":"2018 IEEE Real-Time Systems Symposium (RTSS)","volume":"153 10-12","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114047234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Uniprocessor Mixed-Criticality Scheduling with Graceful Degradation by Completion Rate
Zhishan Guo, Kecheng Yang, Sudharsan Vaidhun, Samsil Arefin, Sajal K. Das, Haoyi Xiong
The scheduling of mixed-criticality (MC) systems with graceful degradation is considered, where LO-criticality tasks are guaranteed some service in HI mode in the form of minimum cumulative completion rates. First, we present an easy-to-implement admission-control procedure to determine which LO-criticality jobs to complete in HI mode. Then, we propose a demand-bound-function-based MC schedulability test that runs in pseudo-polynomial time for such systems under EDF-VD scheduling, wherein two virtual-deadline-setting heuristics are considered. Furthermore, we discuss a mechanism for the system to switch back from HI to LO mode and quantify the maximum duration such a recovery process would take. Finally, we show the effectiveness of our proposed method through an experimental evaluation against state-of-the-art MC schedulers.
{"title":"Uniprocessor Mixed-Criticality Scheduling with Graceful Degradation by Completion Rate","authors":"Zhishan Guo, Kecheng Yang, Sudharsan Vaidhun, Samsil Arefin, Sajal K. Das, Haoyi Xiong","doi":"10.1109/RTSS.2018.00052","DOIUrl":"https://doi.org/10.1109/RTSS.2018.00052","url":null,"abstract":"The scheduling of mixed-criticality (MC) systems with graceful degradation is considered, where LO-criticality tasks are guaranteed some service in HI mode in the form of minimum cumulative completion rates. First, we present an easy to implement admission-control procedure to determine which LO-criticality jobs to complete in HI mode. Then, we propose a demand-bound-function-based MC schedulability test that runs in pseudo-polynomial time for such systems under EDF-VD scheduling, wherein two virtual deadline setting heuristics are considered. Furthermore, we discuss a mechanism for the system to switch back from HI to LO mode and quantify the maximum time duration such recovery process would take. Finally, we show the effectiveness of our proposed method by experimental evaluation in comparison to state-of-the-art MC schedulers.","PeriodicalId":294784,"journal":{"name":"2018 IEEE Real-Time Systems Symposium (RTSS)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131028324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semi-Extended Tasks: Efficient Stack Sharing Among Blocking Threads
Christian J. Dietrich, D. Lohmann
Memory is an expensive and, therefore, limited resource in deeply embedded real-time systems. Thread stacks substantially contribute to the RAM requirements. To reduce a system's worst-case stack consumption (WCSC), it is state of the art to exploit thread-level preemption constraints to let multiple threads share the same stack. However, deriving a tight yet correct bound for the shared stack is a difficult undertaking, and stack sharing is currently restricted to run-to-completion threads, which are preemptable but cannot block (i.e., passively wait for an event) at run time. With semi-extended tasks (SETs), we propose a solution for efficient stack sharing among blocking and non-blocking threads at the system level. For this, we refine the stack-sharing granularity from the thread to the function level. We provide an efficient intra-thread stack-switch mechanism and an ILP-based WCSC analysis that considers fine-grained preemption constraints and possible function-level switching points from the private to the shared stack. A genetic algorithm then selects switching points that reduce the overall WCSC. Compared to systems that run only non-blocking threads on the shared stack, semi-extended tasks decrease the WCSC in our benchmarks by 7 percent on average and by up to 52 percent for some systems.
{"title":"Semi-Extended Tasks: Efficient Stack Sharing Among Blocking Threads","authors":"Christian J. Dietrich, D. Lohmann","doi":"10.1109/RTSS.2018.00049","DOIUrl":"https://doi.org/10.1109/RTSS.2018.00049","url":null,"abstract":"Memory is an expensive and, therefore, limited resource in deeply embedded real-time systems. Thread stacks substantially contribute to the RAM requirements. To reduce the system's worst-case stack consumption (WCSC), it is state of the art to exploit thread-level preemption constraints to let multiple threads share the same stack. However, deriving a tight, yet correct bound for the shared stack is a difficult undertaking and stack sharing is currently restricted to run-to-completion threads, which are preemptable, but cannot block (i.e., passively wait for an event) at run time. With semi-extended tasks (SETs), we propose a solution for efficient stack sharing among blocking and non-blocking threads on the system level. For this, we refine the stack-sharing granularity from the thread to function level. We provide an efficient intra-thread stack-switch mechanism and an ILP-based WCSC analysis that considers fine-grained preemption constraints and possible function-level switching points from the private to the shared stack. A genetic algorithm then selects switching points that lead to the reduction of the overall WCSC. Compared to systems that run only non-blocking threads on the shared stack, semi-extended tasks decrease the WCSC in our benchmarks on average by 7 percent and up to 52 percent for some systems.","PeriodicalId":294784,"journal":{"name":"2018 IEEE Real-Time Systems Symposium (RTSS)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131375963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-Time Computing and the Evolution of Embedded System Designs
Tei-Wei Kuo, Jian-Jia Chen, Yuan-Hao Chang, P. Hsiu
Real-time computing provides insightful ways to explore the optimization of resource usage, especially from the time point of view. Nevertheless, real-time task scheduling is known for its high complexity when there are non-preemptive shared resources and multiple processors. When more and more practical factors in system design are considered, such as energy consumption and memory allocation, even some sub-problems of real-time task scheduling become intractable. Although people often criticize the various artificial assumptions in real-time task scheduling, they have to admit that ideas in real-time computing and their extensions, such as trade-offs among cost, performance, energy, and even quality of service, can be applied to multi-dimensional optimization in system design. In this direction, we have witnessed the rapid development of the embedded system industry and joined the effort in system design, especially for mobile devices and non-volatile memory systems. Resource management on mobile devices, with a special emphasis on user experience, should consider not only the response time but also the visual perception of users. Non-volatile memory has also blurred the boundary between memory and storage. It enables a unified treatment of main memory and storage as well as in-memory computing. It shows ways to break the boundaries between hardware and software layers and to better integrate computing and memory/storage units. The advances in mobile systems and memory innovations inspire the evolution of embedded system designs and have brought us insights into how systems should be restructured and how computing should be done. They might also feed back into real-time computing and even shape its future direction in various innovative ways.
{"title":"Real-Time Computing and the Evolution of Embedded System Designs","authors":"Tei-Wei Kuo, Jian-Jia Chen, Yuan-Hao Chang, P. Hsiu","doi":"10.1109/RTSS.2018.00011","DOIUrl":"https://doi.org/10.1109/RTSS.2018.00011","url":null,"abstract":"Real-time computing provides insightful ways to explore the optimization in resource usages, especially from the time point of view. Nevertheless, real-time task scheduling is recognized by its high complexity when there are non-preemptive shared resources and multiple processors. When more and more practical factors in system designs are considered, such as energy consumption and memory allocation, even some sub-problems in real-time task scheduling become intractable. Although people often criticize various artificial assumptions in real-time task scheduling, they have to admit that ideas in real-time computing and their extensions, such as tradeoff in cost, performance, energy, and even the quality of service, can be applied to multi-dimensional optimization in system designs. In this direction, we witness the rapid development of the embedded system industry and join the task force in system designs, especially mobile devices and non-volatile memory systems. Resource management on mobile devices, with a special emphasis on user experience, should not only consider the response time but also the visual perception of users. Non-volatile memory has also blurred the boundary between the memory and the storage. It enables certain unified considerations of the main memory and storage and also in-memory computing. It shows the ways to break the boundaries between hardware and software layers and have better integration of computing and memory/storage units. The advances in mobile systems and memory innovations inspire the evolution of embedded system designs and have also brought us insights to solutions regarding how systems should be restructured and how computing should be done. They might also provide their feedback to real-time computing and even shape the future direction of real-time computing in various innovative ways.","PeriodicalId":294784,"journal":{"name":"2018 IEEE Real-Time Systems Symposium (RTSS)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130162540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Work-in-Progress: Precise Scheduling of Mixed-Criticality Tasks by Varying Processor Speed
S. Sruti, Ashikahmed Bhuiyan, Zhishan Guo
The traditional mixed-criticality (MC) model does not allow less critical tasks to execute in the event of an error or exception. Recently, the imprecise MC (IMC) model has been proposed, in which less critical tasks still receive some amount of (degraded) service even under exceptional events, e.g., when a task overruns its execution demand. In this work, we present our ongoing effort to extend the IMC model to the precise scheduling of tasks and to integrate it with a dynamic voltage and frequency scaling (DVFS) scheme to enable energy minimization. Precise scheduling of MC systems is highly challenging because it requires simultaneously guaranteeing the timing correctness of all tasks under both pessimistic and less pessimistic assumptions. We propose a utilization-based schedulability test and sufficient schedulability conditions for such systems under the earliest-deadline-first with virtual deadlines (EDF-VD) scheduling policy. For this unified model, we present a quantitative study in the form of a speedup bound and an approximation ratio. Finally, both theoretical and experimental analyses will be conducted to prove the correctness of our algorithm and to demonstrate its effectiveness.
{"title":"Work-in-Progress: Precise Scheduling of Mixed-Criticality Tasks by Varying Processor Speed","authors":"S. Sruti, Ashikahmed Bhuiyan, Zhishan Guo","doi":"10.1145/3356401.3356410","DOIUrl":"https://doi.org/10.1145/3356401.3356410","url":null,"abstract":"The traditional mixed-criticality (MC) model does not allow less critical tasks to execute during an event of the error and exception. Recently, the imprecise MC (IMC) model has been proposed where, even for exceptional events, less critical tasks also receive some amount of (degraded) service, e.g., a task overruns its execution demand. In this work, we present our ongoing effort to extend the IMC model to the precise scheduling of tasks and integrate with the dynamic voltage and frequency scaling (DVFS) scheme to enable energy minimization. Precise scheduling of MC systems is highly challenging because of its requirement to simultaneously guarantee the timing correctness of all tasks under both pessimistic and less pessimistic assumptions. We propose an utilization-based schedulability test and sufficient schedulability conditions for such systems under earliest deadline first with virtual deadline (EDF-VD) scheduling policy. For this unified model, we present a quantitative study in the forms of speedup bound and approximation ratio. Finally, both theoretical and experimental analysis will be conducted to prove the correctness of our algorithm and to demonstrate its effectiveness.","PeriodicalId":294784,"journal":{"name":"2018 IEEE Real-Time Systems Symposium (RTSS)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127130418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Work-in-Progress: Making Machine Learning Real-Time Predictable
Hang Xu, F. Mueller
Machine learning (ML) on edge computing devices is becoming popular in industry as a means to make control systems more intelligent and autonomous. The new trend is to utilize embedded edge devices, as they boast higher computational power and larger memories than before, to perform ML tasks that had previously been limited to cloud-hosted deployments. In this work, we assess real-time predictability and consider data privacy concerns by comparing traditional cloud services with edge-based ones for certain data analytics tasks. We identify the subset of ML problems appropriate for edge devices by investigating whether they result in real-time predictable services for a set of widely used ML libraries. We specifically enhance the Caffe library to make it more suitable for real-time predictability. We then deploy ML models with high accuracy scores on an embedded system and expose it to industry sensor data from the field to demonstrate its efficacy and suitability for real-time processing.
{"title":"Work-in-Progress: Making Machine Learning Real-Time Predictable","authors":"Hang Xu, F. Mueller","doi":"10.1109/RTSS.2018.00029","DOIUrl":"https://doi.org/10.1109/RTSS.2018.00029","url":null,"abstract":"Machine learning (ML) on edge computing devices is becoming popular in the industry as a means to make control systems more intelligent and autonomous. The new trend is to utilize embedded edge devices, as they boast higher computational power and larger memories than before, to perform ML tasks that had previously been limited to cloud-hosted deployments. In this work, we assess the real-time predictability and consider data privacy concerns by comparing traditional cloud services with edge-based ones for certain data analytics tasks. We identify the subset of ML problems appropriate for edge devices by investigating if they result in real-time predictable services for a set of widely used ML libraries. We specifically enhance the Caffe library to make it more suitable for real-time predictability. We then deploy ML models with high accuracy scores on an embedded system, exposing it to industry sensor data from the field, to demonstrates its efficacy and suitability for real-time processing.","PeriodicalId":294784,"journal":{"name":"2018 IEEE Real-Time Systems Symposium (RTSS)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115859954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Outstanding Paper Awards
J. Erickson, James H. Anderson, Gurulingesh Raravi, B. Andersson, K. Bletsas, Vincent Nélis, Sanjoy Baruah
{"title":"Outstanding Paper Awards","authors":"J. Erickson, James H. Anderson, Gurulingesh Raravi, B. Andersson, K. Bletsas, Vincent Nélis, Sanjoy Baruah","doi":"10.1109/rtss.2018.00006","DOIUrl":"https://doi.org/10.1109/rtss.2018.00006","url":null,"abstract":"","PeriodicalId":294784,"journal":{"name":"2018 IEEE Real-Time Systems Symposium (RTSS)","volume":"115-116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123179472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploiting Locality for the Performance Analysis of Shared Memory Systems in MPSoCs
Selma Saidi, A. Syring
The integration trend and the increasing demand for computing power are driving the adoption of common embedded consumer devices such as MPSoC platforms in the safety-critical domain. MPSoCs often feature a shared, tightly-coupled memory system in which careful management of data storage and transfers is a key enabler for performance. However, providing real-time guarantees for these platforms is extremely challenging, as they rely on exploiting data locality to improve average latencies in shared-memory architectures. This effect is often disregarded by existing real-time analysis approaches, which furthermore often focus solely on a single component of the memory system. In this paper, we propose a framework for the timing analysis of shared memory systems composed of on-chip scratchpad memories, off-chip DRAMs, and DMA engines. The analysis captures the effect of the locality of accesses, their interleaving, and their granularity on system performance.
{"title":"Exploiting Locality for the Performance Analysis of Shared Memory Systems in MPSoCs","authors":"Selma Saidi, A. Syring","doi":"10.1109/RTSS.2018.00050","DOIUrl":"https://doi.org/10.1109/RTSS.2018.00050","url":null,"abstract":"The integration trend and increased required computing power is driving the advent of common embedded consumer devices like MPSoCs platforms in the safety critical domain. MPSoCs often feature a shared tightly-coupled memory system where a careful management of data storage and transfers is a key enabler for performance. However, providing real-time guarantees for these platforms is extremely challenging as they rely on exploiting data locality to improve average latencies in shared-memory architectures. This effect is often disregarded by existing real-time analysis approaches which furthermore often focus solely on a single component of the memory system. In this paper, we propose a framework for the timing analysis of shared memory systems composed of on-chip scratchpad memories, off-chip DRAMs and DMA engines. The analysis captures the effect on the performance of the system of the locality of accesses, their interleaving and granularity.","PeriodicalId":294784,"journal":{"name":"2018 IEEE Real-Time Systems Symposium (RTSS)","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122653231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}