Pub Date: 2013-06-19 | DOI: 10.1109/ISORC.2013.6913229
Hassen Karray, M. Paulitsch, Bernd Koppenhoefer, D. Geiger
The progress of silicon integration has led to the ability to integrate complex systems on a single die. Integrating different application software components on a distributed system-on-chip can be demanding unless one follows a structured system integration approach with architectural support in hardware. The ACROSS Multi-Processor System-on-Chip (MPSoC) platform provides architectural means for integration, such as well-defined communication interfaces, deterministic communication schedules, and fault-containment and error-confinement support. We present the non-functional requirements of a degraded vision landing system for a helicopter and show how the ACROSS MPSoC research platform eases the integration of software and system components. We also discuss more general multicore-specific software-related requirements and how the ACROSS MPSoC platform meets them.
Title: Design and implementation of a degraded vision landing aid application on a multicore processor architecture for safety-critical application. Published in: 16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013).
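The deterministic communication schedules mentioned above are typically realized as a static table of time slots fixed at design time. Below is a minimal C sketch of such a schedule, assuming an invented slot length, core numbering, and message IDs; it is a generic illustration of time-triggered communication, not the actual ACROSS interface.

```c
/* Generic sketch of a static time-triggered communication schedule
 * (illustration only, not the ACROSS MPSoC interface). */
#include <stdio.h>

#define SLOT_LEN_US 100   /* assumed slot length in microseconds */
#define NUM_SLOTS   4     /* slots per communication round */

typedef struct {
    int sender_core;      /* core allowed to send in this slot */
    int message_id;       /* message transmitted in this slot  */
} Slot;

/* The schedule is fixed at design time, so senders never collide and
 * receivers know exactly when each message arrives. */
static const Slot schedule[NUM_SLOTS] = {
    { 0, 10 },  /* e.g., sensor data    */
    { 1, 20 },  /* e.g., image features */
    { 2, 30 },  /* e.g., fusion result  */
    { 3, 40 },  /* e.g., display frame  */
};

int main(void) {
    /* Simulate one communication round. */
    for (int s = 0; s < NUM_SLOTS; s++) {
        long start_us = (long)s * SLOT_LEN_US;
        printf("t=%4ld us: core %d sends message %d\n",
               start_us, schedule[s].sender_core, schedule[s].message_id);
    }
    return 0;
}
```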
Pub Date: 2013-06-19 | DOI: 10.1109/ISORC.2013.6913200
Takuya Ishikawa, Takuya Azumi, Hiroshi Oyama, H. Takada
Software partitioning has been used to develop safety-critical systems in recent years, and software component technologies that support partitioning have been developed. This paper describes a new component technology for embedded software that requires memory protection, one of the key features of partitioning. HR-TECS is a component technology based on a real-time operating system with a static memory layout. Developers can easily allocate components to partitions in order to protect memory areas. HR-TECS also supports inter-partition communication, so developers can implement components without having to handle it explicitly. The evaluation results demonstrate the effectiveness of HR-TECS.
Title: HR-TECS: Component technology for embedded systems with memory protection. Published in: 16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013).
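To make the partitioning concept concrete, here is a small C sketch of components statically allocated to memory-protected partitions, with a check that rejects direct cross-partition writes. The structures, names, and address ranges are hypothetical; HR-TECS derives such protection from its component descriptions rather than exposing an API like this.

```c
/* Illustrative model of components allocated to memory-protected
 * partitions (hypothetical structures, not the HR-TECS API). */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    const char *name;
    uintptr_t   mem_start;   /* start of the partition's memory area */
    size_t      mem_size;    /* size of the protected area */
} Partition;

typedef struct {
    const char      *name;
    const Partition *home;   /* partition the component is allocated to */
} Component;

/* A write is allowed only if the address lies inside the writer's own
 * partition; anything else must go through the framework's
 * inter-partition communication. */
static int write_allowed(const Component *c, uintptr_t addr) {
    return addr >= c->home->mem_start &&
           addr <  c->home->mem_start + c->home->mem_size;
}

int main(void) {
    Partition p1 = { "P1", 0x1000, 0x1000 };
    Partition p2 = { "P2", 0x2000, 0x1000 };   /* another partition's area */
    Component sensor = { "sensor", &p1 };
    (void)p2;
    printf("write to 0x1800: %s\n", write_allowed(&sensor, 0x1800) ? "ok" : "denied");
    printf("write to 0x2800: %s\n", write_allowed(&sensor, 0x2800) ? "ok" : "denied");
    return 0;
}
```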
Pub Date: 2013-06-19 | DOI: 10.1109/ISORC.2013.6913225
Sahar Abbaspour, F. Brandner, Martin Schoeberl
Real-time systems need time-predictable architectures to support static worst-case execution time (WCET) analysis. One architectural feature, the data cache, is hard to analyze when different data areas (e.g., heap allocated and stack allocated data) share the same cache. This sharing leads to less precise results of the cache analysis part of the WCET analysis. Splitting the data cache for different data areas enables composable data cache analysis. The WCET analysis tool can analyze the accesses to these different data areas independently. In this paper we present the design and implementation of a cache for stack allocated data. Our port of the LLVM C++ compiler supports the management of the stack cache. The combination of stack cache instructions and the hardware implementation of the stack cache is a further step towards time-predictable architectures.
Title: A time-predictable stack cache. Published in: 16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013).
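A stack cache stays analyzable because the compiler brackets calls with explicit cache management: reserve space on function entry, ensure the caller's frame is present after a callee returns, and free the frame on exit. The simulation below assumes a fixed cache size and reserve/ensure/free-style operations (the actual instruction names are not given in the abstract); it shows how spill and fill traffic arises only at these well-defined points.

```c
/* Simulation of a small stack cache with reserve/ensure/free
 * operations (names and sizes are assumptions for illustration). */
#include <stdio.h>

#define CACHE_SIZE 16             /* stack cache capacity in words */

static int reserved = 0;          /* words currently held in the cache */
static int spills = 0, fills = 0; /* traffic to main memory */

/* Reserve k words for the current frame; spill older frames if needed. */
static void sc_reserve(int k) {
    reserved += k;
    if (reserved > CACHE_SIZE) {
        spills += reserved - CACHE_SIZE;   /* spill excess to memory */
        reserved = CACHE_SIZE;
    }
}

/* After a call returns, make sure k words of the caller's frame are
 * back in the cache; fill from memory if they were spilled. */
static void sc_ensure(int k) {
    if (reserved < k) {
        fills += k - reserved;
        reserved = k;
    }
}

/* Free the k words of the finished frame. */
static void sc_free(int k) {
    reserved -= k;
    if (reserved < 0) reserved = 0;
}

int main(void) {
    sc_reserve(10);   /* caller frame */
    sc_reserve(12);   /* callee frame: forces a spill of 6 words */
    sc_free(12);      /* callee returns */
    sc_ensure(10);    /* caller frame must be filled back */
    sc_free(10);
    printf("spills=%d fills=%d\n", spills, fills);
    return 0;
}
```

With the calls in main, the oversized callee frame forces 6 words to be spilled and later filled back, and a WCET tool can bound exactly these points.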
Pub Date: 2013-06-19 | DOI: 10.1109/ISORC.2013.6913208
Benjamin Venelle, Jérémy Briffaut, Laurent Clevy, C. Toinard
Since the 1970s, and despite its operational complexity, Mandatory Access Control (MAC) has proven reliable for enforcing integrity and confidentiality. Surprisingly, Java, despite its popularity, has not yet adopted this protection principle. The current security features within the JVM (JAAS and the bytecode verifier) can be bypassed, as demonstrated by the summer 2012 attacks. Thus, a MAC model for Java and a cross-platform reference monitor are required for the Java Virtual Machine. Security Enhanced Java (SEJava) dynamically controls the information flows between all Java objects, requiring neither bytecode nor source-code instrumentation. The main idea is to treat Java types as security contexts, and method calls and field accesses as permissions. SEJava allows fine-grained MAC rules between Java objects and thus controls all information flows within the JVM. Our implementation is faster than competing approaches while allowing both finer and more advanced controls. A use case shows that it protects efficiently against Common Vulnerabilities and Exposures.
Title: Security Enhanced Java: Mandatory Access Control for the Java Virtual Machine. Published in: 16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013).
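The core idea, treating Java types as security contexts and method calls or field accesses as permissions, can be pictured as a rule lookup with default deny. The sketch below uses hypothetical rule entries and is written in C for consistency with the other sketches; it is a conceptual illustration, not the SEJava implementation inside the JVM.

```c
/* Conceptual MAC rules keyed on (caller type, target type, member);
 * hypothetical policy entries, not the SEJava implementation. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *caller_ctx;  /* security context = Java type of the caller */
    const char *target_ctx;  /* security context = Java type of the target */
    const char *member;      /* method or field being accessed */
} Rule;

/* Policy: only explicitly listed accesses are permitted. */
static const Rule policy[] = {
    { "com.app.Parser", "java.io.FileReader", "read"  },
    { "com.app.Logger", "java.io.FileWriter", "write" },
};

static int allowed(const char *caller, const char *target, const char *member) {
    for (size_t i = 0; i < sizeof policy / sizeof policy[0]; i++)
        if (!strcmp(policy[i].caller_ctx, caller) &&
            !strcmp(policy[i].target_ctx, target) &&
            !strcmp(policy[i].member, member))
            return 1;
    return 0;   /* default deny */
}

int main(void) {
    printf("%d\n", allowed("com.app.Parser", "java.io.FileReader", "read"));
    printf("%d\n", allowed("com.app.Parser", "java.lang.Runtime", "exec"));
    return 0;
}
```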
Pub Date: 2013-06-19 | DOI: 10.1109/ISORC.2013.6913205
Oliver Höftberger, R. Obermaisser
Embedded real-time systems with dynamic resource management capabilities are able to adapt to changing resource requirements, resource availability, the occurrence of faults, and environmental changes. This enables better resource utilization, more flexibility, and increased dependability. Depending on the application domain, reconfiguration decisions must be found and applied within temporal bounds. Although semantic techniques are used to react to unexpected events in standard IT systems, they exhibit a computational complexity and temporal unpredictability that is not suitable for real-time systems. This paper describes a temporally predictable framework for reconfigurable embedded real-time systems. It uses a service-oriented approach to dynamically reconfigure component interactions. Knowledge about the system structure and semantics is provided in a system ontology containing information relevant for embedded real-time systems (e.g., transfer delay times, accuracy of relations). The ontology allows service substitutes to be generated automatically by exploiting implicit redundancy in the system. Furthermore, an algorithm is presented that searches the ontology for semantically equivalent implementations of failed services. The process of substitute search and substitute service generation is demonstrated with an example from the automotive domain.
Title: Ontology-based runtime reconfiguration of distributed embedded real-time systems. Published in: 16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013).
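The substitution search can be pictured as looking for an alternative provider of the same semantic concept whose quality attributes, such as the accuracy and transfer delay mentioned in the abstract, still satisfy the consumer's requirements. The data model, service names, and thresholds below are assumptions for illustration; the paper derives this information from a system ontology.

```c
/* Sketch of searching for a semantically equivalent substitute service
 * (hypothetical data model, not the paper's ontology format). */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *name;
    const char *provides;     /* semantic concept the service delivers */
    double      accuracy;     /* relative accuracy of the provided value */
    double      delay_ms;     /* transfer/propagation delay */
    int         failed;       /* 1 if the service is currently failed */
} Service;

static Service services[] = {
    { "wheel_speed_sensor", "vehicle_speed", 0.99,  2.0, 1 },  /* failed */
    { "gps_speed",          "vehicle_speed", 0.95, 20.0, 0 },
    { "engine_rpm_model",   "vehicle_speed", 0.90,  5.0, 0 },
};

/* Find a non-failed service providing the same concept within the
 * required accuracy and delay bounds (implicit redundancy). */
static const Service *find_substitute(const char *concept,
                                      double min_acc, double max_delay) {
    for (size_t i = 0; i < sizeof services / sizeof services[0]; i++) {
        const Service *s = &services[i];
        if (!s->failed && !strcmp(s->provides, concept) &&
            s->accuracy >= min_acc && s->delay_ms <= max_delay)
            return s;
    }
    return NULL;
}

int main(void) {
    const Service *sub = find_substitute("vehicle_speed", 0.9, 10.0);
    printf("substitute: %s\n", sub ? sub->name : "none");
    return 0;
}
```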
Pub Date: 2013-06-19 | DOI: 10.1109/ISORC.2013.6913197
Dennis C. Feiock, James H. Hill
Software instrumentation is an important aspect of software-intensive distributed real-time and embedded (DRE) systems because it enables real-time feedback of system properties, such as resource usage and component state, for performance analysis. Although it is critical not to collect too much instrumentation data, to ensure minimal impact on the DRE system's existing performance properties, the design and implementation of the software instrumentation middleware determines how much instrumentation data can be collected. This can indirectly impact the DRE system's existing properties and performance analysis, and is of particular concern when using general-purpose software instrumentation middleware for DRE systems. This paper provides two contributions to instrumenting software-intensive DRE systems. First, it presents two techniques, named the Standard Flat-rate Envelope and Pay-per-use, for improving the performance of software instrumentation middleware for DRE systems. Second, it quantitatively evaluates the performance gains realized by the two techniques in the context of the Open-source Architecture for Software Instrumentation of Systems (OASIS), an open-source dynamic instrumentation middleware for DRE systems. Our results show that the Standard Flat-rate Envelope improves performance by up to 57% and Pay-per-use by up to 49%.
Title: Optimizing general-purpose software instrumentation middleware performance for distributed real-time and embedded systems. Published in: 16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013).
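One general way instrumentation middleware keeps its footprint small is to record samples into preallocated, fixed-size slots so that collecting a metric costs only a copy rather than a heap allocation. The ring buffer below is a generic sketch of that idea; it is not the OASIS implementation, the record layout and sizes are invented, and it should not be read as an exact rendering of the Standard Flat-rate Envelope or Pay-per-use techniques.

```c
/* Generic sketch: preallocated fixed-size records in a ring buffer so
 * recording a metric costs only a copy (illustration, not OASIS). */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define RECORD_PAYLOAD 32    /* fixed payload size per record */
#define RING_CAPACITY  1024

typedef struct {
    uint64_t timestamp;
    uint32_t metric_id;
    uint32_t length;                  /* bytes actually used */
    char     payload[RECORD_PAYLOAD]; /* fixed-size, preallocated */
} Record;

static Record ring[RING_CAPACITY];
static size_t head = 0;

/* Copy a sample into the next preallocated slot; the oldest data is
 * overwritten when the buffer is full, so the probe never blocks. */
static void record_metric(uint32_t id, const void *data, uint32_t len,
                          uint64_t now) {
    Record *r = &ring[head];
    head = (head + 1) % RING_CAPACITY;
    if (len > RECORD_PAYLOAD) len = RECORD_PAYLOAD;  /* truncate */
    r->timestamp = now;
    r->metric_id = id;
    r->length = len;
    memcpy(r->payload, data, len);
}

int main(void) {
    double cpu_load = 0.42;
    record_metric(1, &cpu_load, sizeof cpu_load, 123456);
    printf("recorded metric %u (%u bytes)\n", ring[0].metric_id, ring[0].length);
    return 0;
}
```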
Pub Date: 2013-06-19 | DOI: 10.1109/ISORC.2013.6913195
C. Geyer, Benedikt Huber, Daniel Prokesch, P. Puschner
When designing modern real-time systems, which have to deliver results by specified deadlines, knowing the worst-case execution time (WCET) of software components is of utmost importance. Although there has been much research on WCET analysis in recent years, focused on improving the accuracy of processor models and WCET-calculation methods, researchers have paid little attention to the impact of the instruction set architecture (ISA) on the time predictability of the code executing on a given real-time processor. In this paper we explore ISA extensions that allow compilers to generate highly time-predictable code. To this end, an existing instruction set has been extended with a number of instructions, and the LLVM compiler framework has been adapted to use these new instructions in its assembly-code generator. The timing behavior of the generated code has been evaluated by means of an instruction-set simulator. The results of the experiments allowed us to identify a promising combination of the newly introduced instructions. Using these instructions reduces the number of branches in the assembly code, thus improving time predictability while still providing competitive worst-case timing.
Title: Time-predictable code execution — Instruction-set support for the single-path approach. Published in: 16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013).
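The single-path idea can be shown at the source level by replacing a data-dependent branch with a select-style computation that performs the same instruction sequence for every input; on an extended ISA the compiler would emit predicated instructions instead. The C sketch below is only an analogy for that transformation and does not use the paper's instruction set.

```c
/* Branchy vs. single-path (branchless) selection; the branchless form
 * executes the same instruction sequence for every input, which is the
 * goal of predicated/single-path code generation.  Illustration only. */
#include <stdio.h>
#include <assert.h>

/* Conventional code: the execution path depends on the data. */
static int clamp_branchy(int x, int hi) {
    if (x > hi)
        return hi;
    return x;
}

/* Single-path style: both values are computed, the condition only
 * selects the result (a compiler can lower this to predicated moves). */
static int clamp_single_path(int x, int hi) {
    int cond = (x > hi);                 /* 0 or 1 */
    return cond * hi + (1 - cond) * x;   /* select without branching */
}

int main(void) {
    for (int x = -3; x <= 3; x++)
        assert(clamp_branchy(x, 1) == clamp_single_path(x, 1));
    printf("both variants agree\n");
    return 0;
}
```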
Pub Date: 2013-06-19 | DOI: 10.1109/ISORC.2013.6913209
Zheng Li, Li Wang, Shangping Ren, Gang Quan
In this paper, we study the energy minimization problem for a frame-based real-time system with guaranteed reliability using the checkpointing technique. We formally prove that executing a real-time task set with a uniform frequency, or with neighboring frequencies if the desired frequency is not available, not only minimizes its energy consumption but also achieves maximal reliability. Based on this theoretical conclusion, we further develop a Dynamic Voltage and Frequency Scaling (DVFS) and checkpoint allocation strategy for a task set that guarantees both reliability and deadline constraints with minimal energy consumption. The proposed strategy has a very small frequency-switching overhead, as no more than one frequency change is needed for the execution of the entire task set, and is thus particularly effective for processors with a large frequency-switching overhead. We further compare our approach empirically with recent work published in the literature. The experimental results show that the proposed approach can reduce energy consumption by as much as 15%.
Title: Energy minimization for checkpointing-based approach to guaranteeing real-time systems reliability. Published in: 16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013).
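The scheduling rule described above, run the whole frame at one uniform frequency or split it between the two neighboring available frequencies when the ideal one does not exist, reduces to a small calculation. In the sketch below the frequency table, cycle count, and deadline are made-up numbers chosen only to show the arithmetic; with them the ideal 0.7 GHz is unavailable, so the workload is split between 0.8 GHz and 0.6 GHz so that the deadline is met exactly.

```c
/* Uniform-frequency selection for a frame-based task set: use one
 * frequency if an exact match exists, otherwise split the workload
 * between the two neighboring available frequencies so the deadline
 * is met exactly.  All numbers are invented for illustration. */
#include <stdio.h>

int main(void) {
    const double freqs[] = { 0.4, 0.6, 0.8, 1.0 };  /* available (GHz) */
    const int nfreq = 4;
    const double cycles   = 7.0e8;  /* total worst-case cycles of the frame */
    const double deadline = 1.0;    /* frame length / deadline in seconds   */

    double f_ideal = cycles / deadline / 1.0e9;     /* 0.7 GHz needed */

    /* Find neighboring available frequencies f_lo <= f_ideal <= f_hi. */
    double f_lo = freqs[0], f_hi = freqs[nfreq - 1];
    for (int i = 0; i < nfreq - 1; i++)
        if (freqs[i] <= f_ideal && f_ideal <= freqs[i + 1]) {
            f_lo = freqs[i];
            f_hi = freqs[i + 1];
        }

    if (f_lo == f_ideal || f_hi == f_ideal) {
        printf("run everything at %.2f GHz\n", f_ideal);
    } else {
        /* Cycles x run at f_hi so that x/f_hi + (C-x)/f_lo = D. */
        double c_ghz = cycles / 1.0e9;              /* cycles in GHz*s units */
        double x = f_hi * (c_ghz - deadline * f_lo) / (f_hi - f_lo);
        printf("run %.2e cycles at %.2f GHz, then %.2e cycles at %.2f GHz\n",
               x * 1.0e9, f_hi, (c_ghz - x) * 1.0e9, f_lo);
    }
    return 0;
}
```

With these numbers, 4e8 cycles run at 0.8 GHz and 3e8 cycles at 0.6 GHz, for a total of exactly 1.0 s, and only one frequency switch is needed.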
Pub Date: 2013-06-19 | DOI: 10.1109/ISORC.2013.6913230
F. Rammig, Lial Khaluf, N. Montealegre, Katharina Stahl, Yuhong Zhao
Upcoming Cyber-Physical Systems with a strong need to adapt to changing environments require an appropriate programming approach. In this paper we argue that such systems have to be highly adaptive and self-evolving. We outline the general vision and approach, and present specific techniques that address important aspects of such a programming paradigm. The aspects discussed include the identification of adaptation needs using online model checking, real-time-aware adaptation mechanisms, and self-adapting safety guards based on Artificial Immune Systems.
Title: Organic real-time programming — Vision and approaches towards self-evolving and adaptive real-time software. Published in: 16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013).
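One of the building blocks named above, Artificial Immune Systems, is commonly realized with negative selection: detectors are generated so that they match no known-good (self) behavior, and a later match therefore signals an anomaly. The sketch below shows that idea on 8-bit signatures with Hamming-distance matching; the encoding and thresholds are assumptions and not the authors' design.

```c
/* Negative-selection sketch: keep only detectors that do not match
 * "self" patterns; a runtime match then flags an anomaly.
 * The encoding (8-bit signatures, Hamming matching) is assumed. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define MATCH_DIST 1   /* a detector matches if the Hamming distance <= 1 */

static int hamming(uint8_t a, uint8_t b) {
    uint8_t x = a ^ b;
    int d = 0;
    while (x) { d += x & 1u; x >>= 1; }
    return d;
}

static int matches(uint8_t detector, uint8_t sample) {
    return hamming(detector, sample) <= MATCH_DIST;
}

int main(void) {
    const uint8_t self[] = { 0x0F, 0x33, 0xF0 };   /* normal behavior */
    uint8_t detectors[16];
    int ndet = 0;

    srand(42);
    /* Generate candidate detectors and discard those matching self. */
    while (ndet < 4) {
        uint8_t cand = (uint8_t)(rand() & 0xFF);
        int self_match = 0;
        for (size_t i = 0; i < sizeof self; i++)
            if (matches(cand, self[i])) { self_match = 1; break; }
        if (!self_match)
            detectors[ndet++] = cand;
    }

    /* Monitoring: a sample matching any detector is treated as non-self. */
    uint8_t sample = 0xAA;
    for (int i = 0; i < ndet; i++)
        if (matches(detectors[i], sample)) {
            printf("anomaly detected (detector 0x%02X)\n", detectors[i]);
            return 0;
        }
    printf("sample looks normal\n");
    return 0;
}
```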
Pub Date: 2013-06-19 | DOI: 10.1109/ISORC.2013.6913203
M. Peiris, M. Hasan, James H. Hill
This paper presents a method and tool named the Dataflow Model Auto-Constructor (DMAC). DMAC uses frequent-sequence mining and Dempster-Shafer theory to mine a system execution trace and reconstruct its corresponding dataflow model. Distributed system testers then use the resulting dataflow model to analyze performance properties (e.g., end-to-end response time, throughput, and service time) captured in the system execution trace. Results from applying DMAC to different case studies show that DMAC can reconstruct dataflow models that cover up to 94% of the events in the original system execution trace. They also show that more than two sources of evidence are needed to reconstruct dataflow models for systems with multiple execution contexts.
Title: Auto-constructing dataflow models from system execution traces. Published in: 16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013).
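The Dempster-Shafer part of DMAC amounts to combining independent sources of evidence with Dempster's rule of combination. The sketch below combines two mass functions over a two-hypothesis frame (think of two candidate dataflow edges); the frame and the mass values are invented for illustration.

```c
/* Dempster's rule of combination for two mass functions over a frame
 * of two hypotheses {A, B}.  Subsets are bitmasks: {A}=0b01, {B}=0b10,
 * {A,B}=0b11.  The mass values are invented for illustration. */
#include <stdio.h>

#define NSETS 4   /* subsets of a 2-element frame: {}, {A}, {B}, {A,B} */

int main(void) {
    /* Evidence from two sources, indexed by subset bitmask. */
    double m1[NSETS] = { 0.0, 0.6, 0.1, 0.3 };  /* source 1 */
    double m2[NSETS] = { 0.0, 0.5, 0.2, 0.3 };  /* source 2 */
    double m[NSETS]  = { 0.0, 0.0, 0.0, 0.0 };  /* combined  */
    double conflict = 0.0;

    for (int b = 1; b < NSETS; b++)
        for (int c = 1; c < NSETS; c++) {
            int inter = b & c;               /* set intersection */
            if (inter == 0)
                conflict += m1[b] * m2[c];   /* conflicting evidence */
            else
                m[inter] += m1[b] * m2[c];
        }

    /* Normalize by the non-conflicting mass. */
    for (int a = 1; a < NSETS; a++)
        m[a] /= 1.0 - conflict;

    printf("m(A)=%.3f  m(B)=%.3f  m(A,B)=%.3f  conflict=%.3f\n",
           m[1], m[2], m[3], conflict);
    return 0;
}
```

With these inputs the combined belief concentrates on hypothesis A (about 0.76 after normalization), illustrating how agreement between sources strengthens a reconstructed dataflow edge.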