"A hybrid approach in TADE for derivation of execution time bounds of program-segments in distributed real-time embedded computing", C. Im and Kwang-rok Kim. Ninth IEEE International Symposium on Object and Component-Oriented Real-Time Distributed Computing (ISORC'06), 2006. doi:10.1109/ISORC.2006.5

Guaranteeing response times of real-time (RT) distributed computing systems has been recognized as one of the biggest challenges by the RT software research community for three decades. The concept of a hybrid approach that combines analytical derivation and testing-based statistical derivation in a symbiotic form to meet this challenge was presented in recent years. However, concrete practical hybrid approaches are still in the early stages of development. One such approach, pursued by the authors and their collaborators, is presented here. This paper focuses on deriving tight execution time bounds for the segments of object methods that do not involve calls for services from the operating system kernel and middleware. A case study demonstrating how the adopted approach handles a simple practical application is also presented.
"A framework for DRE middleware, an application to DDS", J. Hugues, L. Pautet, and F. Kordon. ISORC'06, 2006. doi:10.1109/ISORC.2006.4

Heterogeneous non-functional requirements of DRE systems put a limit on middleware engineering; building an application-tailored middleware becomes a challenge. In this paper, we show how we use the PolyORB middleware and its architecture as a framework to implement the Data Distribution Service (DDS) specification recently published by the OMG. We demonstrate how the architecture proposed by PolyORB enables a rapid implementation of this specification and allows for extreme tailorability to support application requirements.
"Automatic memory management in utility accrual scheduling environments", Shahrooz Feizabadi and Godmar Back. ISORC'06, 2006. doi:10.1109/ISORC.2006.21

Convenience, reliability, and effectiveness of automatic memory management have long been established in modern systems and programming languages such as Java. The timeliness requirements of real-time systems, however, impose specific demands on the operational parameters of the garbage collector. The memory requirements of real-time tasks must be accommodated with a predictable impact on the timeline, and under the purview of the scheduler. Utility accrual is a method of dynamic overload scheduling that responds to CPU overload conditions by producing a schedule that heuristically maximizes a predefined metric of utility. Such systems can also face memory overload situations in which the cumulative memory demand exceeds the amount of memory available. This paper presents a utility accrual algorithm for uniprocessor CPU and garbage collection scheduling that addresses memory overload conditions. By tightly linking CPU and memory allocation, the scheduler can appropriately respond to overload along both dimensions. This scheduler is the first of its kind to enable the use of automatic memory management in a utility accrual system. Experimental results using actual Java application profiles indicate the viability of this model.
"An IEEE1394-based real-time distributed IPC system for collaborating TMO's", J. Son, Sang Hyun Park, Jung-Guk Kim, and Moon-hae Kim. ISORC'06, 2006. doi:10.1109/ISORC.2006.15

The TMO (time-triggered message-triggered object) model is a well-known real-time object model for distributed timeliness computing. A couple of years ago, we developed a Linux-based real-time kernel, named TMO-Linux, supporting deadline-driven execution of TMOs. TMO-Linux and its distributed IPC subsystem have been used successfully in developing networked control systems consisting of cooperating embedded devices, but there have been difficulties in executing some TMO applications accurately due to the lack of timeliness in distributed communications. To overcome this problem, we developed a new real-time distributed IPC over IEEE 1394 for the TMO-Linux kernel. In the new system, predictable delivery services for real-time messages are provided by the isochronous transmissions of IEEE 1394. To implement these predictable delivery services, each node is given its own isochronous channel for receiving data, allocated a fixed time-slot bandwidth in an IEEE 1394 frame. This paper presents an implementation technique for the IEEE 1394-based real-time distributed IPC and the collaboration of computing nodes using TMO-Linux.
"Interaction-based behavior modeling of embedded software using UML 2.0", Sang-Uk Jeon, Jang-Eui Hong, and Doo-Hwan Bae. ISORC'06, 2006. doi:10.1109/ISORC.2006.42

Many prior approaches to UML-based embedded software design incorporate state-based behavior modeling. However, interaction-based behavior modeling provides a more intuitive view of a system. In this paper, we propose an approach to interaction-based behavior modeling of embedded software using UML 2.0. We use interaction overview diagrams and sequence diagrams to model the behavior. We present a method for constructing an interaction-based behavior model with an example, and we briefly describe the idea of generating executable code from it.
"Using the FOMDA approach to support object-oriented real-time systems development", F. Basso, T. Oliveira, and L. Becker. ISORC'06, 2006. doi:10.1109/ISORC.2006.76

This paper tackles the problem of ever-changing non-functional requirements of embedded systems, especially the architectural ones. It proposes a solution based on feature models and MDA standards, called features-oriented model-driven architecture (FOMDA). This proposal helps application designers define the mappings and transformations of UML models to as many target platforms as desired. This is done by configuring model-to-model and model-to-code transformations over tiers, where each tier represents the target platform properties to which the system must be mapped and transformed. To validate the proposal, a case study on the development of an embedded real-time system is presented, detailing how to transform a generic high-level UML model into a model specific to a given target platform. The results obtained are encouraging and suggest that the FOMDA approach can lead designers to rethink their current development process to make it less coupled to a specific target platform.
"Architecture classification for SOA-based applications", W. Tsai, C. Fan, Yinong Chen, R. Paul, and Jen-Yao Chung. ISORC'06, 2006. doi:10.1109/ISORC.2006.18

The architecture of SOA-based applications differs from traditional software architecture, which is mainly static. The architecture of an SOA-based application is dynamic, i.e., the application may be composed at runtime using existing services. Thus SOA has provided a new direction for software architecture study, where the architecture is determined at runtime and can be dynamically changed at runtime to meet new software requirements. This paper proposes an architecture classification scheme for SOA-based applications. Using this classification, several well-known SOA-based applications are reviewed, including the architectures proposed and adopted by major computer companies and standards organizations. The classification provides a unified way to evaluate a variety of architectures for SOA-based applications.
"Real-time operating systems for self-coordinating embedded systems", F. Rammig, M. Götz, T. Heimfarth, P. Janacik, and Simon Oberthür. ISORC'06, 2006. doi:10.1109/ISORC.2006.67

It can be observed that most technological artefacts are becoming intelligent "things that think", and most of these intelligent objects will be linked together into an "Internet of things". To master this omnipresent virtual "organism", completely new design and operation paradigms have to evolve. In this paper we discuss how research by our group at the University of Paderborn provides fundamental principles, methods, and tools for designing real-time operating systems for this virtual "organism" of the future. Based on our fine-granular library for the construction of reflexive RTOS, the necessary configuration tool and its on-line version are discussed. The next step towards self-coordination is a profile management system supporting self-optimization of the RTOS. The included flexible resource manager allows RTOS services to migrate dynamically between programmable processors and reconfigurable hardware. In a final step, the RTOS itself can be distributed: its services are provided by a cluster of instances instead of a single one, which makes a sophisticated, dynamically self-optimizing communication system necessary.
"Dependability driven integration of mixed criticality SW components", Shariful Islam, Robert Lindstrom, and N. Suri. ISORC'06, 2006. doi:10.1109/ISORC.2006.26

Mapping software onto hardware elements under platform resource constraints is a crucial step in the design of embedded systems. As embedded systems increasingly integrate both safety-critical and non-safety-critical software functionalities onto a shared hardware platform, a dependability-driven integration is desirable. Such an integration approach faces new challenges in mapping software components onto shared hardware resources while considering the extra-functional (dependability, timing, power consumption, etc.) requirements of the system. Considering dependability and real-time as primary drivers, we present a systematic resource allocation approach for the consolidated mapping of safety-critical and non-safety-critical applications onto a distributed platform such that their operational delineation is maintained over integration. The objective of our allocation technique is to arrive at a feasible solution satisfying multiple concurrent constraints. Ensuring criticality partitioning, avoiding error propagation, and reducing interactions across components are addressed in our approach. To demonstrate the usefulness and effectiveness of the mapping, the developed approach is applied to an actual automotive system.
"Tree-based WCET analysis on instrumentation point graphs", A. Betts and G. Bernat. ISORC'06, 2006. doi:10.1109/ISORC.2006.75

This paper presents a framework for combining low-level measurement data with high-level static analysis techniques on instrumented programs in order to generate WCET estimates, for which we introduce the instrumentation point graph (IPG). We present the notion of iteration edges, which are the most important property of the IPG from a timing analysis perspective, since they allow more path-based information to be integrated into tree-based calculations on loops. The main focus of this paper, however, is an algorithm that performs a hierarchical decomposition of an IPG into an Itree to permit tree-based WCET calculations. The Itree representation supports a novel high-level structure, the meta-loop, which enables iteration edges to be merged in the calculation stage. The timing schema required for the Itree is also presented. Finally, we outline some conclusions and future areas of interest.