Pub Date: 2013-06-19 | DOI: 10.1109/ISORC.2013.6913217
T. Heimfarth, R. R. Oliveira, R. W. Bettio, Ariel F. F. Marques, C. Toledo
The development of Wireless Sensor Network (WSN) applications is an arduous task, since the developer has to design the behavior of the nodes and their interactions. The automatic generation of WSN applications is desirable to reduce costs, since it drastically reduces human effort. This paper presents the use of Genetic Programming to automatically generate WSN applications. A scripting language based on events and actions is proposed to represent the WSN behavior. Events represent the state of a given sensor node and actions modify these states. Some events are internal states and others are external states captured by the sensors. A parallel genetic algorithm is used to automatically generate WSN applications in this scripting language. These scripts are executed by a middleware installed on all sensor nodes. This approach enables the application designer to define only the overall objective of the WSN, which is expressed by means of a fitness function. An event-detection problem is presented in order to evaluate the proposed method. The results show the capability of the developed approach to successfully solve WSN problems through the automatic generation of applications.
{"title":"Automatic generation and configuration of Wireless Sensor Networks applications with Genetic Programming","authors":"T. Heimfarth, R. R. Oliveira, R. W. Bettio, Ariel F. F. Marques, C. Toledo","doi":"10.1109/ISORC.2013.6913217","DOIUrl":"https://doi.org/10.1109/ISORC.2013.6913217","url":null,"abstract":"The development of Wireless Sensor Networks (WSNs) applications is an arduous task, since the developer has to design the behavior of the nodes and their interactions. The automatic generation of WSN's applications is desirable to reduce costs, since it drastically reduces the human effort. This paper presents the use of Genetic Programming to automatically generate WSNs applications. A scripting language based on events and actions is proposed to represent the WSN behavior. Events represent the state of a given sensor node and actions modify these states. Some events are internal states and others are external states captured by the sensors. A parallel genetic algorithm is used to automatically generate WSNs applications in this scripting language. These scripts are executed by a middleware installed on all sensors nodes. This approach enables the application designer to define only the overall objective of the WSN. This objective is defined by means of a fitness function. An event-detection problem is presented in order to evaluate the proposed method. The results showed the capability of the developed approach to successfully solve WSNs problems through the automatic generation of applications.","PeriodicalId":330873,"journal":{"name":"16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013)","volume":"312 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122967437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-06-19 | DOI: 10.1109/ISORC.2013.6913239
T. Ohta, Shuhei Ishizuka, Takeshi Hashimoto, Y. Kakuda
The service discovery scheme using mobile agents that we have previously proposed for mobile ad hoc networks (MANETs) is effective for the dissemination and collection of service information, because mobile agents can migrate among nodes despite topology changes in MANETs. In the proposed scheme, a mobile agent migrates to the neighboring node at which the number of services collected within a predefined time (called the service collection time) is lowest. This paper investigates the properties of the proposed mobile-agent-based scheme for real-time service dissemination and collection in MANETs through simulation experiments. The simulation results show that the service collection time has an impact on the dissemination time of the proposed scheme when services are unevenly distributed, and that the proposed scheme can achieve real-time service dissemination and collection by autonomously adjusting the service collection time.
{"title":"A new mobile agent based scheme for self-organizing real-time service dissemination and collection in mobile ad hoc networks","authors":"T. Ohta, Shuhei Ishizuka, Takeshi Hashimoto, Y. Kakuda","doi":"10.1109/ISORC.2013.6913239","DOIUrl":"https://doi.org/10.1109/ISORC.2013.6913239","url":null,"abstract":"A new service discovery scheme using mobile agents for mobile ad hoc networks (MANETs) we have proposed are effective for dissemination and collection of service information because mobile agents can migrate among nodes in spite of topology change in MANETs. In the proposed scheme, mobile agents migrate to one of neighboring nodes in which the number of services collected in the predefined time (called service collection time) is the lowest. This paper investigates the property of the proposed mobile agent based scheme for realtime service dissemination and collection in MANETs through simulation experiments. The simulation results show that the service collection time has the impact on the dissemination time of the proposed scheme in imbalanced location of services, and that the proposed scheme can achieve real-time service dissemination and collection by autonomously adjusting the service collection time.","PeriodicalId":330873,"journal":{"name":"16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114578483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-06-19 | DOI: 10.1109/ISORC.2013.6913220
P. Puschner, Daniel Prokesch, Benedikt Huber, J. Knoop, Stefan Hepp, Gernot Gebhard
Good worst-case performance and the availability of high-quality bounds on the worst-case execution time (WCET) of tasks are central to the construction of hard real-time computer systems for safety-critical applications. Timing predictability of the whole software/hardware system is a necessary prerequisite to achieve this. We show that a predictable architecture and the tight, seamless integration of compilation and WCET analysis are beneficial for achieving these two goals of good worst-case performance and high-quality WCET bounds for computation tasks. Information generated by the compiler improves the WCET analysis, and detailed timing feedback from the WCET analysis helps the compiler to reduce the worst-case execution time. The paper describes the interface and the interaction between the industrial-strength WCET analysis tool and the compiler as developed in the EU FP7 T-CREST project, and demonstrates the cooperation of these tools with an illustrative example.
{"title":"The T-CREST approach of compiler and WCET-analysis integration","authors":"P. Puschner, Daniel Prokesch, Benedikt Huber, J. Knoop, Stefan Hepp, Gernot Gebhard","doi":"10.1109/ISORC.2013.6913220","DOIUrl":"https://doi.org/10.1109/ISORC.2013.6913220","url":null,"abstract":"A good worst-case performance and the availability of high-quality bounds on the worst-case execution time (WCET) of tasks are central for the construction of hard realtime computer systems for safety-critical applications. Timing-predictability of the whole software/hardware system is a necessary prerequisite to achieve this. We show that a predictable architecture and the tight and seamless integration of compilation and WCET analysis is beneficial to achieve the initial two goals of good worst-case performance and the availability of high-quality bounds on the WCET of computation tasks. Information generated by the compiler improves the WCET analysis. Detailed timing feedback from the WCET analysis helps the compiler to reduce the worst case execution time. The paper describes the interface and the interaction between the industrial strength WCET analysis tool and the compiler as developed in the EU FP7 T-CREST project, and demonstrates the cooperation of these tools with an illustrative example.","PeriodicalId":330873,"journal":{"name":"16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123845993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-06-19 | DOI: 10.1109/ISORC.2013.6913223
A. A. Fröhlich, Alexandre Massayuki Okazaki, Rodrigo Vieira Steiner, Peterson Oliveira, J. E. Martina
It is a mistake to assume that each embedded object in the Internet of Things will implement a TCP/IP stack similar to those present in contemporary operating systems. Typical requirements of ordinary things, such as low power consumption, small size, and low cost, demand innovative solutions. In this article, we describe the design, implementation, and evaluation of a trustful infrastructure for the Internet of Things based on EPOSMote. The infrastructure was built around EPOS' second generation of motes, which features an ARM processor and an IEEE 802.15.4 radio transceiver. It is presented to end users through a trustful communication protocol stack compatible with TCP/IP. Trustfulness was tackled at the MAC level by extending C-MAC, EPOS' native MAC protocol, with AES capabilities that are used to encrypt and authenticate IP datagrams. Our authentication mechanism encompasses temporal information to protect the network against replay attacks. The prototype implementation was assessed for processing, memory, and energy consumption, with positive results.
{"title":"A cross-layer approach to trustfulness in the Internet of Things","authors":"A. A. Fröhlich, Alexandre Massayuki Okazaki, Rodrigo Vieira Steiner, Peterson Oliveira, J. E. Martina","doi":"10.1109/ISORC.2013.6913223","DOIUrl":"https://doi.org/10.1109/ISORC.2013.6913223","url":null,"abstract":"It is a mistake to assume that each embedded object in the Internet of Things will implement a TCP/IP stack similar to those present in contemporary operating systems. Typical requirements of ordinary things, such as low power consumption, small size, and low cost, demand innovative solutions. In this article, we describe the design, implementation, and evaluation of a trustful infrastructure for the Internet of Things based on EPOSMote. The infrastructure was built around EPOS' second generation of motes, which features an ARM processor and an IEEE 802.15.4 radio transceiver. It is presented to end users through a trustful communication protocol stack compatible with TCP/IP. Trustfulness was tackled at MAC level by extending C-MAC, EPOS native MAC protocol, with AES capabilities that were used to encrypt and authenticate IP datagrams packets. Our authentication mechanism encompasses temporal information to protect the network against replay attacks. The prototype implementation was assessed for processing, memory, and energy consumption with positive results.","PeriodicalId":330873,"journal":{"name":"16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013)","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126532516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-06-19 | DOI: 10.1109/ISORC.2013.6913241
Philipp Ittershagen, Philipp A. Hartmann, Kim Grüttner, A. Rettberg
With the accelerating pervasiveness of multi-core platforms in the embedded domain and the ongoing need for more computational power and increased integration, multi-core scheduling for real-time and mixed-criticality applications is an active research topic. In this paper, we give an overview of the history and the current state of the art in multi-core real-time scheduling. A special focus is put on shared-resource access protocols and hierarchical scheduling approaches, both of which are increasingly important due to the higher spatial integration and stronger coupling between the different subsystems, both on the application and on the multi-core architectural level. Moreover, hierarchical scheduling is a promising approach in the area of mixed-criticality systems to enable composability and segregation, which are needed to cope with the complexity of such systems. This survey will be of interest to researchers and practitioners in the field of real-time scheduling for multi-core systems.
{"title":"Hierarchical real-time scheduling in the multi-core era — An overview","authors":"Philipp Ittershagen, Philipp A. Hartmann, Kim Grüttner, A. Rettberg","doi":"10.1109/ISORC.2013.6913241","DOIUrl":"https://doi.org/10.1109/ISORC.2013.6913241","url":null,"abstract":"With the accelerating pervasiveness of multi-core platforms in the embedded domains and the on-going need for more computational power and increased integration, multi-core scheduling for real-time and mixed-critical applications is an active research topic. In this paper, we give an overview on the history and the current state-of-the-art on multi-core real-time scheduling. A special focus is put on shared resource access protocols and hierarchical scheduling approaches, both of which are increasingly important due to the higher spatial integration and stronger coupling between the different subsystems, both on the application and on the multi-core architectural level. Moreover, hierarchical scheduling is a promising approach in the area of mixed-criticality systems to enable composability and segregation, which is needed to cope with the complexity of such systems. This survey will be of interest to researchers and practitioners in the field of real-time scheduling for multi-core systems.","PeriodicalId":330873,"journal":{"name":"16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131855553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-06-19 | DOI: 10.1109/ISORC.2013.6913194
Miao Song, Shuhui Li, Shangping Ren, S. Hong, X. Hu
In distributed real-time systems, when resources cannot meet workload demand, some jobs have to be removed from further execution. The decision as to which job to remove directly influences the system's computation efficiency, i.e., the ratio between the computation contributed to successful completions of real-time jobs and the total computation contributed to the execution of jobs that may or may not be completed. The paper presents two job removal policies that aim at maximizing the system's computation efficiency for distributed real-time applications in which the applications' end-to-end deadlines must be guaranteed. Experiments based on benchmark applications generated by TGFF [1] are conducted and compared with recent work in the literature. The results show clear benefits of the developed approaches: they can achieve as much as a 20% improvement in computation efficiency.
{"title":"Computation efficiency driven job removal policies for meeting end-to-end deadlines in distributed real-time systems","authors":"Miao Song, Shuhui Li, Shangping Ren, S. Hong, X. Hu","doi":"10.1109/ISORC.2013.6913194","DOIUrl":"https://doi.org/10.1109/ISORC.2013.6913194","url":null,"abstract":"In distributed real-time systems, when resource cannot meet workload demand, some jobs have to be removed from further execution. The decision as to which job to remove directly influences the system computation efficiency, i.e., the ratio between computation contributed to successful completions of real-time jobs and total computation contributed to the execution of jobs that may or may not be completed. The paper presents two job removal policies which aim at maximizing system's computation efficiency for distributed real-time applications where the applications' end-to-end deadlines must be guaranteed. Experiments based on benchmark applications generated by TGFF [1] are conducted and compared with recent work in the literature. The results show clear benefits of the developed approaches - they can achieve as much as 20% computation efficiency improvement.","PeriodicalId":330873,"journal":{"name":"16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124322838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-06-19 | DOI: 10.1109/ISORC.2013.6913202
Nada Louati, R. Bouaziz, Claude Duvallet, B. Sadeg
A real-time database usually has to maintain large quantities of temporal data items whose values remain valid only within their validity intervals. Each data item in the database models the current status of a physical real-world entity. The freshness of these temporal data items is maintained by update transactions that need to be executed within their deadlines. In this paper, we propose a model-based data freshness management approach, built around a model for expressing data freshness requirements. This approach is based on the TIME package of the MARTE (Modeling and Analysis of Real-Time and Embedded systems) profile, which provides modeling concepts to deal with the features of real-time and embedded systems.
{"title":"Managing data freshness with MARTE in real-time databases","authors":"Nada Louati, R. Bouaziz, Claude Duvallet, B. Sadeg","doi":"10.1109/ISORC.2013.6913202","DOIUrl":"https://doi.org/10.1109/ISORC.2013.6913202","url":null,"abstract":"A real-time database usually requires maintaining a great quantities of temporal data items whose values remain valid only within their validity interval. Each data item in the database models the current status of a physical real-world entity. The freshness of these temporal data items is maintained by update transactions that need to be executed within their deadlines. In this paper, we propose a model-based data freshness management approach consisting of a model for expressing data freshness requirements. This approach is based on the TIME package of MARTE (Modeling and Analysis of Real-Time and Embedded systems) profile which provides capabilities of modeling concepts to deal with real-time and embedded systems features.","PeriodicalId":330873,"journal":{"name":"16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013)","volume":"122 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129470534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-06-19 | DOI: 10.1109/ISORC.2013.6913212
Oleg Litvinski, Abdelouahed Gherbi
Cloud computing is a computing model which is essentially characterized by on-demand and dynamic provisioning of computing resources. In this model, a cloud is a large-scale distributed system which leverages internet and virtualization technologies to provide computing resources as a service. Efficient, flexible, and dynamic resource management is among the most challenging research issues in this domain. In this context, we present a study focusing on the dynamic behavior of the scheduling functionality of an Infrastructure-as-a-Service (IaaS) cloud, namely the OpenStack Scheduler. Through this study, we aim at identifying the limitations of this scheduler and ultimately enabling its extension using enhanced metrics. Towards this end, we present a Design of Experiments (DOE) based approach for the evaluation of the OpenStack Scheduler behavior. In particular, we use the screening type of experiment to identify the factors with significant effects on the responses. In our context, these factors are the amount of memory and the number of CPU cores assigned to a virtual machine (VM), and the amount of memory and the number of cores on the physical nodes. More specifically, we present a balanced two-level fractional factorial experimental design of resolution IV with four center points and no replication.
{"title":"Openstack scheduler evaluation using design of experiment approach","authors":"Oleg Litvinski, Abdelouahed Gherbi","doi":"10.1109/ISORC.2013.6913212","DOIUrl":"https://doi.org/10.1109/ISORC.2013.6913212","url":null,"abstract":"Cloud computing is a computing model which is essentially characterized by an on-demand and dynamic provisioning of computing resources. In this model, a cloud is a large-scale distributed system which leverages internet and virtualization technologies to provide computing resources as a service. Efficient, flexible and dynamic resource management is among the most challenging research issues in this domain. In this context, we present a study focusing on the dynamic behavior of the scheduling functionality of an Infrastructure-as-a-Service (IaaS) cloud, namely OpenStack Scheduler. We aim, through this study at identifying the limitations of this scheduler and ultimately enabling its extension using enhanced metrics. Towards this end, we present a Design of Experiment (DOE) based approach for the evaluation of the OpenStack Scheduler behavior. In particular, we use the screening type of experiment to identify the factors with significant effects on the responses. In our context, these factors are the amount of memory and the number of CPU cores assigned to virtual machine (VM) and the amount of memory and the number of cores on physical nodes. More specifically, we present a two-level fractional factorial balanced with the resolution IV and four center points experimental design with no replication.","PeriodicalId":330873,"journal":{"name":"16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128245595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-06-19 | DOI: 10.1109/ISORC.2013.6913234
C. Landauer, K. Bellman, Phyllis R. Nelson
No system in the real world can compute an appropriate response to every situation it encounters, or even to most situations it is likely to encounter. Biological systems address this issue with four strategies: (1) a repertoire of already computed responses tied to a situation-recognition process, (2) organized in a response-time hierarchy that allows a quick response to occur immediately while one or more slower and more deliberate responses begin at the same time, with (3) decision processes that allow one of them to take over after a little while, or that (4) merge several of them into a combined and possibly novel response. In this paper, we describe an approach to building self-adaptive computing systems that incorporates these strategies, to cope with their intended use in hazardous, remote, unknown, or otherwise difficult environments, in which it is known a priori that the system cannot keep up with all important events and that “as fast as possible” is not appropriate for some interactions. The key to implementing these strategies is an abstraction/refinement hierarchy of behavioral models and processes at multiple levels of granularity and precision. The key to coordinating these different models is the collection of integrative mappings among them, which are developed along with the models and used for managing system behavior. We also describe the system development process that we use to build such systems, which differs from conventional methods in that it takes the basic artifacts of development, considered as partial models of aspects of the system in its environment, and retains them all in a model hierarchy that eventually becomes the definition of the run-time system. We show how to implement such systems, explain why we think they are good candidates for real-time operational environments, and illustrate the method with an example implementation.
{"title":"Modeling spaces for real-time embedded systems","authors":"C. Landauer, K. Bellman, Phyllis R. Nelson","doi":"10.1109/ISORC.2013.6913234","DOIUrl":"https://doi.org/10.1109/ISORC.2013.6913234","url":null,"abstract":"No system in the real world can compute an appropriate response in reaction to every situation it encounters, or even most situations it is likely to encounter. Biological systems address this issue with four strategies: (1) a repertoire of already computed responses tied to a situation recognition process, (2) organized in a response-time hierarchy that allows a quick response to occur immediately, and one or more slower and more deliberate responses to begin at the same time, with (3) decision processes that allow one of them to take over after a little while, or that (4) merge several of them in a combined and possibly novel response. In this paper, we describe an approach to building self-adaptive computing systems that incorporates these strategies, to cope with their intended use in hazardous, remote, unknown, or otherwise difficult environments, in which it is known a priori that the system cannot keep up with all important events, and that “as fast as possible” is not appropriate for some interactions. The key to implementing these strategies is an abstraction/refinement hierarchy of behavioral models and processes at multiple levels of granularity and precision. The key to coordinating these different models is the collection of integrative mappings among them, which are developed along with the models, and used for managing system behavior. We also describe the system development process that we use to build such systems, which differs from conventional methods by taking the basic artifacts of development, considered as partial models of aspects of the system in its environment, and retains them all in a model hierarchy, which eventually becomes the definition of the run time system. We show how to implement such systems, explain why we think they are good candidates for real-time operational environments, and illustrate the method with an example implementation.","PeriodicalId":330873,"journal":{"name":"16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131033477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-06-19 | DOI: 10.1109/ISORC.2013.6913240
F. Rammig, Katharina Stahl, G. Vaz
In self-adapting embedded real-time systems, operating systems and software provide mechanisms to self-adapt to changing requirements. Autonomous adaptation decisions introduce novel risks, as they may lead to unforeseen system behavior that could not have been specified within a design-time model. However, as part of its functionality the operating system has to ensure the reliability of the entire self-x system during run-time. In this paper, we present our work in progress on an operating system framework which aims to identify anomalous or malicious system states at run-time without a sophisticated specification-time model. Inspired by the Danger Theory from Artificial Immune Systems, we propose an anomaly detection mechanism that does not operate solely on the local behavior information of the monitored component. Furthermore, to ensure an efficient behavior evaluation, the anomaly detection mechanism incorporates system-wide input signals that indicate, e.g., the existence of a potential danger within the overall system or the occurrence of a system adaptation. Due to its ability to cope with dynamically changing behavior and to identify unintended behavioral deviations, this framework seems to be a promising approach to enhance the run-time dependability of a self-x system.
{"title":"A framework for enhancing dependability in self-x systems by Artificial Immune Systems","authors":"F. Rammig, Katharina Stahl, G. Vaz","doi":"10.1109/ISORC.2013.6913240","DOIUrl":"https://doi.org/10.1109/ISORC.2013.6913240","url":null,"abstract":"In self-adapting embedded real-time systems, operating systems and software provide mechanisms to self-adapt to changing requirements. Autonomous adaptation decisions introduce novel risks as they may lead to unforeseen system behavior that could not have been specified within a design-time model. However, as part of its functionality the operating system has to ensure the reliability of the entire self-x system during run-time. In this paper, we present our work in progress for an operating system framework which aims to identify anomalous or malicious system states at run-time without a sophisticated specification-time model. Inspired by the Artificial Immune Systems Danger Theory, we propose an anomaly detection mechanism that operates not only on the local system behavior information of the monitored component. Furthermore, to ensure an efficient behavior evaluation, the anomaly detection mechanism implies system-wide input signals that indicate e.g the existence of a potential danger within the overall system or the occurrence of a system adaption. Due to the ability of this framework to cope with dynamically changing behavior and to identify unintended behavioral deviations, it seems to be a promising approach to enhance the run-time dependability of a self-x system.","PeriodicalId":330873,"journal":{"name":"16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013)","volume":"25 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123357482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}