Dependability and performance analysis of modern systems is facing great challenges: their scale is growing, they are becoming massively distributed, interconnected, and evolving. Such complexity makes model-based assessment a difficult and time-consuming task. For the evaluation of large systems, reusable submodels are typically adopted as an effective way to address the complexity and improve the maintainability of models. Approaches based on Stochastic Petri Nets often compose submodels by state-sharing, following predefined "patterns" that depend on the scenario of interest. However, such composition patterns are typically not formalized. Clearly defining libraries of reusable submodels, together with valid patterns for their composition, would allow complex models to be assembled automatically from a high-level description of the scenario to be evaluated. The contribution of this paper is twofold: on one hand, we describe our workflow for the automated generation of large performability models; on the other hand, we introduce TMDL, a DSL that concretely supports the workflow. After introducing the approach and the language, we detail their implementation within the Eclipse modeling platform and briefly show its usage through an example.
{"title":"A DSL-Supported Workflow for the Automated Assembly of Large Stochastic Models","authors":"Leonardo Montecchi, P. Lollini, A. Bondavalli","doi":"10.1109/EDCC.2014.33","DOIUrl":"https://doi.org/10.1109/EDCC.2014.33","url":null,"abstract":"Dependability and performance analysis of modern systems is facing great challenges: their scale is growing, they are becoming massively distributed, interconnected, and evolving. Such complexity makes model-based assessment a difficult and time-consuming task. For the evaluation of large systems, reusable submodels are typically adopted as an effective way to address the complexity and improve the maintainability of models. Approaches based on Stochastic Petri Nets often compose submodels by state-sharing, following predefined \"patterns\", depending on the scenario of interest. However, such composition patterns are typically not formalized. Clearly defining libraries of reusable submodels, together with valid patterns for their composition, would allow complex models to be automatically assembled, based on a high-level description of the scenario to be evaluated. The contribution of this paper to this problem is twofold: on one hand we describe our workflow for the automated generation of large performability models, on the other hand we introduce the TMDL language, a DSL to concretely support the workflow. After introducing the approach and the language, we detail their implementation within the Eclipse modeling platform, and briefly show its usage through an example.","PeriodicalId":364377,"journal":{"name":"2014 Tenth European Dependable Computing Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122961080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
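As a toy illustration of the state-sharing composition mentioned in the abstract above, the sketch below merges the places that a composition pattern declares as shared, while keeping the other places private to their submodel. The data layout, names, and rates are invented for illustration; this is not the paper's TMDL implementation.

```python
# Hypothetical sketch of submodel composition by state-sharing:
# places listed in `shared` are fused across submodels, private
# places are prefixed with the submodel name to avoid collisions.

def compose_by_state_sharing(submodels, shared):
    places, transitions = set(), []
    for name, net in submodels.items():
        def qualify(p):
            # shared places keep their global name; private ones get prefixed
            return p if p in shared else f"{name}.{p}"
        places.update(qualify(p) for p in net["places"])
        transitions.extend((qualify(src), qualify(dst), rate)
                           for src, dst, rate in net["transitions"])
    return {"places": places, "transitions": transitions}

# Two submodels that share the states "ok" and "cpu_failed":
# a failure model and a repair model, composed following a simple pattern.
failure = {"places": ["ok", "cpu_failed"],
           "transitions": [("ok", "cpu_failed", 1e-4)]}
repair = {"places": ["cpu_failed", "ok"],
          "transitions": [("cpu_failed", "ok", 0.5)]}
system = compose_by_state_sharing({"fail": failure, "rep": repair},
                                  shared={"ok", "cpu_failed"})
```

Because both places are shared here, the composed model collapses to a two-state failure/repair cycle; a real pattern library would also dictate which places may legally be fused.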
Manual management of dependability while operating large software systems - including failure detection, diagnosis, repair, and prevention activities - is time-consuming and error-prone. Various automatic approaches supporting these activities have been proposed, e.g., to detect and diagnose performance degradation caused by software aging and to execute reactive or proactive rejuvenation actions. However, users often mistrust fully-automatic dependability management approaches due to a lack of control over the change actions applied to the business-critical software landscape. Building trust in automatic systems is challenging. In this paper, we present our envisioned control center for semi-automatic management of large software landscapes, featuring a graphical user interface with interactive system visualizations. The control center will provide a reusable platform for integrating dependability management techniques, including monitoring and analyzing a system's dependability during production, as well as planning and executing reactive or proactive change actions on the software landscape.
{"title":"Towards a Dependability Control Center for Large Software Landscapes (Short Paper)","authors":"Florian Fittkau, A. Hoorn, W. Hasselbring","doi":"10.1109/EDCC.2014.12","DOIUrl":"https://doi.org/10.1109/EDCC.2014.12","url":null,"abstract":"Manual management of dependability while operating large software systems - including failure detection, diagnosis, repair, and prevention activities - is time-consuming and error-prone. Various automatic approaches supporting these activities have been proposed, e.g., to detect and diagnose performance degradation effects caused by software aging and to execute reactive or proactive rejuvenation actions. However, users often mistrust fully-automatic dependability management approaches due to a lack of control over the change actions conducted to the business-critical software landscape. Building trust for automatic systems is challenging. In this paper, we present our envisioned control center for a semi-automatic management of large software landscapes, featured by a graphical user interface including interactive system visualizations. The control center will provide a reusable platform for integrating techniques for dependability management, including monitoring, and analyzing a system's dependability during production as well as for planning and executing reactive or proactive change actions to the software landscape.","PeriodicalId":364377,"journal":{"name":"2014 Tenth European Dependable Computing Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117116909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tushar Deshpande, P. Katsaros, S. Smolka, S. Stoller
The Domain Name System (DNS) is an Internet-wide, hierarchical naming system used to translate domain names into numeric IP addresses. Any disruption of DNS service can have serious consequences. We present a formal game-theoretic analysis of a notable threat to DNS, namely the bandwidth amplification attack (BAA), and the countermeasures designed to defend against it. We model the DNS BAA as a two-player, turn-based, zero-sum stochastic game between an attacker and a defender. The attacker attempts to flood a victim DNS server with malicious traffic by choosing an appropriate number of zombie machines with which to attack. In response, the defender chooses among five BAA countermeasures, each of which seeks to increase the amount of legitimate traffic the victim server processes. To simplify the model and optimize the analysis, our model does not explicitly track the handling of each packet. Instead, our model is based on calculations of the rates at which the relevant kinds of events occur in each state. We use our game-based model of DNS BAA to generate optimal attack strategies, which vary the number of zombies, and optimal defense strategies, which aim to enhance the utility of the BAA countermeasures by combining them in advantageous ways. The goal of these strategies is to optimize the attacker's and defender's payoffs, which are defined using probabilistic reward-based properties, and are measured in terms of the attacker's ability to minimize the volume of legitimate traffic that is processed, and the defender's ability to maximize the volume of legitimate traffic that is processed.
{"title":"Stochastic Game-Based Analysis of the DNS Bandwidth Amplification Attack Using Probabilistic Model Checking","authors":"Tushar Deshpande, P. Katsaros, S. Smolka, S. Stoller","doi":"10.1109/EDCC.2014.37","DOIUrl":"https://doi.org/10.1109/EDCC.2014.37","url":null,"abstract":"The Domain Name System (DNS) is an Internet-wide, hierarchical naming system used to translate domain names into numeric IP addresses. Any disruption of DNS service can have serious consequences. We present a formal game-theoretic analysis of a notable threat to DNS, namely the bandwidth amplification attack (BAA), and the countermeasures designed to defend against it. We model the DNS BAA as a two-player, turn-based, zero-sum stochastic game between an attacker and a defender. The attacker attempts to flood a victim DNS server with malicious traffic by choosing an appropriate number of zombie machines with which to attack. In response, the defender chooses among five BAA countermeasures, each of which seeks to increase the amount of legitimate traffic the victim server processes. To simplify the model and optimize the analysis, our model does not explicitly track the handling of each packet. Instead, our model is based on calculations of the rates at which the relevant kinds of events occur in each state. We use our game-based model of DNS BAA to generate optimal attack strategies, which vary the number of zombies, and optimal defense strategies, which aim to enhance the utility of the BAA countermeasures by combining them in advantageous ways. The goal of these strategies is to optimize the attacker's and defender's payoffs, which are defined using probabilistic reward-based properties, and are measured in terms of the attacker's ability to minimize the volume of legitimate traffic that is processed, and the defender's ability to maximize the volume of legitimate traffic that is processed.","PeriodicalId":364377,"journal":{"name":"2014 Tenth European Dependable Computing Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132610407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
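Turn-based, zero-sum stochastic games of the kind described above are commonly solved by value iteration: maximize over actions in attacker-owned states, minimize in defender-owned states. The sketch below shows this on a tiny invented two-state game; the paper itself uses probabilistic model checking on a much richer DNS model, so the states, actions, rewards, and the `value_iterate` helper here are all illustrative assumptions.

```python
# Value iteration for a toy turn-based, zero-sum stochastic game.
# states: name -> (owner, [(reward, {successor: probability}), ...])

def value_iterate(states, gamma=0.9, eps=1e-8):
    v = {s: 0.0 for s in states}
    while True:
        nv = {}
        for s, (owner, actions) in states.items():
            vals = [r + gamma * sum(p * v[t] for t, p in succ.items())
                    for r, succ in actions]
            # attacker maximizes the payoff, defender minimizes it
            nv[s] = max(vals) if owner == "attacker" else min(vals)
        if max(abs(nv[s] - v[s]) for s in states) < eps:
            return nv
        v = nv

# Invented numbers: the attacker picks how many zombies to deploy, the
# defender picks a countermeasure; rewards model lost legitimate traffic.
game = {
    "attack": ("attacker", [(5.0, {"defend": 1.0}),    # few zombies
                            (8.0, {"defend": 1.0})]),  # many zombies
    "defend": ("defender", [(0.0, {"attack": 1.0}),    # filter traffic
                            (2.0, {"attack": 1.0})]),  # do nothing
}
v = value_iterate(game)
```

At the fixpoint the attacker always plays "many zombies" and the defender always filters, matching the intuition that optimal strategies pick the action extremal for each player.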
The analysis of massive data streams is fundamental in many monitoring applications. In particular, for network operators it is a recurrent and crucial issue to determine whether the huge data streams received at their monitored devices are correlated, as correlation may reveal the presence of malicious activities in the network system. We propose a metric that makes it possible to evaluate the correlation between distributed streams. This metric is inspired by classical metrics in statistics and probability theory, and as such allows us to understand how observed quantities change together, and in which proportion. We then propose to estimate this metric in the data stream model. In this model, functions are estimated over a huge sequence of data items, in an online fashion, and with a very small amount of memory with respect to both the size of the input stream and the value domain from which data items are drawn. We give upper and lower bounds on the quality of the metric, and provide both local and distributed algorithms that additively approximate it among n data streams using O((1/ε) log(1/δ)(log N + log m)) bits of space at each of the n nodes, where N is the size of the value domain from which data items are drawn and m is the maximal stream length. To the best of our knowledge, such a metric has never been proposed before.
{"title":"Deviation Estimation between Distributed Data Streams","authors":"E. Anceaume, Yann Busnel","doi":"10.1109/EDCC.2014.27","DOIUrl":"https://doi.org/10.1109/EDCC.2014.27","url":null,"abstract":"The analysis of massive data streams is fundamental in many monitoring applications. In particular, for networks operators, it is a recurrent and crucial issue to determine whether huge data streams, received at their monitored devices, are correlated or not as it may reveal the presence of malicious activities in the network system. We propose a metric that allows to evaluate the correlation between distributed streams. This metric is inspired from classical metric in statistics and probability theory, and as such allows us to understand how observed quantities change together, and in which proportion. We then propose to estimate this metric in the data stream model. In this model, functions are estimated on a huge sequence of data items, in an online fashion, and with a very small amount of memory with respect to both the size of the input stream and the values domain from which data items are drawn. We give upper and lower bounds on the quality of the metric, and provide both local and distributed algorithms that additively approximates the metric among n data streams by using O((1/ε)log(1/δ)(log N + log m)) bits of space for each of the n nodes, where N is the domain value from which data items are drawn, and m is the maximal stream's length. To the best of our knowledge, such a metric has never been proposed so far.","PeriodicalId":364377,"journal":{"name":"2014 Tenth European Dependable Computing Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122060660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
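As a much-simplified illustration of the underlying idea (measuring how two streams "change together" in a single pass with little memory), the sketch below keeps only a handful of running sums and computes an exact covariance in O(1) space. It is not the paper's sketch-based, additively-approximate distributed algorithm; it only conveys the one-pass, small-memory flavor.

```python
# One-pass covariance of two paired streams using constant memory:
# only four counters are kept, regardless of the stream length.

def streaming_covariance(pairs):
    n = sx = sy = sxy = 0
    for x, y in pairs:
        n += 1
        sx += x      # running sum of stream A
        sy += y      # running sum of stream B
        sxy += x * y # running sum of products
    # E[XY] - E[X]E[Y]
    return sxy / n - (sx / n) * (sy / n)

a = [1, 2, 3, 4]
b = [2, 4, 6, 8]   # b moves exactly with a, so covariance is positive
cov = streaming_covariance(zip(a, b))
```

In a distributed setting each node would maintain compact summaries of its local stream and a coordinator would combine them, which is where the paper's sketches and approximation bounds come in.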
Markov models are often used in performance and dependability analysis, allowing a precise and numerically stable computation of many result measures, including those that result from rare events. It is, however, known that simple exponential distributions, which form the basis of Markov modeling, cannot adequately describe the duration of availability or unavailability intervals of components in a distributed system. Weibull, log-normal, or Pareto distributions are commonly used to model these durations; they can capture possibly heavy-tailed behavior but cannot be analyzed analytically or numerically. Phase-type distributions and Markovian arrival processes offer an alternative for modeling availability or unavailability intervals that still results in a Markov model. Based on experiments with a large number of publicly available availability traces, we show that phase-type distributions are a flexible alternative to other commonly used distributions and, moreover, that Markov models can easily be extended to also capture correlation in the lengths of availability or unavailability intervals.
{"title":"Markov Modeling of Availability and Unavailability Data","authors":"P. Buchholz, J. Kriege","doi":"10.1109/EDCC.2014.22","DOIUrl":"https://doi.org/10.1109/EDCC.2014.22","url":null,"abstract":"Markov models are often used in performance and dependability analysis and allow a precise and numerically stable computation of many result measures including those which result from rare events. It is, however, known that simple exponential distributions, which are the base of Markov modeling, cannot adequately describe the duration of availability or unavailability intervals of components in a distributed system. Commonly used in modeling those durations are Weibull, log-normal or Pareto distributions that can also capture a possibly heavy tailed behavior but cannot be analyzed analytically or numerically. An alternative to applying the mentioned distributions in modeling availability or unavailability intervals are phase type distributions and Markovian arrival processes that still result in a Markov model. Based on experiments for a large number of publicly available availability traces, we show that phase type distributions are a flexible alternative to other commonly known distributions and even more that Markov models can be easily extended to capture also correlation in the length of availability or unavailability intervals.","PeriodicalId":364377,"journal":{"name":"2014 Tenth European Dependable Computing Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132465352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
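A two-phase hyperexponential (a probabilistic mixture of two exponentials) is one of the simplest phase-type distributions, and it already illustrates the flexibility the abstract above refers to: it can produce much more variable interval lengths than a single exponential with the same mean. The parameters below are invented for illustration.

```python
# Sampling a 2-phase hyperexponential PH distribution: with probability
# p1 the interval is drawn from an exponential with rate1, otherwise
# from an exponential with rate2.
import random

def sample_hyperexp(p1, rate1, rate2):
    rate = rate1 if random.random() < p1 else rate2
    return random.expovariate(rate)

random.seed(42)
# Mostly short intervals (rate 10), occasionally very long ones (rate 0.1):
# a crude stand-in for bursty unavailability data.
samples = [sample_hyperexp(0.95, 10.0, 0.1) for _ in range(100_000)]
mean = sum(samples) / len(samples)
# Theoretical mean: 0.95/10 + 0.05/0.1 = 0.595, far from what a single
# exponential fitted only to the mean would suggest about the tail.
```

Fitting such distributions to measured availability traces (e.g. via EM-based PH fitting tools) is what lets the resulting model remain a plain Markov chain.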
We present an engineering process model for generating software modifications that is designed to be used when most or all development artifacts about the software, including the source code, are unavailable. Such software, commonly called Software Of Unknown Provenance (SOUP), raises many doubts about the existence and adequacy of desired dependability properties, for example security. These doubts motivate some users to apply modifications to enhance dependability properties of the software. However, without the necessary development artifacts, modifications are made in a state of uncertainty and risk. We investigate enhancing dependability through software modification in the presence of these risks as an engineering problem and introduce an engineering process for generating software modifications called Speculative Software Modification (SSM). We present the motivation and guiding principles of SSM, and a case study of SSM applied to protect software against buffer overflow attacks when only the binary is available.
{"title":"Speculative Software Modification and its Use in Securing SOUP","authors":"Benjamin D. Rodes, J. Knight","doi":"10.1109/EDCC.2014.29","DOIUrl":"https://doi.org/10.1109/EDCC.2014.29","url":null,"abstract":"We present an engineering process model for generating software modifications that is designed to be used when either most or all development artifacts about the software, including the source code, are unavailable. This kind of software, commonly called Software Of Unknown Provenance (SOUP), raises many doubts about the existence and adequacy of desired dependability properties, for example security. These doubts motivate some users to apply modifications to enhance dependability properties of the software, however, without necessary development artifacts, modifications are made in a state of uncertainty and risk. We investigate enhancing dependability through software modification in the presence of these risks as an engineering problem and introduce an engineering process for generating software modifications called Speculative Software Modification (SSM). We present the motivation and guiding principles of SSM, and a case study of SSM applied to protect software against buffer overflow attacks when only the binary is available.","PeriodicalId":364377,"journal":{"name":"2014 Tenth European Dependable Computing Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121156822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Camille Fayollas, C. Martinie, Philippe A. Palanque, Y. Déléris, J. Fabre, D. Navarre
A cockpit (also called a flight deck) is the interactive environment of an aircraft that enables both the pilot and the first officer to monitor the aircraft systems and to control them. Allowing the crew to control aircraft systems through display units using a keyboard and cursor control unit is one of the main novelties of new-generation cockpits based on the ARINC 661 standard. Currently, only secondary aircraft systems are managed through such interactive cockpits. Generalisation to other aircraft systems would require introducing mechanisms to ensure the fault tolerance of such interaction in cockpits. Such mechanisms would allow designers to take into account the new functions' safety requirements. However, these mechanisms may have consequences (positive and/or negative) on the crew's activities. This paper reports on studies of fault-tolerance mechanisms in the domain of ARINC 661 interactive cockpits. More precisely, it focuses on interactive systems, showing how these fault-tolerance mechanisms (mainly redundancy, as segregation and diversity are not exemplified here) could affect the usability of the interactive system, making both the tasks of the crew members and their training more complex. We propose a generic approach to analyse the trade-offs between dependability and usability in an interactive software cockpit environment.
{"title":"An Approach for Assessing the Impact of Dependability on Usability: Application to Interactive Cockpits","authors":"Camille Fayollas, C. Martinie, Philippe A. Palanque, Y. Déléris, J. Fabre, D. Navarre","doi":"10.1109/EDCC.2014.17","DOIUrl":"https://doi.org/10.1109/EDCC.2014.17","url":null,"abstract":"A cockpit (also called flight deck) is an interactive environment of an aircraft which enables both pilot and first officer to monitor the aircraft systems and to control them. Allowing the crew to control aircraft systems through display unit by using keyboard and cursor control unit is one of the main novelties in the new generation cockpits based on ARINC 661 standard. Currently only secondary aircraft systems are managed using such interactive cockpits. Generalisation to other aircraft systems would require introducing mechanisms aiming at ensuring the fault-tolerance of such interaction in cockpits. Such mechanisms would allow designers to take into account the new functions' safety requirement. However, it is possible that such mechanisms may have consequences (positive and/or negative ones) on the crew activities. This paper reports studies that have been performed on fault-tolerance mechanisms in the domain of ARINC 661 interactive cockpits. More precisely this paper focuses on interactive systems, showing how these fault-tolerance mechanisms (mainly redundancy as segregation and diversity are not exemplified here) could affect the usability of the interactive system, making both the tasks of the crew members and their training more complex. We propose a generic approach to analyse the trade-offs between dependability and usability in a software interactive cockpit environment.","PeriodicalId":364377,"journal":{"name":"2014 Tenth European Dependable Computing Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125809589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
L. J. Saiz, P. Gil, J. Baraza-Calvo, Juan-Carlos Ruiz-Garcia, D. Gil, J. Gracia
Error correction codes are used in semiconductor memories to protect information against errors. Simple error correction codes are preferred due to their low redundancy and encoding/decoding latency. Hamming codes are simple and can easily be built for any word length. However, they allow only single-error correction, so multiple errors can lead to wrong decoding. Multiple errors often manifest as burst errors, and these are becoming more frequent as the integration scale increases. This paper proposes modified Hamming codes with the same redundancy and coverage as the original versions, but adding short-burst-error detection. Three example codes, with different error correction and detection capabilities, are presented. They are especially well suited for memories, where the length of the data word is commonly a power of 2 and low redundancy and fast, simple encoder and decoder circuits are required.
{"title":"Modified Hamming Codes to Enhance Short Burst Error Detection in Semiconductor Memories (Short Paper)","authors":"L. J. Saiz, P. Gil, J. Baraza-Calvo, Juan-Carlos Ruiz-Garcia, D. Gil, J. Gracia","doi":"10.1109/EDCC.2014.25","DOIUrl":"https://doi.org/10.1109/EDCC.2014.25","url":null,"abstract":"Error correction codes are used in semiconductor memories to protect information against errors. Simple error correction codes are preferred due to their low redundancy and encoding/decoding latency. Hamming codes are simple and can be easily built for any word length. They only allow single error correction, so a multiple error can lead to a wrong decoding. Multiple errors often manifest as burst errors, and they are becoming more frequent as integration scale increases. This paper proposes modified Hamming codes, with the same redundancy and coverage as the original versions, but adding short burst error detection. Three code examples, with different error correction and detection capabilities, are presented. They are especially well-suited for memories, where the length of the data word is commonly a power of 2, and low redundancy and fast and simple encoder and decoder circuits are required.","PeriodicalId":364377,"journal":{"name":"2014 Tenth European Dependable Computing Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130168886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
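For background, the sketch below implements the classic Hamming(7,4) single-error-correcting code that codes like those above start from: three parity bits cover overlapping subsets of four data bits, and the syndrome directly gives the position of a single flipped bit. The short-burst detection added by the paper's modified codes is not attempted here.

```python
# Plain Hamming(7,4): single-error correction via syndrome decoding.

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct at most one flipped bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the error, 0 if none
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[4] ^= 1                         # inject a single-bit error
assert hamming74_decode(code) == word
```

A two-bit (burst) error would yield a nonzero syndrome pointing at the wrong position and be silently miscorrected, which is exactly the weakness the paper's modified codes are designed to expose as a detected burst instead.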
This paper discusses strategies for I/O sharing in Multiple Independent Levels of Security (MILS) systems, mostly deployed in the special environment of avionic systems. MILS system designs are promising approaches for handling the increasing complexity of functionally integrated systems, where multiple applications run concurrently on the same hardware platform. Such integrated systems, known as Integrated Modular Avionics (IMA) in the aviation industry, require communication with remote systems located outside the hosting hardware platform. One possible solution is to give each partition, the isolated runtime environment of an application, a direct interface to the communication hardware controller. However, this approach requires a special design of the hardware itself. This paper discusses efficient system architectures for I/O sharing in high-criticality embedded systems and, as an example, analyzes Freescale's proprietary Data Path Acceleration Architecture (DPAA) with respect to generic hardware requirements. Based on this analysis we also discuss the development of possible architectures matching the MILS approach. Even though the analysis focuses on avionics, it is equally applicable to automotive architectures such as AUTOSAR.
{"title":"On MILS I/O Sharing Targeting Avionic Systems","authors":"Kevin Mueller, G. Sigl, B. Triquet, M. Paulitsch","doi":"10.1109/EDCC.2014.35","DOIUrl":"https://doi.org/10.1109/EDCC.2014.35","url":null,"abstract":"This paper discusses strategies for I/O sharing in Multiple Independent Levels of Security (MILS) systems mostly deployed in the special environment of avionic systems. MILS system designs are promising approaches for handling the increasing complexity of functionally integrated systems, where multiple applications run concurrently on the same hardware platform. Such integrated systems, also known as Integrated Modular Avionics (IMA) in the aviation industry, require communication to remote systems located outside of the hosting hardware platform. One possible solution is to provide each partition, the isolated runtime environment of an application, a direct interface to the communication's hardware controller. Nevertheless, this approach requires a special design of the hardware itself. This paper discusses efficient system architectures for I/O sharing in the environment of high-criticality embedded systems and the exemplary analysis of Freescale's proprietary Data Path Acceleration Architecture (DPAA) with respect to generic hardware requirements. Based on this analysis we also discuss the development of possible architectures matching with the MILS approach. Even though the analysis focuses on avionics it is equally applicable to automotive architectures such as AUTOSAR.","PeriodicalId":364377,"journal":{"name":"2014 Tenth European Dependable Computing Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123188332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. Birch, R. Rivett, I. Habli, Ben Bradshaw, J. Botham, Dave Higham, H. Monkhouse, R. Palin
We present a model for structuring automotive safety arguments comprising four different, yet interrelated, layers of safety claims. The layered model is structured by the rationale behind safety requirements, their relationship to corresponding physical artefact(s) and hazardous events, the means used in their development and the environment in which safety activities are undertaken. The layered approach allows for focus and clarity in communicating and assessing the functional safety of automotive Electrical/Electronic systems, particularly in the context of the automotive standard ISO 26262.
{"title":"A Layered Model for Structuring Automotive Safety Arguments (Short Paper)","authors":"J. Birch, R. Rivett, I. Habli, Ben Bradshaw, J. Botham, Dave Higham, H. Monkhouse, R. Palin","doi":"10.1109/EDCC.2014.24","DOIUrl":"https://doi.org/10.1109/EDCC.2014.24","url":null,"abstract":"We present a model for structuring automotive safety arguments comprising four different, yet interrelated, layers of safety claims. The layered model is structured by the rationale behind safety requirements, their relationship to corresponding physical artefact(s) and hazardous events, the means used in their development and the environment in which safety activities are undertaken. The layered approach allows for focus and clarity in communicating and assessing the functional safety of automotive Electrical/Electronic systems, particularly in the context of the automotive standard ISO 26262.","PeriodicalId":364377,"journal":{"name":"2014 Tenth European Dependable Computing Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122815204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}