This paper describes an approach for the automated security evaluation of operational Network Intrusion Detection Systems (NIDS) in Infrastructure as a Service (IaaS) cloud computing environments. Our objective is to provide automated, experimental methods to execute attack campaigns and analyze NIDS reactions, in order to highlight the ability of the NIDS to protect clients' virtual infrastructures and to find potential weaknesses in their placement and configuration. To do so, we designed a three-phase approach. The target client's infrastructure is first cloned, so that subsequent audit operations are performed on a copy; network access controls are then analyzed to determine the network accessibilities within the cloned infrastructure. The last phase, presented in this paper, uses evaluation traffic we modeled and generated to execute attack campaigns following an optimized algorithm. The NIDS alerts are analyzed and evaluation metrics are computed. Our approach is supported by a prototype and by experiments carried out on a VMware-based cloud platform.
T. Probst, E. Alata, M. Kaâniche and V. Nicomette, "Automated Evaluation of Network Intrusion Detection Systems in IaaS Clouds," in 2015 11th European Dependable Computing Conference (EDCC), Sept. 2015. doi:10.1109/EDCC.2015.10
Embedded systems are widely used in critical settings and are hence targets for malicious users. Researchers have demonstrated successful attacks against embedded systems used in power grids, modern cars, and medical devices. This makes building Intrusion Detection Systems (IDS) for embedded devices a necessity. However, embedded devices have constraints (such as limited memory capacity) that make it challenging to build IDSes that monitor all of their security properties. In this paper, we formulate building an IDS for embedded systems as an optimization problem. Given the set of security properties of the system and the invariants that verify those properties, we build an IDS that maximizes coverage of the security properties with respect to the available memory. This allows our IDS to be applicable to a wide range of embedded devices with different memory capacities. In our formulation, users may define their own coverage criteria for the security properties. We also propose two coverage criteria and build IDSes based on them. We implement our IDSes for SegMeter, an open-source smart meter. Our results show that our IDSes provide a high detection rate in spite of the memory constraints of the system. Further, the detection rate of our IDSes at runtime is close to their estimated coverage at design time. This validates our approach to quantifying the coverage of our IDSes and optimizing them.
F. Tabrizi and K. Pattabiraman, "Flexible Intrusion Detection Systems for Memory-Constrained Embedded Systems," in 2015 11th European Dependable Computing Conference (EDCC), Sept. 2015. doi:10.1109/EDCC.2015.17
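The coverage-versus-memory trade-off described in the abstract above can be read as a 0/1 knapsack over invariants. A minimal sketch under that reading — the function name, the per-invariant coverage weights, and the additive objective are assumptions for illustration, not the authors' formulation:

```python
def select_invariants(invariants, memory_budget):
    """0/1 knapsack: pick invariants maximizing total security-property
    coverage without exceeding the device's memory budget.

    invariants: list of (coverage, memory_cost) pairs (integers).
    Returns the best achievable total coverage.
    """
    n = len(invariants)
    # best[i][m] = max coverage using the first i invariants within memory m
    best = [[0] * (memory_budget + 1) for _ in range(n + 1)]
    for i, (coverage, cost) in enumerate(invariants, 1):
        for m in range(memory_budget + 1):
            best[i][m] = best[i - 1][m]  # skip invariant i
            if cost <= m:                # or include it if it fits
                best[i][m] = max(best[i][m], best[i - 1][m - cost] + coverage)
    return best[n][memory_budget]

# Three hypothetical invariants (coverage, memory cost), 10 units of memory:
# taking the first two (cost 9) beats any other feasible combination.
assert select_invariants([(6, 5), (5, 4), (4, 6)], 10) == 11
```

The same dynamic program extends to user-defined coverage criteria by changing how the per-invariant coverage weights are computed.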
Z. Estrada, C. Pham, Fei Deng, Lok K. Yan, Z. Kalbarczyk, R. Iyer
Many current VM monitoring approaches require guest OS modifications and are also unable to perform application-level monitoring, reducing their value in a cloud setting. This paper introduces hprobes, a framework that allows one to dynamically monitor applications and operating systems inside a VM. The hprobe framework does not require any changes to the guest OS, which avoids tightly coupling the monitoring with its target. Furthermore, the monitors can be customized and enabled or disabled while the VM is running. To demonstrate the usefulness of this framework, we present three sample detectors: an emergency detector for a security vulnerability, an application watchdog, and an infinite-loop detector. We test our detectors on real applications and demonstrate that these detectors achieve an acceptable level of performance overhead with a high degree of flexibility.
Z. Estrada, C. Pham, Fei Deng, Lok K. Yan, Z. Kalbarczyk and R. Iyer, "Dynamic VM Dependability Monitoring Using Hypervisor Probes," in 2015 11th European Dependable Computing Conference (EDCC), Sept. 2015. doi:10.1109/EDCC.2015.9
Seyedeh Golsana Ghaemi, Amir Mahdi Hosseini Monazzah, Hamed Farbeh, S. Miremadi
Nowadays, leakage energy constitutes up to 80% of total cache energy consumption, and the tag array is responsible for a considerable fraction of static energy consumption. One approach to reducing static energy consumption is to replace SRAMs with STT-RAMs, which have near-zero leakage power. However, a problem of an STT-RAM cell is its limited write endurance. Unlike previous studies, which have targeted the data array, in this study STT-RAMs are used in the L1 tag array. To solve the write endurance problem, this paper proposes a hybrid STT-RAM/SRAM tag architecture. Owing to the spatial locality of memory references, the less significant bit-lines of the tag are updated more often. The SRAM part handles the updates in the bit-lines whose lifetime is less than the desired lifetime. The proposed architecture is evaluated with the gem5 simulator running the MiBench benchmark suite. The evaluation results recommend implementing fewer than 30% of the bit-lines of the STT-RAM-based tag array in SRAM for a 5-year lifetime. Moreover, static energy consumption is reduced by up to 82% in comparison with an SRAM tag array.
Seyedeh Golsana Ghaemi, Amir Mahdi Hosseini Monazzah, Hamed Farbeh and S. Miremadi, "LATED: Lifetime-Aware Tag for Enduring Design," in 2015 11th European Dependable Computing Conference (EDCC), Sept. 2015. doi:10.1109/EDCC.2015.31
Carla Sauvanaud, Guthemberg Silvestre, M. Kaâniche, K. Kanoun
This paper introduces a new approach for the online detection of performance anomalies in cloud virtual machines (VMs). It is designed for cloud infrastructure providers to detect, at runtime, unknown anomalies that may still be observed in complex modern systems hosted on VMs. The approach is based on data stream clustering of per-VM monitoring data and detects at a fine granularity where anomalies occur. Its operation is independent of the types of applications deployed on the VMs. Moreover, it copes with frequent changes in a system's normal behavior during runtime. The parallel analysis of each VM makes this approach scalable to the large number of VMs composing an application. The approach consists of two online steps: 1) the incremental update of sets of clusters by means of data stream clustering, and 2) the computation of two attributes characterizing the evolution of the global clusters. We validate our approach on a VMware vSphere testbed hosting a typical cloud application, MongoDB, which we study both under normal behavior and in the presence of anomalies.
Carla Sauvanaud, Guthemberg Silvestre, M. Kaâniche and K. Kanoun, "Data Stream Clustering for Online Anomaly Detection in Cloud Applications," in 2015 11th European Dependable Computing Conference (EDCC), Sept. 2015. doi:10.1109/EDCC.2015.22
Hélène Martorell, J. Fabre, M. Lauer, Matthieu Roy, R. Valentin
The AUTOSAR standard describes an architecture for embedded automotive systems. A major drawback of this architecture is its lack of flexibility: updates are not easily possible. In our work, we explore the various facets of software updates in the context of AUTOSAR embedded applications. With a few modifications that remain compatible with the development process, we propose specific concepts for updates. Such updates can be performed remotely, for maintenance and/or evolution purposes. As functional updates may entail updates of safety mechanisms, we also show how safety mechanisms can be added or updated at different levels of granularity. We illustrate these concepts and capabilities with a simple case study as a proof of concept, and finally draw the lessons learnt from this work.
Hélène Martorell, J. Fabre, M. Lauer, Matthieu Roy and R. Valentin, "Partial Updates of AUTOSAR Embedded Applications -- To What Extent?," in 2015 11th European Dependable Computing Conference (EDCC), Sept. 2015. doi:10.1109/EDCC.2015.18
Raft is a new distributed consensus algorithm that is both easier to understand and more straightforward to implement than the older Paxos algorithm. Its major limitation is its high energy footprint. As it relies on majority voting to decide when to commit an update, Raft requires five participants to protect against two simultaneous failures. We propose two methods for reducing this huge energy footprint. Our first proposal consists of adjusting Raft quorums in a way that would allow updates to proceed with as few as two servers while requiring a larger quorum for electing a new leader. Our second proposal consists of replacing one or two of the five Raft servers with witnesses, that is, lightweight servers that maintain the same metadata as the other servers but hold no data and can therefore run on very low-power hosts. We show that these substitutions have little impact on cluster availability but very different impacts on the risk of incurring a data loss.
Jehan-Francois Pâris and D. Long, "Reducing the Energy Footprint of a Distributed Consensus Algorithm," in 2015 11th European Dependable Computing Conference (EDCC), Sept. 2015. doi:10.1109/EDCC.2015.25
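The first proposal in the abstract above — committing with fewer servers while enlarging the election quorum — can be illustrated with the standard quorum-intersection rule (reading the abstract through that rule is an assumption here; the function name is hypothetical). Any leader-election quorum must overlap every write quorum so a new leader is guaranteed to see all committed updates:

```python
def election_quorum(n_servers, write_quorum):
    # Quorum-intersection rule: an election quorum E and a write quorum W
    # must overlap in at least one server, i.e. E + W > N, so the smallest
    # safe election quorum is E = N - W + 1.
    return n_servers - write_quorum + 1

# Classic Raft: five servers, majority (3) for both writes and elections.
assert election_quorum(5, 3) == 3
# Shrinking the write quorum to 2, as the proposal allows, forces a
# larger election quorum of 4 in exchange.
assert election_quorum(5, 2) == 4
```

The sketch shows the trade-off the abstract describes: cheaper day-to-day commits paid for by a more demanding (but rare) leader election.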
Xiwei Xu, Liming Zhu, Daniel W. Sun, An Binh Tran, I. Weber, Min Fu, L. Bass
Operations such as upgrade or redeployment are an important cause of system outages. Diagnosing such errors at runtime poses significant challenges. In this paper, we propose an error diagnosis approach using Bayesian Networks. Each node in the network captures the potential (root) causes of operational errors and its probability under different operational contexts. Once an operational error is detected, our diagnosis algorithm chooses a starting node, traverses the Bayesian Network and performs assertion checking associated with each node to confirm the error, retrieve further information and update the belief network. The next node in the network to check is selected through an online optimisation that minimises the overall availability risk considering diagnosis time and fault consequence. Our experiments show that the technique minimises the risk of faults significantly compared to other approaches in most cases. The diagnosis accuracy is high but also depends on the transient nature of a fault.
Xiwei Xu, Liming Zhu, Daniel W. Sun, An Binh Tran, I. Weber, Min Fu and L. Bass, "Error Diagnosis of Cloud Application Operation Using Bayesian Networks and Online Optimisation," in 2015 11th European Dependable Computing Conference (EDCC), Sept. 2015. doi:10.1109/EDCC.2015.15
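The node-selection step in the abstract above — choosing the next node to check by minimising availability risk over diagnosis time and fault consequence — can be illustrated with a toy scoring rule. Everything here (the field names and the risk formula) is a hypothetical stand-in for the paper's online optimisation, not its actual model:

```python
def next_check(nodes):
    """Pick the unconfirmed cause node whose assertion check minimises
    expected availability risk, modelled here (an assumption) as the
    check's diagnosis time plus the probability-weighted consequence of
    the true fault remaining unconfirmed in the meantime."""
    return min(nodes,
               key=lambda n: n["check_time"] + (1.0 - n["prob"]) * n["consequence"])

candidates = [
    {"name": "bad_config", "prob": 0.7, "check_time": 2.0, "consequence": 10.0},  # risk 5.0
    {"name": "net_outage", "prob": 0.2, "check_time": 1.0, "consequence": 30.0},  # risk 25.0
]
assert next_check(candidates)["name"] == "bad_config"
```

In the paper's setting the probabilities would come from the Bayesian network and be updated after each assertion check; the sketch only shows the shape of the greedy selection.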
Paul Rimba, Liming Zhu, L. Bass, I. Kuz, S. Reeves
Building secure applications requires significant expertise. Secure platforms and security patterns have been proposed to alleviate this problem. However, correctly applying patterns to use platform features is still highly expertise-dependent. Patterns are informal and there is a gap between them and platform features. We propose the concept of reusable verified design fragments, which package security patterns and platform features and are verified to provide assurance about their security properties. Design fragments can be composed through four primitive tactics. The verification of the composed design against desired security properties is presented in an assurance case. We demonstrate our approach by securing a Continuous Deployment pipeline and show that the tactics are sufficient to compose design fragments into a secure system. Finally, we formally define composition tactics, which are intended to support the development of systems that are secure by construction.
Paul Rimba, Liming Zhu, L. Bass, I. Kuz and S. Reeves, "Composing Patterns to Construct Secure Systems," in 2015 11th European Dependable Computing Conference (EDCC), Sept. 2015. doi:10.1109/EDCC.2015.12
Thorsten Piper, Stefan Winter, N. Suri, T. Fuhrman
The automotive safety standard ISO 26262 strongly recommends the use of fault injection (FI) for the assessment of safety mechanisms, which typically span composite dependability and real-time operations. However, with the standard providing very limited guidance on the actual design, implementation and execution of FI experiments, most AUTOSAR FI approaches use standard fault models (e.g., bit flips and data-type-based corruptions) and focus on simulation environments. Unfortunately, representing timing faults with standard fault models, and representing real-time properties in simulation environments, are both hard, rendering them inadequate for the comprehensive assessment of AUTOSAR's safety mechanisms. The development of the FI advocated by ISO 26262 is further hampered by the lack of representative software fault models and of an openly accessible AUTOSAR FI framework. We address these gaps by (a) adapting the open-source FI framework GRINDER to AUTOSAR and (b) showing how to apply it effectively for the assessment of AUTOSAR's safety mechanisms.
Thorsten Piper, Stefan Winter, N. Suri and T. Fuhrman, "On the Effective Use of Fault Injection for the Assessment of AUTOSAR Safety Mechanisms," in 2015 11th European Dependable Computing Conference (EDCC), Sept. 2015. doi:10.1109/EDCC.2015.14