Pub Date: 2020-09-01. DOI: 10.1109/EDCC51268.2020.00027
Ibéria Medeiros, N. Neves
Web application security has become paramount for organisations' operations, and therefore static analysis tools (SATs) for vulnerability detection have been widely researched in recent years. Nevertheless, SATs often generate errors (false positives and negatives), whose cause is recurrently associated with very diverse coding styles, i.e., similar functionality implemented in distinct manners, and with programming practices that create ambiguity, such as the reuse and sharing of variables. This paper presents an analysis of SATs' behaviour and results when they process various relevant web applications written in different coding styles. Furthermore, it discusses whether the SQL injection vulnerabilities detected by SATs as true positives are really exploitable. Our results demonstrate that SATs are built with the detection of specific vulnerabilities in mind, without considering such forms of programming. They are a call to action for a new generation of SATs, malleable enough to process the code observed in the wild.
Title: Effect of Coding Styles in Detection of Web Application Vulnerabilities (2020 16th European Dependable Computing Conference, EDCC)
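To make the coding-style ambiguity concrete, here is a minimal Python/SQLite sketch; the function names and the naive quote-doubling sanitization are illustrative assumptions, not code from the paper (which studies real web applications). The second function reuses the same variable after sanitizing it, a reassignment pattern that a taint-tracking SAT may still report as injectable even though the query is no longer exploitable.

```python
import sqlite3

def lookup_unsafe(conn, user):
    # Tainted input flows directly into the query string: a true
    # positive that any static analyzer should flag.
    q = "SELECT role FROM users WHERE name = '%s'" % user
    return conn.execute(q).fetchall()

def lookup_reused(conn, user):
    # The same variable is sanitized and then reused. A tool that does
    # not model the reassignment precisely may still flag this query,
    # producing a false positive.
    user = user.replace("'", "''")  # naive quote-doubling sanitization
    q = "SELECT role FROM users WHERE name = '%s'" % user
    return conn.execute(q).fetchall()
```

Running a classic `' OR '1'='1` payload through both functions shows that only the first is actually exploitable, which is exactly the true-positive exploitability question the paper raises.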
Pub Date: 2020-09-01. DOI: 10.1109/EDCC51268.2020.00019
John C. Mace, R. Czekster, C. Morisset, C. Maple
Inter-networked control systems make smart buildings increasingly efficient, but they can also lead to severe operational disruptions and infrastructure damage. It is vital that the security state of smart buildings be properly assessed so that thorough and cost-effective risk management can be established. This paper uniquely reports on an actual risk assessment performed in 2018 on one of the world's most densely monitored, state-of-the-art smart buildings. From our observations, we suggest that current practice may be inadequate due to a number of challenges and deficiencies, including the lack of a recognised smart-building risk assessment methodology. As a result, the security posture of many smart buildings may not be as robust as their risk assessments suggest. Crucially, we highlight a number of key recommendations for a more comprehensive risk assessment process for smart buildings. As a whole, we believe this practical experience report will be of interest to a range of smart building stakeholders.
Title: Smart Building Risk Assessment Case Study: Challenges, Deficiencies and Recommendations (2020 16th European Dependable Computing Conference, EDCC)
Pub Date: 2020-09-01. DOI: 10.1109/EDCC51268.2020.00011
I. Tuzov, D. Andrés, J. Ruiz
Thanks to their dynamic reconfiguration capabilities, FPGAs are used in application domains ranging from embedded systems to high-performance computing. Nevertheless, as FPGAs usually rely on SRAM memories to keep their current configuration, they are highly sensitive to radiation. The robustness of FPGA-based implementations can be improved by tuning the configuration parameters of selected IP cores or EDA tools. As many different parameters can usually be set at several configuration levels, this constitutes a huge design space to be explored. Accordingly, not only are suitable techniques required to sample as many different configurations as possible, but novel fault injection approaches are also necessary to reduce the number of faults to be injected and to speed up the experimentation as a whole. To accomplish this goal, this paper integrates state-of-the-art FPGA-based approaches that speed up the execution of individual fault injection experiments with a novel proposal, based on a genetic algorithm, that minimises the number of fault injection experiments required to explore the design space with robustness in mind. This approach is exemplified by tuning the Vivado Design Suite to optimize the robustness and clock frequency of MC8051, AVR, and MicroBlaze soft-core processors.
Title: Improving Robustness-Aware Design Space Exploration for FPGA-Based Systems (2020 16th European Dependable Computing Conference, EDCC)
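As a rough illustration of the genetic-algorithm idea, the sketch below evolves tool-configuration bitstrings under a caller-supplied fitness function. The encoding, operator choices and all names are assumptions for illustration; in the paper the score of a configuration comes from FPGA-based fault injection campaigns, not a Python callable, and the point is that only a small fraction of the design space ever gets evaluated.

```python
import random

def evolve(fitness, n_bits=8, pop_size=12, generations=30, seed=1):
    """Toy genetic algorithm over configuration bitstrings.

    Each bit stands for one tool option being on or off; `fitness`
    scores a configuration (in the paper's setting, an estimate of
    robustness obtained by fault injection).
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]       # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)   # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_bits)        # single point mutation
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

With `sum` as a stand-in fitness (the "one-max" toy problem), the search converges toward the all-ones configuration while evaluating far fewer candidates than exhaustive enumeration would.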
Pub Date: 2020-09-01. DOI: 10.1109/EDCC51268.2020.00031
Christian Herrera, Nancy Cruz, Ricardo Quintero
Stateful priorities are used for imposing precise restrictions on system actions in order to meet safety constraints. Those priorities restrict erroneous system behavior, whereas safe system behavior remains unrestricted. In this work, we present the design of the tool CrEStO, which synthesizes those priorities, and we extend its query support. We also present several examples and experiments, and point out future research directions.
Title: CrEStO: A Tool for Synthesizing Stateful Priorities (2020 16th European Dependable Computing Conference, EDCC)
Pub Date: 2020-09-01. DOI: 10.1109/EDCC51268.2020.00021
Christian Herrera
We present the notion of stateful priorities for imposing precise restrictions on system actions in order to meet safety constraints. By using stateful priorities, we are able to restrict exclusively the erroneous system behavior specified by the constraint, whereas safe system behavior remains unrestricted. Given a system modeled as a network of discrete automata and an error constraint, we present algorithms that use those inputs to synthesize stateful priorities. We also present a network transformation which uses the synthesized priorities to block all system actions leading to the input error. The applicability of our approach is demonstrated on three real-world examples.
Title: Stateful Priorities for Precise Restriction of System Behavior (2020 16th European Dependable Computing Conference, EDCC)
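A stateful priority can be read as "in this particular state, block these particular actions". The sketch below applies such priorities to a per-state map of enabled actions; the data representation is my assumption for illustration and not the paper's automata formalism, but it shows the intended precision: only the targeted state-action pairs are blocked, all other behavior stays unrestricted.

```python
def apply_stateful_priorities(enabled, priorities):
    """Restrict a system's enabled actions state by state.

    `enabled` maps each state to the actions the unrestricted system
    offers there; `priorities` maps a state to the actions that a
    synthesized stateful priority blocks in that state (because they
    lead toward the error). States without a priority are untouched.
    """
    return {
        state: {a for a in actions if a not in priorities.get(state, set())}
        for state, actions in enabled.items()
    }
```

For example, blocking action `b` only in state `s0` leaves `b` available anywhere else it might be enabled, which is the "safe behavior remains unrestricted" property the abstract emphasizes.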
Pub Date: 2020-09-01. DOI: 10.1109/EDCC51268.2020.00015
Lauri Vihman, M. Kruusmaa, J. Raik
The paper proposes a data-driven cross-layer resilient architecture for sensor networks. The novelty of the approach lies in combining fault detection across data and network layers into a coordinated system health management architecture. The implemented fault detection is entirely data-driven: data are collected exclusively by the functional sensors that are part of the system, so there is no need for additional hardware resources. The data layers considered include the raw sensor data layer, the processed data layer and the data aggregation layer. The proposed cross-layer fault management architecture utilizes a hierarchical health-map structure for fault detection and data aggregation. A practical case study of an underwater sensor network for a harbor water flow monitoring application based on the proposed architecture is presented. Synthetic experiments with real data demonstrate the effectiveness of the approach in fault detection and diagnosis. The experiments show that data-driven cross-layer fault management improves the sensor group measurement accuracy by 35% in the case of single sensor errors and nearly twofold in the case of double sensor errors. The paper also presents examples of system health-map aggregation and fault diagnosis, based on faults manifesting at the different layers, for real incidents occurring in the field.
Title: Data-Driven Cross-Layer Fault Management Architecture for Sensor Networks (2020 16th European Dependable Computing Conference, EDCC)
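The accuracy gain reported above comes from excluding sensors flagged as faulty before fusing a redundant group's readings. The following sketch shows that mechanism in miniature; the health flags and median fusion are illustrative assumptions, standing in for the paper's hierarchical health-map and aggregation layers.

```python
import statistics

def group_estimate(readings, health):
    """Fuse a redundant sensor group's readings into one estimate.

    `health` marks sensors flagged faulty by lower-layer checks
    (e.g. range or rate-of-change tests); their readings are dropped
    before aggregation, so a stuck or drifting sensor no longer
    biases the group estimate.
    """
    healthy = [r for r, ok in zip(readings, health) if ok]
    if not healthy:              # every sensor flagged: no estimate
        return None
    return statistics.median(healthy)
```

With readings `[1.0, 1.1, 9.9]` and the third sensor flagged, the group estimate stays near 1.05 instead of being pulled toward the outlier.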
Pub Date: 2020-09-01. DOI: 10.1109/EDCC51268.2020.00022
V. Cholvi, Antonio Fernández, Chryssis Georgiou, N. Nicolaou, M. Raynal
A Distributed Ledger Object (DLO) is a concurrent object that maintains a totally ordered sequence of records and supports two operations: APPEND, which appends a record at the end of the sequence, and GET, which returns the whole sequence of records. The work presented in this article makes two main contributions. The first is a formalization of a Byzantine-tolerant Distributed Ledger Object (BDLO), a DLO in which client and server processes may deviate arbitrarily from their intended behavior (i.e., they may be Byzantine). The proposed formal definition is accompanied by algorithms that implement BDLOs on top of an underlying Byzantine Atomic Broadcast service. The second contribution is a suite of algorithms, based on the previous BDLO implementations, that solve the Atomic Appends problem in the presence of asynchrony, Byzantine clients and Byzantine servers. This problem occurs when clients have a composite record (a set of basic records) to append to different BDLOs, in such a way that either each basic record is appended to its BDLO (and this must occur in good circumstances) or no basic record is appended at all. Distributed algorithms are presented which solve the Atomic Appends problem when both the clients involved in the Atomic Appends and the servers which maintain the BDLOs may be Byzantine.
Title: Atomic Appends in Asynchronous Byzantine Distributed Ledgers (2020 16th European Dependable Computing Conference, EDCC)
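The DLO interface and the Atomic Appends requirement can be sketched as follows. This is a deliberately centralized, single-process stand-in: the class names are assumptions, a plain list replaces the replicated totally ordered sequence, and atomicity is trivial here, whereas the paper's algorithms must guarantee it over Byzantine atomic broadcast with Byzantine clients and servers.

```python
class LedgerObject:
    """Sketch of the DLO interface: an append-only, totally ordered
    sequence of records supporting APPEND and GET."""

    def __init__(self):
        self._records = []

    def append(self, record):
        # APPEND: add a record at the end of the sequence.
        self._records.append(record)

    def get(self):
        # GET: return the whole sequence, in append order.
        return list(self._records)


def atomic_append(composite):
    """All-or-nothing append of a composite record's parts.

    `composite` is a list of (ledger, basic_record) pairs. In this
    single-process sketch the loop cannot be interrupted, so either
    every basic record lands on its ledger or none does; achieving
    that guarantee in the distributed Byzantine setting is exactly
    the Atomic Appends problem.
    """
    for ledger, record in composite:
        ledger.append(record)
```

A composite record spanning two ledgers then appends to both in one step, and each GET returns its ledger's full ordered history.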
Pub Date: 2020-09-01. DOI: 10.1109/EDCC51268.2020.00020
Camille Fayollas, H. Bonnin, Olivier Flébus
Improved safety is one of the key benefits expected from autonomous vehicles, but it can only be achieved if the vehicles are guaranteed to be safe enough. This paper proposes a potential approach contributing to this safety improvement: it describes and investigates "SafeOps", a concept of "continuous safety" based on the DevOps approach, which unifies development and operations. DevOps consists of a set of practices intended to reduce the time between committing a change to a system and the change being deployed into production, while ensuring high quality. DevOps benefits system development and delivery by enabling continuous software delivery, faster change management with faster issue resolution, and improved reliability. The key principle of SafeOps is to monitor the system in operation and to use this information for validating and certifying a certain safety assurance level. Following this approach, a system could be compliant with a first safety assurance level when it is first delivered and with higher ones once validated in operation.
Title: SafeOps: A Concept of Continuous Safety (2020 16th European Dependable Computing Conference, EDCC)
Pub Date: 2020-09-01. DOI: 10.1109/EDCC51268.2020.00024
C. Temple
The emergence of high-performance, high-complexity automotive systems for autonomous driving requires introducing complex supply chains into the system design and managing them in a structured way. Based on current estimates, a fully autonomous car could require up to 1 billion lines of code, with a code base involving dozens of suppliers. This paper identifies and discusses the complexities involved when such a complex safety-critical system is designed using a high number of interacting safety elements that have been developed out of context of the target system by a multitude of suppliers. The paper details the complexities of the integration task and argues in favour of introducing additional error containment boundaries and safety mechanisms to help manage the integration complexity.
Title: Developing Complex Safety Critical Systems in Complex Supply Chains (2020 16th European Dependable Computing Conference, EDCC)
Pub Date: 2020-09-01. DOI: 10.1109/EDCC51268.2020.00023
Matheus Torquato, Charles F. Gonçalves, M. Vieira
Decision support systems (DSS) and online transaction processing (OLTP) applications are crucial for many organizations and frequently require high levels of availability. Many organizations have moved their systems to virtualized environments aiming at improving system availability. Despite the flexibility and manageability provided by virtualization, the question arises of which policies to apply in order to achieve high availability. Usual approaches highlight redundancy as a strategy for high availability, but a concern persists about which components should be made redundant. This paper proposes a hierarchical availability model for evaluating different redundancy allocations for DSS and OLTP systems in virtualized environments. We present three case studies investigating virtual machine (VM)-only redundancy and physical machine redundancy strategies. The results provide an overview of the availability impact of each strategy. We observed that the physical machine failure rate limits the maximum availability obtainable from VM-only redundancy. We also exercise our model with a genetic algorithm to find alternative configurations for high availability. The presented models and results may provide insights for designing availability policies.
Title: An Availability Model for DSS and OLTP Applications in Virtualized Environments (2020 16th European Dependable Computing Conference, EDCC)
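The finding that physical machine failures cap VM-only redundancy follows from elementary availability algebra, sketched below. The numbers are illustrative assumptions, not the paper's measurements, and simple products replace its hierarchical models: a VM is up only while its host is up, so two redundant VMs on one host can never exceed the host's own availability, while spreading them over two hosts can.

```python
def availability(mttf, mttr):
    # Steady-state availability of a single component.
    return mttf / (mttf + mttr)

def parallel(*avs):
    # Redundant components in parallel: the system is down only
    # when every replica is down simultaneously.
    down = 1.0
    for a in avs:
        down *= (1.0 - a)
    return 1.0 - down

def vm_on_host(a_host, a_vm):
    # A VM is available only while its physical host is, so host
    # availability caps whatever VM-level redundancy achieves.
    return a_host * a_vm
```

With an assumed host availability of 0.99 and VM availability of 0.95, two VMs on one host give 0.99 x (1 - 0.05^2) = 0.9875, below the host's 0.99; the same pair spread over two hosts gives 1 - (1 - 0.99 x 0.95)^2 = 0.9965, above it.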