A Clustering Approach for Web Vulnerabilities Detection
Anthony Dessiatnikoff, R. Akrout, E. Alata, M. Kaâniche, V. Nicomette
DOI: 10.1109/PRDC.2011.31
This paper presents a new algorithm for the vulnerability assessment of web applications following a black-box approach. The objective is to improve the detection efficiency of existing vulnerability scanners and to move a step closer to automating this process. Our approach covers various types of vulnerabilities, but this paper focuses mainly on SQL injections. The proposed algorithm automatically classifies the responses returned by web servers using data clustering techniques and provides specially crafted inputs that lead to successful attacks when vulnerabilities are present. Experimental results on several vulnerable applications and a comparative analysis with existing tools confirm the effectiveness of our approach.
{"title":"A Clustering Approach for Web Vulnerabilities Detection","authors":"Anthony Dessiatnikoff, R. Akrout, E. Alata, M. Kaâniche, V. Nicomette","doi":"10.1109/PRDC.2011.31","DOIUrl":"https://doi.org/10.1109/PRDC.2011.31","url":null,"abstract":"This paper presents a new algorithm aimed at the vulnerability assessment of web applications following a black-box approach. The objective is to improve the detection efficiency of existing vulnerability scanners and to move a step forward toward the automation of this process. Our approach covers various types of vulnerabilities but this paper mainly focuses on SQL injections. The proposed algorithm is based on the automatic classification of the responses returned by the web servers using data clustering techniques and provides especially crafted inputs that lead to successful attacks when vulnerabilities are present. Experimental results on several vulnerable applications and comparative analysis with some existing tools confirm the effectiveness of our approach.","PeriodicalId":254760,"journal":{"name":"2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125520799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gossiping with Network Coding
Shun Tokuyama, Tatsuhiro Tsuchiya, T. Kikuno
DOI: 10.1109/PRDC.2011.17
Gossip is a scalable and easy-to-deploy broadcast method for distributed systems. In gossip, a broadcast message is disseminated through repeated information exchanges between randomly chosen nodes. Gossip can achieve high reliability by sending a large number of redundant messages, but this also places a high load on the network. This paper proposes a new gossip algorithm that incorporates network coding techniques to mitigate this load. With random linear coding, each message propagated by the new algorithm is randomly generated from the broadcast message. Unlike in ordinary gossip, this feature prevents nodes from receiving an identical message more than once, making it possible to achieve the same reliability at a lower message cost.
{"title":"Gossiping with Network Coding","authors":"Shun Tokuyama, Tatsuhiro Tsuchiya, T. Kikuno","doi":"10.1109/PRDC.2011.17","DOIUrl":"https://doi.org/10.1109/PRDC.2011.17","url":null,"abstract":"Gossip is a scalable and easy-to-deploy broadcast method for distributed systems. In gossip a broadcast message is disseminated through repeated information exchanges between randomly chosen nodes. Gossip can also achieve high reliability using a large amount of redundant messages, but this also incurs high load on the network. This paper proposes a new gossip algorithm which incorporates network coding techniques to mitigate the high load. With random linear coding, each message propagated in the new algorithm is randomly generated from the broadcast message. Unlike in ordinary gossip, this feature prevents nodes from receiving an identical message more than once, allowing to achieve the same reliability at a lower message cost.","PeriodicalId":254760,"journal":{"name":"2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing","volume":"354 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134459611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dependability Enhancement of Reactor Containment in Safety Critical Nuclear Power Plants
Chi-Shiang Cho, W. Chung, D. Gao, Hongke Zhang, S. Kuo
DOI: 10.1109/PRDC.2011.24
The use of nuclear energy to generate electric power is crucial to meeting the high energy demand of modern economies. The dependability of nuclear power plants has long been a critical issue, and the reactor containment is the most important safety structure, acting as a barrier against the release of radioactive material into the environment. In this paper, we propose a practical framework for design, implementation, and verification and validation (V&V) to enhance the dependability of reactor containment through an integrated leakage rate test.
{"title":"Dependability Enhancement of Reactor Containment in Safety Critical Nuclear Power Plants","authors":"Chi-Shiang Cho, W. Chung, D. Gao, Hongke Zhang, S. Kuo","doi":"10.1109/PRDC.2011.24","DOIUrl":"https://doi.org/10.1109/PRDC.2011.24","url":null,"abstract":"The use of nuclear energy to generate electric power is crucial in meeting the high energy demand of modern economy. The dependability of nuclear power plants has been a critical issue and the reactor containment is the most important safety structure acting as a barrier against the release of radioactive material to the environment. In this paper, we propose a practical framework for design, implementation, and V&V to enhance the dependability of reactor containment through an integrated leakage rate test.","PeriodicalId":254760,"journal":{"name":"2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing","volume":"307 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133048233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Layered Diagnosis and Clock-Rate Correction for the TTEthernet Clock Synchronization Protocol
W. Steiner, B. Dutertre
DOI: 10.1109/PRDC.2011.36
Fault-tolerant clock synchronization is the foundation of synchronous architectures such as the Time-Triggered Architecture (TTA) for dependable cyber-physical systems. Clocks are typically local counters that are incremented at a given rate relative to real time, and clock synchronization algorithms ensure that any two clocks in the system read approximately the same value at approximately the same point in real time. This is achieved by a clock synchronization algorithm that changes the current values of the clocks, the clocks' rates, or both. This paper presents a diagnosis algorithm and a clock-rate correction algorithm as layered services on top of the TTEthernet clock synchronization algorithm, which is itself a clock-state correction algorithm. We analyze the algorithms' properties and explore their behavior using a bounded model checker for infinite data types, employing our formal framework for both simulation and formal proof. To the best of the authors' knowledge, this is the first time that formal methods, whether theorem provers or model checkers, have been applied to the problem of rate correction for fault-tolerant clock synchronization. Furthermore, the formal development process itself demonstrates how easily existing models can be reused in the development of new algorithms and their formal verification.
{"title":"Layered Diagnosis and Clock-Rate Correction for the TTEthernet Clock Synchronization Protocol","authors":"W. Steiner, B. Dutertre","doi":"10.1109/PRDC.2011.36","DOIUrl":"https://doi.org/10.1109/PRDC.2011.36","url":null,"abstract":"Fault-tolerant clock synchronization is the foundation of synchronous architectures such as the Time-Triggered Architecture (TTA) for dependable cyber-physical systems. Clocks are typically local counters that are increased with a given rate according to real time, and clock synchronization algorithms ensure that any two clocks in the system read about the same value at about the same point in real time. This is achieved by a clock synchronization algorithm that changes the current values of the clocks, the clocks' rate, or both. This paper presents a diagnosis algorithm and a clock-rate correction algorithm as layered services on top of the TTEthernet clock synchronization algorithm, which itself is a clock-state correction algorithm. We analyze the algorithms' properties and explore and understand their behavior using a bounded model checker for infinite data types. We use our formal framework for both simulation and formal proof. To the best knowledge of the authors this has been the first time that formal methods, should they be theorem provers or model checkers, have been applied to the problem of rate-correction for fault-tolerant clock synchronization. Furthermore, the formal development process itself demonstrates how easily existing models can be utilized in the development of new algorithms and their formal verification.","PeriodicalId":254760,"journal":{"name":"2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114177529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Test Model for Hardware and Software
J. Sziray
DOI: 10.1109/PRDC.2011.41
The paper presents a unified test model: a mapping scheme that describes the one-to-one correspondence between the input and output domains of a given hardware or software system. The model also covers the test inputs and the fault classes, and it incorporates both verification and validation schemes for hardware and software.
{"title":"A Test Model for Hardware and Software","authors":"J. Sziray","doi":"10.1109/PRDC.2011.41","DOIUrl":"https://doi.org/10.1109/PRDC.2011.41","url":null,"abstract":"The paper presents a unified test model which is a mapping scheme for describing the one-to-one correspondence between the input and output domains of a given hardware or software system. Here the test inputs and the fault classes are also involved. The test model incorporates both the verification and the validation schemes for the hardware and software.","PeriodicalId":254760,"journal":{"name":"2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130925536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic Robustness Assessment of DDS-Compliant Middleware
A. Napolitano, G. Carrozza, Antonio Bovenzi, C. Esposito
DOI: 10.1109/PRDC.2011.51
The next generation of critical systems requires an efficient, scalable, and robust data dissemination infrastructure. Middleware solutions compliant with the OMG standard known as the Data Distribution Service (DDS) are widely used for architecting large-scale systems because they meet the requirements of scalability, seamless decoupling, and fault tolerance well. Due to these features, industrial practitioners are also pushing the adoption of such middleware within the context of critical systems. However, these systems pose strict dependability requirements, which in turn demand that DDS-compliant products realize reliable data dissemination in different and heterogeneous contexts. Hence, assessing the degree of reliability they support and proposing improvement strategies becomes crucial, and requires a clear understanding of the failure behavior of DDS-compliant middleware. This paper presents a tool that automatically evaluates the robustness of DDS-compliant middleware using a fault injection technique. Specifically, experiments were conducted on an actual implementation of the DDS standard by injecting a set of invalid inputs through its API and analyzing the resulting outcomes.
{"title":"Automatic Robustness Assessment of DDS-Compliant Middleware","authors":"A. Napolitano, G. Carrozza, Antonio Bovenzi, C. Esposito","doi":"10.1109/PRDC.2011.51","DOIUrl":"https://doi.org/10.1109/PRDC.2011.51","url":null,"abstract":"The next generation of critical systems requires an efficient, scalable and robust data dissemination infrastructure. Middleware solutions compliant with the novel OMG standard, called Data Distribution Service (DDS), are being traditionally used for architecting large-scale systems, because they well meet the requirements of scalability, seamless decoupling and fault tolerance. Due to such features, industrial practitioners are enforcing the adoption of such middleware solutions also within the context of critical systems. However, these systems pose serious dependability requirements, which in turn demand DDS compliant products also to realize reliable data dissemination in different and heterogeneous contexts. Hence, assessing the supported reliability degree and proposing improvement strategies becomes crucial and requires a clear understanding of DDS compliant middleware failing behavior. This paper illustrates an innovative tool to automatically evaluate the robustness of DDS-compliant middleware based on a fault injection technique. Specifically, experiments have been conducted on an actual implementation of the DDS standard, by means of injecting a set of proper invalid inputs through its API and analyzing the achieved outcomes.","PeriodicalId":254760,"journal":{"name":"2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing","volume":"195 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122433293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Making Dependability Visual -- Combining Model-Based Design and Virtual Realities
Matthias Güdemann, Michael Lipaczewski, F. Ortmeier, Marco Schumann, R. Eschbach
DOI: 10.1109/PRDC.2011.55
Dependability is often a very abstract concept, because dependability-relevant events are very rare and frequently must not even be provoked during testing. In particular for software-intensive systems, it is very hard to identify correct causal relationships, i.e., minimal cut sets. Modern model-based approaches help here by, for example, computing minimal cut sets automatically. However, these methods always rely on a correct model of the environment, and their results are often not traceable or understandable for humans. Therefore, we suggest combining model-based analysis for deriving safety properties with virtual realities for ensuring model validity and traceability of results.
{"title":"Towards Making Dependability Visual -- Combining Model-Based Design and Virtual Realities","authors":"Matthias Güdemann, Michael Lipaczewski, F. Ortmeier, Marco Schumann, R. Eschbach","doi":"10.1109/PRDC.2011.55","DOIUrl":"https://doi.org/10.1109/PRDC.2011.55","url":null,"abstract":"Dependability is often a very abstract concept. The reason is that dependability implications shall be very rare and are often not even wanted to happen during testing. In particular for software-intensive systems, it is very hard to find correct causal relationships/minimal cut sets. Modern model-based approaches help here by computing for example minimal cut sets automatically. However, these methods always rely on a correct model of the environment. In addition, the results are often not traceable or understandable for humans. Therefore, we suggest combining model-based analysis for deriving safety properties with virtual realities for ensuring model validity and trace-ability of results.","PeriodicalId":254760,"journal":{"name":"2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128957213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Specification-Based Intrusion Detection for Advanced Metering Infrastructures
Kordy, S. Mauw, S. Radomirovic, P. Schweitzer, D Mougouei, M. Moghtadaei, S. Moradmand
DOI: 10.1109/PRDC.2011.30
It is critical to develop an effective way to monitor advanced metering infrastructures (AMI). To ensure the security and reliability of a modernized power grid, the current deployment of millions of smart meters requires the development of innovative situational awareness solutions to prevent compromised devices from impacting the stability of the grid and the reliability of the energy distribution infrastructure. To address this issue, we introduce a specification-based intrusion detection sensor that can be deployed in the field to identify security threats in real time. This sensor monitors the traffic among meters and access points at the network, transport, and application layers to ensure that devices are running in a secure state and their operations respect a specified security policy. It does this by implementing a set of constraints on transmissions made using the C12.22 standard protocol that ensure that all violations of the specified security policy will be detected. The soundness of these constraints was verified using a formal framework, and a prototype implementation of the sensor was evaluated with realistic AMI network traffic.
{"title":"Specification-Based Intrusion Detection for Advanced Metering Infrastructures","authors":"Kordy, S. Mauw, S. Radomirovic, P. Schweitzer, D Mougouei, M. Moghtadaei, S. Moradmand","doi":"10.1109/PRDC.2011.30","DOIUrl":"https://doi.org/10.1109/PRDC.2011.30","url":null,"abstract":"It is critical to develop an effective way to monitor advanced metering infrastructures (AMI). To ensure the security and reliability of a modernized power grid, the current deployment of millions of smart meters requires the development of innovative situational awareness solutions to prevent compromised devices from impacting the stability of the grid and the reliability of the energy distribution infrastructure. To address this issue, we introduce a specification-based intrusion detection sensor that can be deployed in the field to identify security threats in real time. This sensor monitors the traffic among meters and access points at the network, transport, and application layers to ensure that devices are running in a secure state and their operations respect a specified security policy. It does this by implementing a set of constraints on transmissions made using the C12.22 standard protocol that ensure that all violations of the specified security policy will be detected. The soundness of these constraints was verified using a formal framework, and a prototype implementation of the sensor was evaluated with realistic AMI network traffic.","PeriodicalId":254760,"journal":{"name":"2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117349254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Utilizing Hidden Markov Models for Formal Reliability Analysis of Real-Time Communication Systems with Errors
M. Sebastian, Philip Axer, R. Ernst
DOI: 10.1109/PRDC.2011.19
In the near future, embedded systems will face the phenomenon of increasing error rates, caused by a variety of error sources that have to be considered during the design process. In this paper we propose a method to derive the reliability of a real-time capable CAN bus system in the presence of errors. Individual errors on the CAN bus may be correlated in arbitrary ways, and the proposed algorithm covers this case. It is based on previous work on reliability analysis that was restricted to uncorrelated bit errors. To extend that approach, we first introduce a suitable error model for describing arbitrary correlations between bit errors. As a key novelty, we present an extended analysis procedure that takes this error model into account. The new approach is used to determine the effects of burst errors and to demonstrate the necessity of appropriate error models for reliability analysis.
{"title":"Utilizing Hidden Markov Models for Formal Reliability Analysis of Real-Time Communication Systems with Errors","authors":"M. Sebastian, Philip Axer, R. Ernst","doi":"10.1109/PRDC.2011.19","DOIUrl":"https://doi.org/10.1109/PRDC.2011.19","url":null,"abstract":"In the near future embedded systems will be faced with the phenomena of increasing error rates, caused by a variety of error sources that have to be considered during the design process. In this paper we propose a method to derive the reliability of a real-time capable CAN bus system with errors. Individual errors on the CAN bus might be correlated in arbitrary way, the proposed algorithm will cover this. It is based on a previous work on reliability analysis that has been restricted to uncorrelated bit errors. To extend this approach we first introduce a suitable error model to describe arbitrary correlations between bit errors. As a key novelty we present an extended analysis procedure that takes this error model into account. This new approach will be utilized to determine the effects of burst errors and to demonstrate the necessity of appropriate error models for reliability analysis.","PeriodicalId":254760,"journal":{"name":"2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing","volume":"40 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121014360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Method of Gate-Level Circuit Reliability Estimation Based on Iterative PTM Model
Jie Xiao, Jianhui Jiang, Xuguang Zhu, Chengtian Ouyang
DOI: 10.1109/PRDC.2011.45
The rapid development of nanotechnology has opened up new possibilities and introduced new challenges for circuit design, making new methods for accurate circuit reliability analysis important. A few methods for evaluating circuit reliability have been proposed in recent years. For example, the original probabilistic transfer matrix (PTM) model has large time and space overhead, so it can only handle small-scale circuits; the improved PTM model proposed in [2] can handle large-scale circuits, but it also has a large time overhead. In this paper, the concept of a macro-gate is defined and an iterative PTM model based on macro-gates is proposed. Based on this model, we give a circuit reliability evaluation algorithm that can calculate the reliability from the primary inputs up to any level of the circuit. The complexity of the proposed algorithm is linear in the number of macro-gates in the circuit. Experimental results show that the proposed method has the same accuracy as the PTM model but lower time overhead for large circuits.
{"title":"A Method of Gate-Level Circuit Reliability Estimation Based on Iterative PTM Model","authors":"Jie Xiao, Jianhui Jiang, Xuguang Zhu, Chengtian Ouyang","doi":"10.1109/PRDC.2011.45","DOIUrl":"https://doi.org/10.1109/PRDC.2011.45","url":null,"abstract":"The rapid development of nanotechnology has opened up new possibilities and introduced new challenges for circuit design. It is very important to study new analysis methods for accurate circuit reliability. Few methods for evaluating circuit reliability were proposed in recent years. For example, the original probabilistic transfer matrix (PTM) model has large time and space overhead, so it can only calculate small scale circuits, the improved PTM model proposed in [2] can handle large scale circuits but it also has large time overhead. In this paper, the concept of macro-gate is defined and an iterative PTM model based on macro-gate is proposed. Based on this model, a circuit reliability evaluation algorithm that can calculate the circuit reliability from primary input to any level of the circuit is given. The complexity of the proposed algorithm related to the number of macro-gates contained in the circuit is linear. Experimental results show that the proposed method has the same accuracy as the PTM model, but it has lower time overhead for large circuits.","PeriodicalId":254760,"journal":{"name":"2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130244711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}