The basic prerequisite for software reliability assessment is failure time, which must be acquired during a test based on an operational profile or on real usage. Failure data from software development, or from testing other than software reliability testing (SRT), cannot be used for reliability evaluation because such data include neither usage information nor failure times. This paper presents a software reliability virtual test (SRVT), which constructs a model of the software input space and a model of the known failure input space, through which possible failure times can be determined by matching randomly generated inputs against the failure space. An experiment comparing SRT and SRVT with different thresholds is presented to validate SRVT. Results indicate that SRVT saves a large amount of testing time while providing reliability assessment with acceptable accuracy.
{"title":"Software Reliability Virtual Testing for Reliability Assessment","authors":"J. Ai, Hanyu Pei, Liang Yan","doi":"10.1109/SERE-C.2014.24","DOIUrl":"https://doi.org/10.1109/SERE-C.2014.24","url":null,"abstract":"The basic condition of software reliability assessment is failure time, which must be acquired during a test based on operational profile or on real usage. Failure data from software development or other non-software reliability testing (SRT) cannot be used for reliability evaluation because such data do not include usage information and failure time. This paper presents a software reliability virtual test (SRVT), which constructs the software input space model and the known failure input space model through which possible failure time can be determined by matching the randomly generate inputs. An experiment comparing SRT and SRVT with different thresholds is introduced to verify SRVT. Results indicate that SRVT saves a large amount of testing time while providing reliability assessment with acceptable accuracy.","PeriodicalId":373062,"journal":{"name":"2014 IEEE Eighth International Conference on Software Security and Reliability-Companion","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133351339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
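The SRVT idea described above can be illustrated with a toy sketch: model the input space and the known failure-input region as intervals, draw random operational inputs, and record the virtual failure times at which an input falls in the failure region. The interval representation, the uniform operational profile, and all names here are illustrative assumptions, not the paper's actual models.

```python
import random

def virtual_reliability_test(input_space, failure_region, n_runs, seed=0):
    """Simulate SRVT: draw random inputs from the modeled input space and
    record the (virtual) failure times at which an input matches the
    known failure-input region."""
    rng = random.Random(seed)
    failure_times = []
    for t in range(1, n_runs + 1):
        x = rng.uniform(*input_space)      # random operational input
        lo, hi = failure_region
        if lo <= x <= hi:                  # input hits a known failure input
            failure_times.append(t)
    return failure_times

# Toy example: inputs in [0, 100], failures known for inputs in [40, 42]
times = virtual_reliability_test((0, 100), (40, 42), n_runs=1000)
```

The returned failure-time sequence is what a reliability growth model would then consume in place of data from a real operational-profile test.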
Wiem Tounsi, Benjamin Justus, N. Cuppens-Boulahia, F. Cuppens, Joaquín García
Pencil-and-paper ciphers are plausible solutions for providing lightweight protection to the communication of resource-constrained devices. A good example in this category is Schneier's Solitaire cipher. In this paper, we propose a probabilistic method for estimating the cycle length of Solitaire's keystream. We also present a variation of Solitaire's original design and evaluate the resulting construction in terms of predictability. We conduct statistical randomness tests on both the original design and the modified version using the NIST randomness test suite. The results show that our approach improves the randomness of the original Solitaire's output sequences.
{"title":"Probabilistic Cycle Detection for Schneier's Solitaire Keystream Algorithm","authors":"Wiem Tounsi, Benjamin Justus, N. Cuppens-Boulahia, F. Cuppens, Joaquín García","doi":"10.1109/SERE-C.2014.29","DOIUrl":"https://doi.org/10.1109/SERE-C.2014.29","url":null,"abstract":"Pencil-and-paper ciphers are plausible solutions that could provide lightweight protection to the communication of resource-constrained devices. A good example in this category is Schneier's Solitaire cipher. In this paper, we propose a probabilistic solution that is able to estimate Solitaire's keystream cycle length. We also present a variation of Solitaire's original design, and evaluate the resulting construction in terms of predictability. We conduct statistical randomness tests on both the original design and the modified version based on the NIST randomness test suite. The results show that our approach improves the randomness of original Solitaire's output sequences.","PeriodicalId":373062,"journal":{"name":"2014 IEEE Eighth International Conference on Software Security and Reliability-Companion","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131310208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
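For intuition about keystream cycle length, the sketch below uses a deterministic cycle finder, Brent's algorithm (unlike the paper's probabilistic estimator), on a toy state-update function standing in for Solitaire's deck transformation; the toy generator is an assumption for illustration only.

```python
def brent_cycle_length(f, x0):
    """Brent's cycle-finding algorithm: return the cycle length (lambda)
    of the eventually periodic sequence x0, f(x0), f(f(x0)), ..."""
    power = lam = 1
    tortoise, hare = x0, f(x0)
    while tortoise != hare:
        if power == lam:        # start a new power-of-two search window
            tortoise = hare
            power *= 2
            lam = 0
        hare = f(hare)
        lam += 1
    return lam

# Toy stand-in for a keystream state update (NOT actual Solitaire):
step = lambda s: (5 * s + 3) % 16
```

On a real deck-state transformation the state space is far too large to enumerate, which is why the paper resorts to a probabilistic estimate rather than an exact search like this one.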
Cyber-Physical Systems (CPS) are a new trend of real-time systems in the area of distributed embedded systems and networked agent systems. The first author introduced a specification language for real-time systems, called the spatial-temporal consistency language (STeC, for short), in 2010. In this paper, the authors introduce a novel clock system, called a hybrid clock, to specify both the logical and the chronometric time aspects of real-time systems. Operations on hybrid clocks and relations between hybrid clocks are introduced. A satisfaction relation between a hybrid clock and a design of a real-time system specified in the STeC language is defined. Some properties and CPS case studies are also given.
{"title":"A Hybrid Clock System Related to STeC Language","authors":"Yixiang Chen, Yuanrui Zhang","doi":"10.1109/SERE-C.2014.39","DOIUrl":"https://doi.org/10.1109/SERE-C.2014.39","url":null,"abstract":"Cyber-Physical Systems(CPS) is a new trend of real-time systems in the area of distributed embedded systems or networked agent systems. The first author introduced a specification language for real-time system, called as spatial-temporal consistency language (Shortly, STeC) in 2010. In this paper, the authors introduce a novel clock system, called as hybrid clock, to specify both logical and chronometric time aspect of real time system. Some operations on hybrid clocks and relations between hybrid clocks are introduced. A satisfaction relation between a hybrid clock and a STeC design of real time system specified in term with STeC language is defined. Some properties and CPS case studies are given in this paper.","PeriodicalId":373062,"journal":{"name":"2014 IEEE Eighth International Conference on Software Security and Reliability-Companion","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131488596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
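As a minimal sketch of the two time aspects a hybrid clock combines, the class below pairs a logical tick count with chronometric elapsed time; the operation and relation shown are simplified assumptions for illustration, not STeC's actual definitions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HybridClock:
    """Pair a logical time aspect (event count) with a chronometric one
    (physical elapsed time) in a single clock value."""
    ticks: int        # logical time: number of events observed
    seconds: float    # chronometric time: physical elapsed time

    def tick(self, dt):
        """Advance by one event that took dt seconds of physical time."""
        return HybridClock(self.ticks + 1, self.seconds + dt)

    def precedes(self, other):
        """One simple clock relation: earlier in both time aspects."""
        return self.ticks <= other.ticks and self.seconds <= other.seconds
```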
The arbitration inter-frame space (AIFS), minimum contention window, and maximum contention window are among the most important parameters of 802.11e, and the enhanced parameter tuning (EPT) algorithm is applied to adjust them. To achieve high quality of service (QoS), a simple and effective priority-combination strategy is proposed. In this strategy, an analysis of the internal competition among traffic classes is used to estimate the channel-busy probability. Through different settings of the above parameters, EPT reduces the collision probability, backing traffic off to the idle and zero states, and the resulting performance is analyzed. Simulation environments are built to test and validate the adaptive regulation mechanism and its parameters.
{"title":"A Parameters Tuning Algorithm in Wireless Networks","authors":"Hua-Ching Chen, Hsuan-Ming Feng, Benbin Chen, Donghui Guo","doi":"10.1109/SERE-C.2014.49","DOIUrl":"https://doi.org/10.1109/SERE-C.2014.49","url":null,"abstract":"The arbitration inter frame space, Contention window minimum and Contention window maximum are some of the most important parameters of 802.11e, and the enhanced parameters tuning algorithm is applied for their adjustment. To achieve the high quality of service (QoS), priority combinations strategy with simpleness and effectiveness is proposed. In such a strategy, the internal competition of business analysis methods is used to detect the channel busy probability. Via different settings of the above parameters, the EPT reduces the conflict probability to complete the performance analysis while retreating the traffic business to the idle and zero states. Simulation environments are built for test and validation the better adapted regulation mechanism with the parameters.","PeriodicalId":373062,"journal":{"name":"2014 IEEE Eighth International Conference on Software Security and Reliability-Companion","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130284508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
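The three parameters named above control per-priority channel access in 802.11e EDCA. The sketch below shows how they interact in backoff selection; the per-access-category values follow commonly cited EDCA defaults but should be treated as examples, and the function name and doubling rule are simplifying assumptions.

```python
import random

# Illustrative EDCA parameter sets per access category (AC):
# (AIFSN, CWmin, CWmax) for voice, video, best effort, background.
EDCA = {
    "AC_VO": (2, 3, 7),
    "AC_VI": (2, 7, 15),
    "AC_BE": (3, 15, 1023),
    "AC_BK": (7, 15, 1023),
}

def backoff_slots(ac, retries, rng=random):
    """Pick a random backoff for one transmission attempt: the contention
    window doubles on each retry, capped at CWmax, and AIFSN slots of
    mandatory waiting are added up front."""
    aifsn, cw_min, cw_max = EDCA[ac]
    cw = min((cw_min + 1) * (2 ** retries) - 1, cw_max)
    return aifsn + rng.randint(0, cw)
```

Smaller AIFSN/CW values give the voice category statistically earlier channel access than background traffic, which is the lever a tuning algorithm like EPT adjusts.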
We present, in this paper, ongoing work investigating a new error detection policy aimed at enhancing the system safety level, particularly communication integrity, in the presence of permanent errors (single and multiple). We consider critical embedded systems based on complex networks that include active interstage nodes, a property that increases the occurrence probability of permanent errors. The novelty of the proposed policy is that, unlike classical policies using a single error detection function, it is based on a set of different error detection functions. The functions used must be complementary in terms of detection capability in order to increase the resulting error detection capability. Our reference application for illustrating the proposed concepts is the Flight Control System (FCS). However, our objective is also to apply the proposed approach to other application domains sharing similar features and characteristics.
{"title":"A Multi-function Error Detection Policy to Enhance Communication Integrity in Critical Embedded Systems","authors":"Amira Zammali, A. D. Bonneval, Y. Crouzet","doi":"10.1109/SERE-C.2014.18","DOIUrl":"https://doi.org/10.1109/SERE-C.2014.18","url":null,"abstract":"We present, in this paper, ongoing work that investigates a new error detection policy aiming at enhancing the system safety level particularly communication integrity in the presence of permanent errors (single and multiple). We consider critical embedded systems which are based on complex networks including active interstage nodes. This property increases the occurrence probability of permanent errors. The novelty of the proposed policy lies in the fact that unlike classical policies using a single error detection function, it is based rather on a set of different error detection functions. The different used functions must be complementary in terms of detection capability in order to increase the resultant error detection capability. Our reference application to illustrate the proposed concepts is the Flight Control System (FCS). However, our objective is also to apply the proposed approach to other application domains sharing similar features and characteristics.","PeriodicalId":373062,"journal":{"name":"2014 IEEE Eighth International Conference on Software Security and Reliability-Companion","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128996220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
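The policy's core idea, several complementary detection functions rather than one, can be sketched as follows. CRC-32 and Adler-32 are stand-ins chosen for standard-library availability; the paper does not fix these concrete functions, so treat the pairing as an assumption.

```python
import zlib

def protect(frame: bytes):
    """Attach two different detection codes to a frame before sending."""
    return frame, zlib.crc32(frame), zlib.adler32(frame)

def check(frame: bytes, crc: int, adler: int) -> bool:
    """Accept a frame only if BOTH codes match. Functions with different
    mathematical structure miss different error patterns, so requiring
    agreement of both raises the combined detection capability."""
    return zlib.crc32(frame) == crc and zlib.adler32(frame) == adler
```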
As wireless networks become prevalent, rogue access points (APs) have become a serious security issue. Among the various kinds of rogue AP, a fake AP that fully forges the SSID and MAC address of a legitimate AP is the hardest to detect and the most likely to cause a security breach. Past academic work has mainly relied on packet statistics to detect fake APs, which is apt to trigger false alarms. To measure more precisely, this research proposes an algorithm based on the interval, sequence number, and timestamp of beacons. In our analysis, even when hackers deliberately synchronize the sequence numbers and timestamps of both the legitimate and the fake AP, we are still able to identify exactly whether a fake AP exists.
{"title":"An Accurate Fake Access Point Detection Method Based on Deviation of Beacon Time Interval","authors":"Kuo-Fong Kao, Wen-Ching Chen, Jui-Chi Chang, Heng-Te Chu","doi":"10.1109/SERE-C.2014.13","DOIUrl":"https://doi.org/10.1109/SERE-C.2014.13","url":null,"abstract":"As wireless networks being prevalent, rogue access points (AP) become a serious security issue. Among various rogue APs, a fake AP with fully forging the SSID and MAC address of a legitimate AP is the hardest thing to detect and the highest probability of causing security breach. Among the past academic papers, which scholars had published, mainly relied on statistics of packets to detect fake APs. They are apt to trigger false alarms. To measure more precisely, this research proposes an algorithm that is based on the interval, serial number, and timestamp of beacons. In our analysis, even the hackers deliberately synchronize the sequence numbers and timestamp of both legal and fake APs, we are still able to exactly identify whether a fake AP exists or not.","PeriodicalId":373062,"journal":{"name":"2014 IEEE Eighth International Conference on Software Security and Reliability-Companion","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133867024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
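A simplified version of the beacon-interval idea: a single AP emits beacons at a near-constant interval, so the spread of inter-beacon gaps observed for one BSSID stays small, while interleaved beacons from a legitimate AP and a fake twin inflate it. The threshold value and time units below are illustrative assumptions, not the paper's calibration.

```python
from statistics import pstdev

def interval_deviation(beacon_times):
    """Population standard deviation of successive beacon intervals
    observed for a single (SSID, MAC) identity."""
    gaps = [b - a for a, b in zip(beacon_times, beacon_times[1:])]
    return pstdev(gaps)

def fake_ap_suspected(beacon_times, threshold=5.0):
    """Flag the identity if the interval spread exceeds a threshold:
    one AP keeps a steady beat, two interleaved APs do not."""
    return interval_deviation(beacon_times) > threshold
```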
Attribute-based access control (ABAC) is a new generation of access control techniques. It enables fine-grained access control by using various attributes of authorization elements, facilitates collaborative policy administration within a large enterprise or across multiple organizations, and allows access control policies to be decoupled from application logic. Nevertheless, ABAC-based systems can be very complex to manage, and the high expressiveness of ABAC specifications increases the possibility of defects. Therefore, testing and verification are important for assuring that ABAC policies are specified and enforced correctly. This paper presents an overview of the existing work on specification, dynamic testing, and static verification of ABAC policies. It not only summarizes up-to-date research progress but also provides an understanding of the limitations and open issues of the existing work. It is expected to serve as a useful guideline for future research.
{"title":"Specification and Analysis of Attribute-Based Access Control Policies: An Overview","authors":"Dianxiang Xu, Yunpeng Zhang","doi":"10.1109/SERE-C.2014.21","DOIUrl":"https://doi.org/10.1109/SERE-C.2014.21","url":null,"abstract":"Attribute-based access control (ABAC) is a new generation of access control techniques. It enables fine-grained access control by using various attributes of authorization elements, facilitates collaborative policy administration within a large enterprise or across multiple organizations, and allows for decoupling of access control policies from application logic. Nevertheless, ABAC-based systems can be very complex to manage. High expressiveness of ABAC specifications also increases the possibility of having defects. Therefore testing and verification are important for assuring that ABAC policies are specified and enforced correctly. This paper presents an overview of the existing work on specification, dynamic testing, and static verification of ABAC policies. It not only summarizes the up-to-date research progresses, but also provides an understanding about the limitations and open issues of the existing work. It is expected to serve as useful guidelines for future research.","PeriodicalId":373062,"journal":{"name":"2014 IEEE Eighth International Conference on Software Security and Reliability-Companion","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121285491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
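To make the attribute-based model concrete, here is a minimal policy-evaluation sketch: rules are predicates over subject, resource, action, and environment attributes, and a request is granted if any rule matches. The example rule and the permit-if-any-rule-matches combination are illustrative assumptions, not a fragment of any specific ABAC language such as XACML.

```python
def abac_decide(policies, subject, resource, action, env):
    """Grant access iff some policy rule's attribute conditions all hold.
    Each rule is a predicate over the four attribute sets of a request."""
    request = {"subject": subject, "resource": resource,
               "action": action, "env": env}
    return any(rule(request) for rule in policies)

# Example rule: physicians may read records of their own department.
physician_read = lambda r: (
    r["subject"].get("role") == "physician"
    and r["action"] == "read"
    and r["subject"].get("dept") == r["resource"].get("dept")
)
```

Even this tiny rule shows why testing matters: swapping the department comparison for an unconditional check would silently widen access, a defect a test suite over attribute combinations would catch.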
W. E. Wong, Tej Gidvani, Alfonso Lopez, Ruizhi Gao, M. Horn
Software safety standards are commonly used to guide the development of safety-critical software systems. However, given the existence of multiple competing standards, it is critical to select the most appropriate one for a given project. We have developed a set of 15 criteria to evaluate each standard in terms of its usage, strengths, and limitations. Five standards are studied: the NASA Software Safety Standard, the FAA System Safety Handbook, MIL-STD-882D (US Department of Defense), DEF-STAN 00-56 (UK Ministry of Defence), and DO-178B (commercial avionics). Results of our evaluation suggest that different standards score differently with respect to each evaluation criterion; no standard outperforms the others on all criteria. The lessons learned from software-related accidents in which the standards were involved provide further insight into the pros and cons of each standard.
{"title":"Evaluating Software Safety Standards: A Systematic Review and Comparison","authors":"W. E. Wong, Tej Gidvani, Alfonso Lopez, Ruizhi Gao, M. Horn","doi":"10.1109/SERE-C.2014.25","DOIUrl":"https://doi.org/10.1109/SERE-C.2014.25","url":null,"abstract":"Software safety standards are commonly used to guide the development of safety-critical software systems. However, given the existence of multiple competing standards, it is critical to select the most appropriate one for a given project. We have developed a set of 15 criteria to evaluate each standard in terms of its usage, strengths, and limitations. Five standards are studied, including a NASA Software Safety Standard, an FAA System Safety Handbook, MIL-STD-882D (US Department of Defense), DEF-STAN 00-56 (UK Ministry of Defense), and DO-178B (Commercial avionics). Results of our evaluation suggest that different standards score differently with respect to each evaluation criterion. No standard performs better than others on all the criteria. The lessons learned from software-related accidents in which the standards were involved provide further insights on the pros and cons of using each standard.","PeriodicalId":373062,"journal":{"name":"2014 IEEE Eighth International Conference on Software Security and Reliability-Companion","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121906825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In automatic code verification, programmers often need to provide logical annotations in the form of pre-/post-conditions and loop invariants. In this paper, we propose a framework that automatically infers invariants of loops that manipulate commonly used data structures, including one-dimensional arrays, singly-linked lists, doubly-linked lists, and static lists. In practice, a majority of the loops operating on such data structures iterate over their elements, and the invariants of such loops are usually similar in form to their corresponding post-conditions. The framework exploits this observation by generating invariant candidates automatically from a given post-condition following several heuristics. These candidates are then validated via the SMT solver Z3 and the weakest-precondition calculator provided in the interactive code-verification tool Accumulator. The framework, implemented for a small C-like language, suffices to infer suitable loop invariants for a range of loops with respect to given post-conditions. It has been integrated into Accumulator to ease verification by alleviating the burden of providing loop invariants manually.
{"title":"Post-condition-Directed Invariant Inference for Loops over Data Structures","authors":"Juan Zhai, Hanfei Wang, Jianhua Zhao","doi":"10.1109/SERE-C.2014.40","DOIUrl":"https://doi.org/10.1109/SERE-C.2014.40","url":null,"abstract":"In the automatic code verification, it is often necessary for programmers to provide logical annotations in the form of pre-/post-conditions and loop invariants. In this paper, we propose a framework that automatically infers loop invariants of loops manipulating commonly-used data structures. These data structures include one-dimensional arrays, singly-linked lists, doubly-linked lists and static lists. In practical cases, a majority of the loops operating on such data structures work by iterating over the elements of these data structures. The loop invariants of this kind of loops are usually similar in form with their corresponding post-conditions. The framework takes advantage of this observation by generating invariant candidates automatically from a given post-condition following several heuristics. These invariant candidates are subsequently validated via the SMT solver Z3 and the weakest-precondition calculator provided in the interactive code-verification tool Accumulator. The framework, which has been implemented for a small C-like language, suffices to infer suitable loop invariants of a range of loops w.r.t. given post-conditions. The framework has been integrated into the tool Accumulator to ease the verification tasks by alleviating the burden of providing loop invariants manually.","PeriodicalId":373062,"journal":{"name":"2014 IEEE Eighth International Conference on Software Security and Reliability-Companion","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129940270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
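The heuristic of deriving invariant candidates from the post-condition can be sketched for an array-summation loop: textually substitute the loop bound with the loop counter, then validate the candidates. In place of Z3 (which the paper uses), the check below merely filters candidates against recorded loop-head states; everything here, including the substitution and the trace-based check, is a simplified assumption.

```python
def candidate_invariants(postcondition, bound, counter):
    """Heuristic: weaken the post-condition by textually substituting
    the loop bound with the loop counter (crude: assumes the bound name
    does not occur inside other identifiers)."""
    return [postcondition.replace(bound, counter)]

def holds_on_traces(candidate, traces):
    """Cheap stand-in for SMT validation: evaluate the candidate
    expression in every recorded loop-head state."""
    return all(eval(candidate, {}, dict(state)) for state in traces)

# Loop under verification:  for i in range(n): s += a[i]
# Post-condition:           s == sum(a[:n])
post = "s == sum(a[:n])"
cands = candidate_invariants(post, "n", "i")   # candidate: s == sum(a[:i])

# Record loop-head states for a concrete run with a = [1, 2, 3].
a = [1, 2, 3]
traces, s = [], 0
for i in range(len(a) + 1):
    traces.append({"a": a, "i": i, "s": s, "n": len(a)})
    if i < len(a):
        s += a[i]
```

A candidate surviving this filter would still need a real proof (inductiveness plus the post-condition at exit), which is where the SMT solver and weakest-precondition calculator come in.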