Compositional Analysis of Probabilistic Timed Graph Transformation Systems
Pub Date: 2022-11-30 | DOI: https://dl.acm.org/doi/10.1145/3572782
Maria Maximova, Sven Schneider, Holger Giese
The analysis of behavioral models is of high importance for cyber-physical systems, as such systems often encompass complex behavior arising from, e.g., concurrent components with mutual exclusion or probabilistic failures on demand. The rule-based formalism of Probabilistic Timed Graph Transformation Systems (PTGTSs) is a suitable choice when the models representing the states of the system can be understood as graphs and both timed and probabilistic behavior are important. However, model checking PTGTSs is limited to systems with rather small state spaces.
We present an approach for the analysis of large-scale systems modeled as PTGTSs by systematically decomposing their state spaces into manageable fragments. To obtain qualitative and quantitative analysis results for a large-scale system, we verify that results obtained for its fragments serve as overapproximations of the corresponding results for the large-scale system. Hence, our approach allows for the detection of violations of qualitative and quantitative safety properties for the large-scale system under analysis. We consider a running example in which shuttles drive on tracks of a large-scale topology and autonomously coordinate their local behavior with other shuttles nearby. For this running example, we verify that (a) shuttles can always make the expected forward progress (captured by several properties), (b) shuttles never collide, and (c) shuttles are unlikely to execute emergency brakes in two scenarios. In our evaluation, we apply an implementation of our approach in the tool AutoGraph to our running example.
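To make the fragment-based workflow concrete, here is a minimal, purely illustrative Python sketch of the checking loop. The fragment construction, the `max_emergency_brake_probability` backend, and all parameter values are assumptions made for illustration, not the AutoGraph implementation; only the reporting logic mirrors the abstract's claim that fragment results overapproximate those of the large-scale system.

```python
# Illustrative sketch only (not AutoGraph): decompose a linear track topology
# into overlapping fragments and check a quantitative safety bound per fragment.
# The analysis backend below is a hypothetical placeholder.

def overlapping_fragments(tracks, size=5, overlap=2):
    """Cut a list of track segments into overlapping windows."""
    step = size - overlap
    return [tracks[i:i + size]
            for i in range(0, max(len(tracks) - overlap, 1), step)]

def max_emergency_brake_probability(fragment):
    # Placeholder for model checking one PTGTS fragment (e.g. the maximal
    # probability of an emergency brake within a time bound).
    raise NotImplementedError

def detect_violations(tracks, threshold):
    """Report fragments whose bound exceeds the threshold.

    Since fragment results overapproximate the corresponding results of the
    large-scale system, these reports flag violations of the quantitative
    safety property for the system under analysis.
    """
    violations = []
    for fragment in overlapping_fragments(tracks):
        bound = max_emergency_brake_probability(fragment)
        if bound > threshold:
            violations.append((fragment, bound))
    return violations
```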
{"title":"Compositional Analysis of Probabilistic Timed Graph Transformation Systems","authors":"Maria Maximova, Sven Schneider, Holger Giese","doi":"https://dl.acm.org/doi/10.1145/3572782","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3572782","url":null,"abstract":"<p>The analysis of behavioral models is of high importance for cyber-physical systems, as the systems often encompass complex behavior based on e.g. concurrent components with mutual exclusion or probabilistic failures on demand. The rule-based formalism of Probabilistic Timed Graph Transformation Systems (PTGTSs) is a suitable choice when the models representing states of the system can be understood as graphs and timed and probabilistic behavior is important. However, model checking PTGTSs is limited to systems with rather small state spaces. </p><p>We present an approach for the analysis of large-scale systems modeled as PTGTSs by systematically decomposing their state spaces into manageable fragments. To obtain qualitative and quantitative analysis results for a large-scale system, we verify that results obtained for its fragments serve as overapproximations for the corresponding results of the large-scale system. Hence, our approach allows for the detection of violations of qualitative and quantitative safety properties for the large-scale system under analysis. We consider a running example in which shuttles drive on tracks of a large-scale topology and autonomously coordinate their local behavior with other shuttles nearby. For this running example, we verify that (a) shuttles can always make the expected forward progress using several properties, (b) shuttles never collide, and (c) shuttles are unlikely to execute emergency brakes in two scenarios. In our evaluation, we apply an implementation of our approach in the tool <span>AutoGraph</span> to our running example.</p>","PeriodicalId":50432,"journal":{"name":"Formal Aspects of Computing","volume":"24 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2022-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138517760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Debugging Game for Probabilistic Models
Pub Date: 2022-09-20 | DOI: https://dl.acm.org/doi/10.1145/3536429
Hichem Debbi
One of the major advantages of model checking over other formal methods is its ability to generate a counterexample when a model does not satisfy its specification. A counterexample is an error trace that helps to locate the source of the error; it therefore represents a valuable tool for debugging. In Probabilistic Model Checking (PMC), the task of counterexample generation has a quantitative aspect. Unlike the previous methods proposed for conventional model checking, which generate the counterexample as a single path ending in a bad state representing the failure, the task in PMC is completely different: a counterexample in PMC is a set of evidence paths (diagnostic paths) that satisfy a path formula and whose combined probability mass violates the probability threshold.
Counterexample generation alone is not sufficient for finding the exact source of the error. Therefore, in conventional model checking, many debugging techniques have been proposed that act on the generated counterexamples to locate the source of the error. In PMC, debugging counterexamples is more challenging, since a probabilistic counterexample consists of multiple paths, each carrying probability. In this article, we propose a debugging technique based on stochastic games to analyze probabilistic counterexamples generated for probabilistic models described as Markov chains in the PRISM language. The technique is based on the idea of considering the modules composing the system as players of a reachability game, whose actions contribute to the evolution of the game. Through several case studies, we show that our technique is very effective for systems comprising multiple components. The results are also validated by a debugging tool we introduce, called GEPCX (Game Explainer of Probabilistic Counterexamples).
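For readers unfamiliar with the notion used above, the following Python sketch shows what a probabilistic counterexample is for a toy discrete-time Markov chain and a PRISM-style property P<=0.3 [ F "bad" ]: a set of diagnostic paths to the bad state whose accumulated probability exceeds the threshold. The chain, the enumeration strategy, and the threshold are illustrative assumptions; this is not the game-based debugging technique of the article.

```python
# Toy DTMC and the standard notion of a probabilistic counterexample for a
# property like P<=0.3 [ F "bad" ]: a set of paths reaching "bad" whose total
# probability exceeds 0.3. (Illustration only; an acyclic model is assumed.)
from heapq import heappush, heappop

# state -> list of (successor, transition probability)
dtmc = {
    "init": [("a", 0.5), ("b", 0.5)],
    "a":    [("bad", 0.6), ("ok", 0.4)],
    "b":    [("bad", 0.2), ("ok", 0.8)],
    "bad":  [], "ok": [],
}

def most_probable_paths(dtmc, start, target, limit=100):
    """Enumerate paths from start to target in order of decreasing probability."""
    queue = [(-1.0, [start])]          # min-heap keyed on negated probability
    found = []
    while queue and len(found) < limit:
        neg_p, path = heappop(queue)
        state = path[-1]
        if state == target:
            found.append((-neg_p, path))
            continue
        for succ, p in dtmc[state]:
            heappush(queue, (neg_p * p, path + [succ]))
    return found

def counterexample(dtmc, start, target, threshold):
    """Collect the most probable diagnostic paths until their mass exceeds threshold."""
    mass, evidence = 0.0, []
    for p, path in most_probable_paths(dtmc, start, target):
        evidence.append((p, path))
        mass += p
        if mass > threshold:
            return mass, evidence
    return None                        # the property is not refuted this way

print(counterexample(dtmc, "init", "bad", 0.3))
# (0.4, [(0.3, ['init', 'a', 'bad']), (0.1, ['init', 'b', 'bad'])])
```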
{"title":"A Debugging Game for Probabilistic Models","authors":"Hichem Debbi","doi":"https://dl.acm.org/doi/10.1145/3536429","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3536429","url":null,"abstract":"<p>One of the major advantages of model checking over other formal methods is its ability to generate a counterexample when a model does not satisfy is its specification. A counterexample is an error trace that helps to locate the source of the error. Therefore, the counterexample represents a valuable tool for debugging. In Probabilistic Model Checking (PMC), the task of counterexample generation has a quantitative aspect. Unlike the previous methods proposed for conventional model checking that generate the counterexample as a single path ending with a bad state representing the failure, the task in PMC is completely different. A counterexample in PMC is a set of evidences or diagnostic paths that satisfy a path formula, whose probability mass violates the probability threshold. </p><p>Counterexample generation is not sufficient for finding the exact source of the error. Therefore, in conventional model checking, many debugging techniques have been proposed to act on the counterexamples generated to locate the source of the error. In PMC, debugging counterexamples is more challenging, since the probabilistic counterexample consists of multiple paths and it is probabilistic. In this article, we propose a debugging technique based on stochastic games to analyze probabilistic counterexamples generated for probabilistic models described as Markov chains in PRISM language. The technique is based mainly on the idea of considering the modules composing the system as players of a reachability game, whose actions contribute to the evolution of the game. Through many case studies, we will show that our technique is very effective for systems employing multiple components. The results are also validated by introducing a debugging tool called GEPCX (Game Explainer of Probabilistic Counterexamples).</p>","PeriodicalId":50432,"journal":{"name":"Formal Aspects of Computing","volume":"72 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2022-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138517773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast Automated Abstract Machine Repair Using Simultaneous Modifications and Refactoring
Pub Date: 2022-09-20 | DOI: https://dl.acm.org/doi/10.1145/3536430
Cheng-Hao Cai, Jing Sun, Gillian Dobbie, Zhé Hóu, Hadrien Bride, Jin Song Dong, Scott Uk-Jin Lee
Automated model repair techniques enable machines to synthesise patches that ensure models meet given requirements. B-repair, an existing model repair approach, assists users in repairing erroneous models written in the B formal method, but it repairs large models inefficiently because repairs are applied successively. In this work, we improve the performance of B-repair using simultaneous modifications, repair refactoring, and better classifiers. The simultaneous modifications can eliminate multiple invariant violations at a time, so the average time to repair each fault is reduced. Further, the modifications can be refactored to reduce the length of the repair. Better classifiers allow more accurate and general repairs to be performed while avoiding inefficient brute-force searches. We conducted an empirical study demonstrating that the improved implementation leads to the entire model repair process achieving higher accuracy, generality, and efficiency.
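As a rough intuition for the difference between successive and simultaneous repair, the toy Python sketch below repairs a state that violates two invariants with a single combined modification. The state, invariants, and candidate repair space are hypothetical; B-repair itself works on B abstract machines and uses learned classifiers rather than this brute-force search.

```python
# Toy sketch: eliminate several invariant violations with one simultaneous
# modification (illustration only; not B-repair's actual algorithm).
from itertools import product

state = {"x": -3, "y": 12}
invariants = [
    lambda s: s["x"] >= 0,            # INV1: x is a natural number
    lambda s: s["x"] + s["y"] <= 8,   # INV2: bounded resource usage
]

# Hypothetical candidate values per variable (the repair search space).
candidates = {"x": [-3, 0, 1, 2], "y": [12, 8, 6, 4]}

def violated(s):
    return [i for i, inv in enumerate(invariants) if not inv(s)]

def simultaneous_repair():
    """Find one combined modification that removes all violations at once."""
    for values in product(*candidates.values()):
        candidate = dict(zip(candidates, values))
        if not violated(candidate):
            return candidate
    return None

print(violated(state))          # [0, 1] -- both invariants are violated
print(simultaneous_repair())    # {'x': 0, 'y': 8} -- one modification fixes both
```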
{"title":"Fast Automated Abstract Machine Repair Using Simultaneous Modifications and Refactoring","authors":"Cheng-Hao Cai, Jing Sun, Gillian Dobbie, Zhé Hóu, Hadrien Bride, Jin Song Dong, Scott Uk-Jin Lee","doi":"https://dl.acm.org/doi/10.1145/3536430","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3536430","url":null,"abstract":"<p>Automated model repair techniques enable machines to synthesise patches that ensure models meet given requirements. B-repair, which is an existing model repair approach, assists users in repairing erroneous models in the B formal method, but repairing large models is inefficient due to successive applications of repair. In this work, we improve the performance of B-repair using simultaneous modifications, repair refactoring, and better classifiers. The simultaneous modifications can eliminate multiple invariant violations at a time so the average time to repair each fault can be reduced. Further, the modifications can be refactored to reduce the length of repair. The purpose of using better classifiers is to perform more accurate and general repairs and avoid inefficient brute-force searches. We conducted an empirical study to demonstrate that the improved implementation leads to the entire model process achieving higher accuracy, generality, and efficiency.</p>","PeriodicalId":50432,"journal":{"name":"Formal Aspects of Computing","volume":"3 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2022-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138517759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Probabilistic Bigraphs
Pub Date: 2022-09-19 | DOI: https://dl.acm.org/doi/10.1145/3545180
Blair Archibald, Muffy Calder, Michele Sevegnani
Bigraphs are a universal computational modelling formalism for the spatial and temporal evolution of a system in which entities can be added and removed. We extend bigraphs to probabilistic bigraphs, and then again to action bigraphs, which include non-determinism and rewards. The extensions are implemented in the BigraphER toolkit and illustrated through examples of virus spread in computer networks and data harvesting in wireless sensor systems. BigraphER also supports the existing stochastic bigraphs extension of Krivine et al., and using BigraphER we give, for the first time, a direct implementation of the membrane budding model used to motivate stochastic bigraphs.
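The core idea behind the probabilistic extension, as described in the abstract, is to attach weights to rewrite rules and normalise them over the rules applicable in the current state. The Python sketch below shows that normalisation on a toy virus-spread state; the network, rules, and weights are invented for illustration, and real models would be written in BigraphER's own modelling language rather than Python.

```python
# Sketch of weight-normalised probabilistic rewriting, the general mechanism
# behind probabilistic bigraphs (weights on reaction rules, normalised over the
# rules applicable in the current state). Hypothetical virus-spread example.

network = {"a": {"b", "c"}, "b": {"a"}, "c": {"a"}}   # machine adjacency

def applicable_rules(infected):
    """Weighted rewrites enabled in this state: infect a neighbour or recover."""
    rules = []
    for m in infected:
        for n in network[m] - infected:
            rules.append((f"infect {n}", 3.0, frozenset(infected | {n})))
        rules.append((f"recover {m}", 1.0, frozenset(infected - {m})))
    return rules

def step_distribution(infected):
    """Normalise rule weights into a probability distribution over successor states."""
    rules = applicable_rules(infected)
    total = sum(weight for _, weight, _ in rules)
    return [(name, weight / total, nxt) for name, weight, nxt in rules]

for name, p, nxt in step_distribution(frozenset({"a"})):
    print(f"{name}: p = {p:.2f} -> infected = {sorted(nxt)}")
# Three rewrites are enabled with weights 3 : 3 : 1, giving probabilities
# 0.43, 0.43 and 0.14.
```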
{"title":"Probabilistic Bigraphs","authors":"Blair Archibald, Muffy Calder, Michele Sevegnani","doi":"https://dl.acm.org/doi/10.1145/3545180","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3545180","url":null,"abstract":"<p>Bigraphs are a universal computational modelling formalism for the spatial and temporal evolution of a system in which entities can be added and removed. We extend bigraphs to probabilistic bigraphs, and then again to action bigraphs, which include non-determinism and rewards. The extensions are implemented in the BigraphER toolkit and illustrated through examples of virus spread in computer networks and data harvesting in wireless sensor systems. BigraphER also supports the existing <i>stochastic bigraphs</i> extension of Krivine et al. and using BigraphER we give, for the first time, a direct implementation of the membrane budding model used to motivate stochastic bigraphs.</p>","PeriodicalId":50432,"journal":{"name":"Formal Aspects of Computing","volume":"49 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2022-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138517801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Formal Verification of Transcendental Fixed- and Floating-point Algorithms using an Automatic Theorem Prover
Pub Date: 2022-09-19 | DOI: https://dl.acm.org/doi/10.1145/3543670
Samuel Coward, Lawrence Paulson, Theo Drane, Emiliano Morini
We present a method for formal verification of transcendental hardware and software algorithms that scales to higher precision without suffering an exponential growth in runtimes. A class of implementations using piecewise polynomial approximation to compute the result is verified using MetiTarski, an automated theorem prover, which verifies a range of inputs for each call. The method was applied to commercial implementations from Cadence Design Systems with significant runtime gains over exhaustive testing methods and was successful in proving that the expected accuracy of one implementation was overly optimistic. Reproducing the verification of a sine implementation in software, previously done using an alternative theorem-proving technique, demonstrates that the MetiTarski approach is a viable competitor. Verification of a 52-bit implementation of the square root function highlights the method’s high-precision capabilities.
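The kind of per-interval claim being verified can be illustrated with a small numerical experiment. The sketch below samples the error of a degree-5 Taylor polynomial for sine on one segment; the polynomial, segment, and sampling are illustrative assumptions and are not the verified Cadence implementation, and sampling merely suggests the bound that a MetiTarski proof would certify for every real input in the interval.

```python
# Sample the approximation error of one hypothetical polynomial segment for
# sin(x). MetiTarski-style verification proves such bounds symbolically for
# all x in the interval; sampling, like exhaustive testing, only spot-checks.
import math

def poly_sin(x):
    # Degree-5 Taylor polynomial of sin around 0: x - x^3/6 + x^5/120.
    return x - x**3 / 6.0 + x**5 / 120.0

def max_sampled_error(lo, hi, samples=100_000):
    step = (hi - lo) / samples
    return max(abs(poly_sin(lo + i * step) - math.sin(lo + i * step))
               for i in range(samples + 1))

err = max_sampled_error(0.0, math.pi / 4)
print(f"max sampled error on [0, pi/4]: {err:.1e}")   # about 3.6e-05
```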
{"title":"Formal Verification of Transcendental Fixed- and Floating-point Algorithms using an Automatic Theorem Prover","authors":"Samuel Coward, Lawrence Paulson, Theo Drane, Emiliano Morini","doi":"https://dl.acm.org/doi/10.1145/3543670","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3543670","url":null,"abstract":"<p>We present a method for formal verification of transcendental hardware and software algorithms that scales to higher precision without suffering an exponential growth in runtimes. A class of implementations using piecewise polynomial approximation to compute the result is verified using MetiTarski, an automated theorem prover, which verifies a range of inputs for each call. The method was applied to commercial implementations from Cadence Design Systems with significant runtime gains over exhaustive testing methods and was successful in proving that the expected accuracy of one implementation was overly optimistic. Reproducing the verification of a sine implementation in software, previously done using an alternative theorem-proving technique, demonstrates that the MetiTarski approach is a viable competitor. Verification of a 52-bit implementation of the square root function highlights the method’s high-precision capabilities.</p>","PeriodicalId":50432,"journal":{"name":"Formal Aspects of Computing","volume":"26 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2022-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138517775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Refinement-based Formal Development of Cyber-physical Railway Signalling Systems
Pub Date: 2022-08-27 | DOI: https://doi.org/10.1145/3524052
Y. Aït-Ameur, Sergiy Bogomolov, G. Dupont, A. Iliasov, A. Romanovsky, P. Stankaitis
For years, formal methods have been successfully applied in the railway domain to formally demonstrate the safety of railway systems. Despite that, little has been done in the field of formal methods to address the cyber-physical nature of modern railway signalling systems. In this article, we present an approach for the formal development of cyber-physical railway signalling systems that is based on refinement-based modelling and proof-based verification. Our approach utilises the Event-B formal specification language together with hybrid-system and communication modelling patterns to develop a generic hybrid railway signalling system model that can be further refined to capture a specific railway signalling system. The main technical contribution of this article is the refinement of the hybrid train Event-B model with other railway signalling sub-systems. The complete model of the cyber-physical railway signalling system was formally proved to ensure safe rolling stock separation and to prevent derailment. Furthermore, the article demonstrates the advantage of the refinement-based development approach for cyber-physical systems, which enables problem decomposition and, in turn, a reduction in the verification and modelling effort.
{"title":"A Refinement-based Formal Development of Cyber-physical Railway Signalling Systems","authors":"Y. Aït-Ameur, Sergiy Bogomolov, G. Dupont, A. Iliasov, A. Romanovsky, P. Stankaitis","doi":"10.1145/3524052","DOIUrl":"https://doi.org/10.1145/3524052","url":null,"abstract":"For years, formal methods have been successfully applied in the railway domain to formally demonstrate safety of railway systems. Despite that, little has been done in the field of formal methods to address the cyber-physical nature of modern railway signalling systems. In this article, we present an approach for a formal development of cyber-physical railway signalling systems that is based on a refinement-based modelling and proof-based verification. Our approach utilises the Event-B formal specification language together with a hybrid system and communication modelling patterns to developing a generic hybrid railway signalling system model that can be further refined to capture a specific railway signalling system. The main technical contribution of this article is the refinement of the hybrid train Event-B model with other railway signalling sub-systems. The complete model of the cyber-physical railway signalling system was formally proved to ensure a safe rolling stock separation and prevent their derailment. Furthermore, the article demonstrates the advantage of the refinement-based development approach of cyber-physical systems, which enables a problem decomposition and in turn reduction in the verification and modelling effort.","PeriodicalId":50432,"journal":{"name":"Formal Aspects of Computing","volume":"35 1","pages":"1 - 1"},"PeriodicalIF":1.0,"publicationDate":"2022-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45878598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Algebra-Based Reasoning for Loop Synthesis
Pub Date: 2022-07-21 | DOI: https://dl.acm.org/doi/10.1145/3527458
Andreas Humenberger, Daneshvar Amrollahi, Nikolaj Bjørner, Laura Kovács
Provably correct software is one of the key challenges of our software-driven society. Program synthesis, the task of constructing a program satisfying a given specification, is one strategy for achieving this: the result is a program that is correct by design. As in the domain of program verification, handling loops is one of the main ingredients of a successful synthesis procedure.
We present an algorithm for synthesizing loops satisfying a given polynomial loop invariant. The class of loops we consider can be modeled by a system of algebraic recurrence equations with constant coefficients, thus encoding program loops with affine operations among program variables. We turn the task of loop synthesis into a polynomial constraint problem by precisely characterizing the set of all loops satisfying the given invariant. We prove soundness of our approach, as well as its completeness with respect to an a priori fixed upper bound on the number of program variables. Our work has applications in program verification, namely synthesizing loops satisfying a given polynomial loop invariant, as well as in generating number sequences from algebraic relations. To assess the viability of the methodology and of heuristics for synthesizing loops, we implement and evaluate the method in the Absynth tool.
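To illustrate the reduction of loop synthesis to polynomial constraints, the sympy sketch below synthesises affine updates preserving the invariant x = y^2. The parameterisation and the particular invariant are chosen for illustration and do not reproduce the Absynth algorithm or its recurrence-based characterisation.

```python
# Illustration of reducing loop synthesis to polynomial constraints (not the
# Absynth algorithm): synthesise affine updates for x and y that preserve the
# polynomial loop invariant x - y^2 = 0.
import sympy as sp

x, y = sp.symbols("x y")
a1, a2, a3, b1, b2 = sp.symbols("a1 a2 a3 b1 b2")

# Candidate affine loop body:  x := a1*x + a2*y + a3;  y := b1*y + b2
x_next = a1 * x + a2 * y + a3
y_next = b1 * y + b2

# Invariant preservation: assuming x = y^2 before the body, require
# x_next = y_next^2 afterwards. Substituting x by y^2 must give a polynomial
# identity in y, so every coefficient of the residual must vanish.
residual = sp.expand((x_next - y_next**2).subs(x, y**2))
constraints = sp.Poly(residual, y).coeffs()

solutions = sp.solve(constraints, [a1, a2, a3], dict=True)
print(solutions)
# [{a1: b1**2, a2: 2*b1*b2, a3: b2**2}]: every such loop has the shape
#   x := b1**2*x + 2*b1*b2*y + b2**2;  y := b1*y + b2
# For b1 = b2 = 1 and initial x = y = 0 this is x := x + 2*y + 1; y := y + 1,
# which keeps x = y**2 in every iteration (x enumerates the perfect squares).
```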
{"title":"Algebra-Based Reasoning for Loop Synthesis","authors":"Andreas Humenberger, Daneshvar Amrollahi, Nikolaj Bjørner, Laura Kovács","doi":"https://dl.acm.org/doi/10.1145/3527458","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3527458","url":null,"abstract":"<p>Provably correct software is one of the key challenges of our software-driven society. Program synthesis—the task of constructing a program satisfying a given specification—is one strategy for achieving this. The result of this task is then a program that is correct by design. As in the domain of program verification, handling loops is one of the main ingredients to a successful synthesis procedure.</p><p>We present an algorithm for synthesizing loops satisfying a given polynomial loop invariant. The class of loops we are considering can be modeled by a system of algebraic recurrence equations with constant coefficients, thus encoding program loops with affine operations among program variables. We turn the task of loop synthesis into a polynomial constraint problem by precisely characterizing the set of all loops satisfying the given invariant. We prove soundness of our approach, as well as its completeness with respect to an <i>a priori</i> fixed upper bound on the number of program variables. Our work has applications toward synthesizing loops satisfying a given polynomial loop invariant—program verification—as well as generating number sequences from algebraic relations. To understand viability of the methodology and heuristics for synthesizing loops, we implement and evaluate the method using the <monospace>Absynth</monospace> tool.</p>","PeriodicalId":50432,"journal":{"name":"Formal Aspects of Computing","volume":"21 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2022-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138517802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compositional Verification of Railway Interlocking Systems
Pub Date: 2022-07-21 | DOI: https://doi.org/10.1145/3549736
A. Haxthausen, A. Fantechi
Model checking techniques have often been applied to the verification of railway interlocking systems, which are responsible for guiding trains safely through a given railway network. However, these techniques fail to scale to the interlocking systems controlling large stations, composed of hundreds and even thousands of controlled entities, due to the state space explosion problem. Interlocking systems do exhibit a certain degree of locality that allows some reasoning to be restricted to the entities involved in the train movements, but safe routing through a complex station layout requires a global reservation policy, which can require global state conditions to be taken into account. In this article, we present a compositional approach aimed at splitting the verification of a large interlocking system into the verification of smaller fragments, exploiting in each fragment a proper abstraction of the global information on the routing state. A proof is given of the thesis that verifying the safety of the smaller fragments is sufficient to verify the safety of the whole network. Experiments using this compositional approach have shown important gains in verification performance, as well as in the size of station layouts that can be handled.
{"title":"Compositional Verification of Railway Interlocking Systems","authors":"A. Haxthausen, A. Fantechi","doi":"10.1145/3549736","DOIUrl":"https://doi.org/10.1145/3549736","url":null,"abstract":"Model checking techniques have often been applied to the verification of railway interlocking systems, responsible for guiding trains safely through a given railway network. However, these techniques fail to scale to the interlocking systems controlling large stations, composed of hundreds and even thousands of controlled entities, due to the state space explosion problem. Indeed, interlocking systems exhibit a certain degree of locality that allows some reasoning only on the mere set of entities that regard the train movements, but safe routing through a complex station layout requires a global reservation policy, which can require global state conditions to be taken into account. In this article, we present a compositional approach aimed at chopping the verification of a large interlocking system into that of smaller fragments, exploiting in each fragment a proper abstraction of the global information on routing state. A proof is given of the thesis that verifying the safety of the smaller fragments is sufficient to verify the safety of the whole network. Experiments using this compositional approach have shown important gains in performance of the verification, as well as in the size of affordable station layouts.","PeriodicalId":50432,"journal":{"name":"Formal Aspects of Computing","volume":"35 1","pages":"1 - 46"},"PeriodicalIF":1.0,"publicationDate":"2022-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43917506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Survey of Practical Formal Methods for Security
Pub Date: 2022-07-05 | DOI: https://dl.acm.org/doi/full/10.1145/3522582
Tomas Kulik, Brijesh Dongol, Peter Gorm Larsen, Hugo Daniel Macedo, Steve Schneider, Peter W. V. Tran-Jørgensen, James Woodcock
In today’s world, critical infrastructure is often controlled by computing systems. This introduces new risks of cyber attacks, which can compromise the security and disrupt the functionality of these systems. It is therefore necessary to build such systems with strong guarantees of resiliency against cyber attacks. One way to achieve this level of assurance is formal verification, which provides proofs of system compliance with desired cyber security properties. This article reviews the use of Formal Methods (FM) in aspects of cyber security and in safety-critical systems. We split FM into three main classes: theorem proving, model checking, and lightweight FM. To allow the different uses of FM to be compared, we define a common set of terms. We further develop categories based on the type of computing system to which FM are applied. Solutions in each class and category are presented, discussed, compared, and summarised. We describe historical highlights and developments and present a state-of-the-art review of FM in cyber security. This review is presented from the point of view of FM practitioners and researchers, commenting on the trends in each of the classes and categories. This is achieved by considering all types of FM and several types of security and safety-critical systems, and by structuring the taxonomy accordingly. The article hence provides a comprehensive overview of FM and techniques available to designers of security-critical systems, simplifying the process of choosing the right tool for the task. The article concludes by summarising the discussion of the review, focusing on best practices, challenges, general future trends, and directions of research within this field.
{"title":"A Survey of Practical Formal Methods for Security","authors":"Tomas Kulik, Brijesh Dongol, Peter Gorm Larsen, Hugo Daniel Macedo, Steve Schneider, Peter W. V. Tran-Jørgensen, James Woodcock","doi":"https://dl.acm.org/doi/full/10.1145/3522582","DOIUrl":"https://doi.org/https://dl.acm.org/doi/full/10.1145/3522582","url":null,"abstract":"<p>In today’s world, critical infrastructure is often controlled by computing systems. This introduces new risks for cyber attacks, which can compromise the security and disrupt the functionality of these systems. It is therefore necessary to build such systems with strong guarantees of resiliency against cyber attacks. One way to achieve this level of assurance is using formal verification, which provides proofs of system compliance with desired cyber security properties. The use of Formal Methods (FM) in aspects of cyber security and safety-critical systems are reviewed in this article. We split FM into the three main classes: theorem proving, model checking, and lightweight FM. To allow the different uses of FM to be compared, we define a common set of terms. We further develop categories based on the type of computing system FM are applied in. Solutions in each class and category are presented, discussed, compared, and summarised. We describe historical highlights and developments and present a state-of-the-art review in the area of FM in cyber security. This review is presented from the point of view of FM practitioners and researchers, commenting on the trends in each of the classes and categories. This is achieved by considering all types of FM, several types of security and safety-critical systems, and by structuring the taxonomy accordingly. The article hence provides a comprehensive overview of FM and techniques available to system designers of security-critical systems, simplifying the process of choosing the right tool for the task. The article concludes by summarising the discussion of the review, focusing on best practices, challenges, general future trends, and directions of research within this field.</p>","PeriodicalId":50432,"journal":{"name":"Formal Aspects of Computing","volume":"49 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2022-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138517761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Verification of Crashsafe Caching in a Virtual File System Switch
Pub Date: 2022-07-05 | DOI: https://dl.acm.org/doi/full/10.1145/3523737
Stefan Bodenmüller, Gerhard Schellhorn, Wolfgang Reif
When developing file systems, caching is a common technique to achieve a performant implementation. Integrating write-back caches is not primarily a problem for functional correctness, but is critical for proving crash safety. Since parts of written data are stored in volatile memory, special care has to be taken when integrating write-back caches to guarantee that a power cut during a running operation leads to a consistent state. This article shows how non-order-preserving caches can be added to a virtual file system switch (VFS) and gives a novel crash-safety criterion matching the characteristics of such caches. Broken down to individual files, a power cut can be explained by constructing an alternative run, where all writes since the last synchronization of that file have written a prefix. VFS caches have been integrated modularly into Flashix, a verified file system for flash memory, and both functional correctness and crash-safety of this extension have been verified with the interactive theorem prover KIV.
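The following Python sketch is a toy model of the crash-safety criterion sketched above: after a simulated power cut, each pending write since the file's last sync persists only some prefix of its data. The class, file layout, and the random prefix choice are invented for illustration; the article's contribution is proving, in KIV, that the actual VFS cache implementation in Flashix only ever leaves states explainable in this way.

```python
# Toy model of the per-file crash-safety criterion (not the verified Flashix
# code): after a power cut, the file state can be explained by a run in which
# every write since the last sync persisted only a prefix of its data,
# independently of the other writes, as a non-order-preserving write-back
# cache permits.
import random

class CachedFile:
    def __init__(self, size=16):
        self.persistent = bytearray(size)    # contents on flash (zero-filled)
        self.pending = []                     # (offset, data) buffered in RAM

    def write(self, offset, data: bytes):
        self.pending.append((offset, bytes(data)))   # volatile until sync

    def sync(self):
        for offset, data in self.pending:
            self.persistent[offset:offset + len(data)] = data
        self.pending.clear()

    def crash(self):
        # Simulate the "alternative run" of the criterion: each pending write
        # independently persists some prefix of its data; the rest is lost
        # together with the volatile cache.
        for offset, data in self.pending:
            kept = random.randint(0, len(data))
            self.persistent[offset:offset + kept] = data[:kept]
        self.pending.clear()

f = CachedFile()
f.write(0, b"HEADER"); f.sync()        # "HEADER" is durable
f.write(6, b"PAYLOAD"); f.write(13, b"END")
f.crash()
print(bytes(f.persistent))             # e.g. b"HEADERPAYL\x00\x00\x00EN\x00"
```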
{"title":"Verification of Crashsafe Caching in a Virtual File System Switch","authors":"Stefan Bodenmüller, Gerhard Schellhorn, Wolfgang Reif","doi":"https://dl.acm.org/doi/full/10.1145/3523737","DOIUrl":"https://doi.org/https://dl.acm.org/doi/full/10.1145/3523737","url":null,"abstract":"<p>When developing file systems, caching is a common technique to achieve a performant implementation. Integrating write-back caches is not primarily a problem for functional correctness, but is critical for proving crash safety. Since parts of written data are stored in volatile memory, special care has to be taken when integrating write-back caches to guarantee that a power cut during a running operation leads to a consistent state. This article shows how non-order-preserving caches can be added to a virtual file system switch (VFS) and gives a novel crash-safety criterion matching the characteristics of such caches. Broken down to individual files, a power cut can be explained by constructing an alternative run, where all writes since the last synchronization of that file have written a prefix. VFS caches have been integrated modularly into Flashix, a verified file system for flash memory, and both functional correctness and crash-safety of this extension have been verified with the interactive theorem prover KIV.</p>","PeriodicalId":50432,"journal":{"name":"Formal Aspects of Computing","volume":"176 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2022-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138517769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}