The Development and Deployment of Formal Methods in the UK
Cliff B. Jones, Martyn Thomas
Pub Date : 2022-07-05  DOI: https://dl.acm.org/doi/full/10.1145/3522577
In addition to the major UK contributions to research underpinning formal approaches to the specification and development of computer systems, and perhaps as a consequence of this, some significant attempts to deploy the ideas into practical environments have taken place in the United Kingdom. The authors of this article have been involved in formal methods for many years and both have had contact with a significant proportion of this history. This article both lists key ideas and indicates where attempts were made to use the ideas in practice. Not all of these deployment stories have been a complete success, and an attempt is made to tease out lessons that influence the probability of successful long-term changes to software engineering.
{"title":"The Development and Deployment of Formal Methods in the UK","authors":"Cliff B. Jones, Martyn Thomas","doi":"https://dl.acm.org/doi/full/10.1145/3522577","DOIUrl":"https://doi.org/https://dl.acm.org/doi/full/10.1145/3522577","url":null,"abstract":"<p>In addition to the major UK contributions to research underpinning formal approaches to the specification and development of computer systems—and perhaps as a consequence of this—some significant attempts to deploy the ideas into practical environments have taken place in the United Kingdom. The authors of this article have been involved in formal methods for many years and both had contact with a significant proportion of this history. This article both lists key ideas and indicates where attempts were made to use the ideas in practice. Not all of these deployment stories have been a complete success and an attempt is made to tease out lessons that influence the probability of successful long-term changes to software engineering.</p>","PeriodicalId":50432,"journal":{"name":"Formal Aspects of Computing","volume":"09 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2022-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138517770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Review on Modelling and Verification of Secure Exams
Diego Marmsoler
Pub Date : 2022-06-30  DOI: https://doi.org/10.1145/3545182
Exams are an important way of assessing people's skills and, as such, play a key role in establishing meritocracy in modern societies. To be effective, however, exams need to be fair and secure against tampering, which is where Rosario Giustolisi's book "Modelling and Verification of Secure Exams" [5] comes to the rescue. Over 133 pages, the book describes how to formalize and verify various types of exams. It is best suited for an audience with a background in formal methods and security who want to learn how formal methods can be used for the design and analysis of secure exam protocols. The book provides a nice overview of the key elements of different types of exams, leading to a good understanding of exam protocols in general. To start with, the book introduces basic aspects of an exam, such as roles and principals, phases, and potential threats. In addition, different types of exams, such as traditional, computer-assisted, computer-based, internet-assisted, and internet-based, are identified and briefly discussed. The book then shows how to formally model an exam in the applied pi-calculus [1] as the concurrent execution of different types of processes, such as candidates, examiners, the question committee, the collector, and the remaining authorities. Of particular interest are the various security requirements identified for exams and the way they are formalized. To this end, the book describes three types of security requirements: Authentication is formalized in terms of correspondence properties of the form "if a certain event happens, then another event must have happened before". Privacy requirements are formalized as a special kind of bisimilarity requirement. To formalize verifiability requirements, the author first introduces an alternative definition of an exam (compared to the process-algebraic one) based on basic set theory; verifiability is then formulated as a predicate-logic formula over this model. Here, clarity is diminished by the fact that the book uses a mix of two different formalisms: while the applied pi-calculus underpins the authentication and privacy requirements, verifiability is expressed over the separate set-theoretic model.
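The correspondence properties used for authentication have a simple operational reading. The book works in the applied pi-calculus, where tools such as ProVerif check such properties symbolically; as a minimal, purely illustrative sketch, the toy checker below tests the same shape of property on concrete event traces. The trace encoding and event names are my own, not the book's.

    def holds_correspondence(trace, before, after):
        """Check: every occurrence of `after` is preceded by `before`."""
        seen_before = False
        for event in trace:
            if event == before:
                seen_before = True
            elif event == after and not seen_before:
                return False
        return True

    # An exam-flavoured example (hypothetical events): a mark may only be
    # notified after the corresponding answer was accepted by the collector.
    ok_trace = ["answer_accepted", "mark_notified"]
    bad_trace = ["mark_notified", "answer_accepted"]
    assert holds_correspondence(ok_trace, "answer_accepted", "mark_notified")
    assert not holds_correspondence(bad_trace, "answer_accepted", "mark_notified")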
{"title":"Review on Modelling and Verification of Secure Exams","authors":"Diego Marmsoler","doi":"10.1145/3545182","DOIUrl":"https://doi.org/10.1145/3545182","url":null,"abstract":"Exams are an important way for assessing people’s skills and, as such, play a key role in establishing meritocracy in modern societies. To be effective, however, exams need to be fair and secure against tampering which is where Rosario Giustolisi’s book “Modelling and Verification of Secure Exams” [5] comes to a rescue. Over 133 pages, the book describes how to formalize and verify various types of exams. It is best suited for an audience with background in formal methods as well as security wanting to learn more about how formal methods can be used for the design and analysis of secure exam protocols. The book provides a nice overview of key elements of different types of exams, leading to a good understanding of exam protocols in general. To start with, the book introduces basic aspects of an exam, such as roles and principals, phases, and potential threats. In addition, different types of exams, such as traditional, computer-assisted, computer-based, internet-assisted, and internet-based, are identified and briefly discussed. The book even shows how to formally model an exam in the applied pi-calculus [1] as the concurrent execution of different types of processes, such as candidates, examiners, question committee, collector, and remaining authorities. Of particular interest are the various security requirements identified for exams and the way they are formalized. To this end, the book describes three types of security requirements: Authentication is formalized in terms of correspondence properties of the form “if a certain event happens then another event must have happened before”. Privacy requirements are formalized as special kind of bisimilarity requirements. To formalize verifiability requirements, the author first introduces an alternative definition of an exam (compared to the process algebraic one) based on basic set theory. Verifiability is then formulated as a predicate-logic formula over the model. Here, clarity is diminished by the fact that the book uses a mix of two different formalisms: while the applied","PeriodicalId":50432,"journal":{"name":"Formal Aspects of Computing","volume":" ","pages":"1 - 3"},"PeriodicalIF":1.0,"publicationDate":"2022-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45793080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Review of Formal Methods: An Appetizer
G. Michaelson
Pub Date : 2022-06-30  DOI: https://doi.org/10.1145/3545181
Programming is still mostly undisciplined, 45 years after Edsger Dijkstra's "A Discipline of Programming" [1]. For sure, in critical areas like aerospace, communications, and silicon fabrication, rigorous approaches are standard. But the vast majority of the software that underpins all aspects of everyday life is still crafted by one or more hands, with assurance given by design, inspection, and testing, rather than by proof or refinement from a formal specification. There are lots of reasons for this but, most important, hand-crafted software works. Certainly, set against how utterly dependent we now are on software, the number of people who have died as a direct result of failure is vanishingly small. Long may this continue. But, by analogy with other engineering professions, it will take a major disaster to make formal software design and validation mandatory, most likely driven by a legal requirement for practitioner indemnity insurance. Thus, facility with formality is rarely a prerequisite for employment. This is really unfortunate: a demonstrable understanding of foundations should give reassurance of competence at practical programming. Thankfully, most Computer Science programmes include discrete mathematics and computability theory, often alongside declarative programming. Formality may be challenging, but it need not be hard. I have long admired the Nielsons' pedagogy of presenting formal material through the systematic calculation of concrete examples, well exemplified by their excellent introduction to semantics [2]. Their engaging new book is a direct descendant of Dijkstra's. The first two chapters present program graphs as abstract representations of programs, and Dijkstra's Guarded Command language as a source for program graphs.
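To make the central representation concrete: in a program graph, nodes are program points and edges carry the actions (guard tests or assignments) executed when traversing them. Here is a minimal sketch, in my own hypothetical encoding rather than the book's notation, of the guarded command "if x >= y -> m := x [] y >= x -> m := y fi" as a program graph.

    # Edges: (source, kind, effect, target). A "guard" edge may only be
    # taken when its effect returns True; an "assign" edge updates the state.
    edges = [
        ("q0", "guard", lambda s: s["x"] >= s["y"], "q1"),
        ("q0", "guard", lambda s: s["y"] >= s["x"], "q2"),
        ("q1", "assign", lambda s: {**s, "m": s["x"]}, "qf"),
        ("q2", "assign", lambda s: {**s, "m": s["y"]}, "qf"),
    ]

    def step(node, state):
        """Return all successor (node, state) pairs enabled at `node`."""
        succs = []
        for src, kind, f, tgt in edges:
            if src == node:
                if kind == "guard":
                    if f(state):
                        succs.append((tgt, state))
                else:
                    succs.append((tgt, f(state)))
        return succs

    # One execution computing max(x, y); when x == y both guards are
    # enabled and the nondeterministic choice is resolved arbitrarily.
    node, state = "q0", {"x": 3, "y": 7}
    while node != "qf":
        node, state = step(node, state)[0]
    print(state["m"])   # -> 7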
{"title":"Review of Formal Methods: An Appetizer","authors":"G. Michaelson","doi":"10.1145/3545181","DOIUrl":"https://doi.org/10.1145/3545181","url":null,"abstract":"Programming is still mostly undisciplined, 45 years after Edsger Dijkstra’s “A Discipline of Programming” [1]. For sure, in critical areas, like aerospace, communications, and silicon fabrication, rigorous approached are standard. But the vast majority of the software that underpins all aspects of everyday life is still crafted by one or more hands, with assurance given by design, inspection, and testing, rather than proof or refinement from a formal specification. There are lots of reasons for this, but, most important, hand crafted software works. Certainly, set against how utterly dependent we now are on software, the number of people who have died as a direct result of failure is vanishingly small. Long may this continue. But, by analogy with other engineering professions, it will take a major disaster to make formal software design and validation mandatory, most likely driven by a legal requirement for practitioner indemnity insurance. Thus, facility with formality is rarely a prerequisite for employment. This is really unfortunate: a demonstrable understanding of foundations should give reassurance of competence at practical programming. Thankfully, most Computer Science programmes include discrete mathematics and computability theory, often alongside declarative programming. Formality may be challenging, but it need not be hard. I have long admired the Nielsons’ pedagogy of presenting formal material through the systematic calculation of concrete examples, well exemplified by their excellent introduction to semantics [2]. Their engaging new book is a direct descendant of Dijkstra’s. The first two chapters present program graphs as abstract representations of programs, and Dijkstra’s Guarded Command language as a source for program graphs.","PeriodicalId":50432,"journal":{"name":"Formal Aspects of Computing","volume":"34 1","pages":"1 - 2"},"PeriodicalIF":1.0,"publicationDate":"2022-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45851621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Principles of Abstract Interpretation
R. Wilhelm
Pub Date : 2022-06-30  DOI: https://doi.org/10.1145/3546953
Some years ago, the author of the reviewed book and the author of this review shared a ride in the shuttle from Grenoble to Lyon Airport. The author-to-be told the reviewer-to-be about the state of his book project: the existing draft had 1,000 pages. The reviewer explained to the author that, in his deep insight into the nature of things and his long-term experience with textbooks, there were books that improve the world and there were books that are being read. The author asked for confirmation whether the reviewer felt that the coming book would belong to the books that improve the world. Let me skip how I got myself out of this difficult situation. When asked whether a book will be read, the question is by whom. Citing the author, this book is intended for readers interested in the theory of abstract interpretation, the understanding of formal methods, and the design of verifiers and static analyzers. And my answer is: it is a must-read for these groups of people. To make one thing clear from the beginning: this reviewer need not be convinced of the value of Abstract Interpretation, as his greatest scientific achievements [5, 6, 8, 10] are based on the foundational work on Abstract Interpretation by Patrick and Radhia Cousot. Static analyses are distinct from model checking, which verifies the correctness of a separate external specification of a program [9]. In model checking, a user supplies, at the same time, the program to be verified and the logical expression or automaton against which the program is to be checked. In static analysis there are two distinct times: a time when an abstract interpreter is designed, with certain facts in mind to be extracted from a class of programs, and a time when the abstract interpreter is applied by programmers or verification engineers to extract these types of facts from particular programs. This enables a fruitful division of work. The first phase, the design of non-trivial abstract interpreters, needs highly competent specialists, while the second phase is easier, although sometimes also non-trivial. The designer needs to identify abstract domains that represent the properties to be extracted.
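As a minimal illustration of this division of labour (my own toy example, not one from the book): the designer fixes an abstract domain and its abstract operations once; users then run the resulting interpreter on particular expressions. Below, the domain is the classic sign domain.

    # Toy abstract interpreter over the sign domain {-, 0, +, ?}.
    # "Designer" work: the domain and the abstract operation tables.
    # "User" work: running eval_abs on particular expressions.
    NEG, ZERO, POS, TOP = "-", "0", "+", "?"

    def abs_of(n):
        return ZERO if n == 0 else (POS if n > 0 else NEG)

    def mul(a, b):
        if ZERO in (a, b):
            return ZERO
        if TOP in (a, b):
            return TOP
        return POS if a == b else NEG

    def add(a, b):
        if a == ZERO: return b
        if b == ZERO: return a
        if a == b:    return a
        return TOP                       # (+) + (-) could be anything

    def eval_abs(expr, env):
        """Evaluate ('mul'|'add', l, r) trees; leaves are ints or variable names."""
        if isinstance(expr, int):
            return abs_of(expr)
        if isinstance(expr, str):
            return env[expr]
        op, l, r = expr
        return (mul if op == "mul" else add)(eval_abs(l, env), eval_abs(r, env))

    # With x known positive, x*x + 1 is derived positive:
    print(eval_abs(("add", ("mul", "x", "x"), 1), {"x": POS}))   # -> +
    # With x unknown, the analysis soundly over-approximates:
    print(eval_abs(("add", ("mul", "x", "x"), 1), {"x": TOP}))   # -> ?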
{"title":"Principles of Abstract Interpretation","authors":"R. Wilhelm","doi":"10.1145/3546953","DOIUrl":"https://doi.org/10.1145/3546953","url":null,"abstract":"Some years ago, the author of the reviewed book and the author of the review shared a ride in the shuttle from Grenoble to Lyon Airport. The author-to-be told the reviewer-to-be about the state of his book project. The existing draft had 1,000 pages. The reviewer explained to the author that in his deep insight into the nature of things and his long-term experience with textbooks there were books that improve the world and there were books that are being read. The author asked for confirmation whether the reviewer felt that the coming book would belong to those books that improve the world. Let me skip how I got myself out of this difficult situation. When asked whether a book will be read, the question is by whom. Citing the author, this book is intended for readers interested in the theory of abstract interpretation, the understanding of formal methods, and the design of verifiers and static analyzers. And my answer is, it is a must read for these groups of people. To make one thing clear from the beginning: This reviewer need not be convinced of the value of Abstract Interpretation as his greatest scientific achievements [5, 6, 8, 10] are based on the foundational work on Abstract Interpretation by Patrick and Radhia Cousot. Static analyses are distinct from . . .model checking, which verifies the correctness of a separate external specification of a program [9]. In model checking, a user supplies the program to be verified and the logical expression or the automaton against which the program is to be checked at the same time. In static analysis there are distinct times, a time when an abstract interpreter is designed with certain facts in mind to be extracted from a class of programs, and there is a time when the abstract interpreter is applied by programmers or verification engineers to extract these type of facts from particular programs. This enables a fruitful division of work. The first phase, the design of non-trivial abstract interpreters needs highly competent specialists, while the second phase is easier, although sometimes also non-trivial. The designer needs to identify abstract domains to","PeriodicalId":50432,"journal":{"name":"Formal Aspects of Computing","volume":" ","pages":"1 - 3"},"PeriodicalIF":1.0,"publicationDate":"2022-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48900944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Termination and Expressiveness of Execution Strategies for Networks of Bidirectional Model Transformations
Heiko Klare, Joshua Gleitze
Pub Date : 2022-06-24  DOI: https://doi.org/10.1145/3543845
When developers describe a software system with multiple models, such as architecture diagrams, deployment descriptions, and source code, these models must represent the system in a uniform way, i.e., they must be and stay consistent. One means to automatically preserve consistency after changes to models is model transformations, of which bidirectional transformations that preserve consistency between two models have been well researched. To preserve consistency between multiple models, such transformations can be combined into networks. When transformations are developed independently and reused modularly, the resulting network can have an arbitrary topology. For such networks, no universal strategy exists to orchestrate the execution of transformations such that the resulting models are consistent. In this article, we prove that termination of such a strategy can only be guaranteed if it is incomplete, i.e., if it is allowed to fail to restore consistency for some changes even though an execution order of transformations exists that yields consistent models. We propose such a strategy, for which we prove termination, and show how it makes it easier for users of model transformation networks to understand the reasons whenever the strategy fails. In addition, we provide a simulator for the comparison of different execution strategies. These findings help transformation developers and users understand when and why they can expect the execution of a transformation network to terminate and when they can even expect it to succeed. Furthermore, the proposed strategy guarantees termination and supports them in finding the reason whenever it is not successful.
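A minimal sketch of the orchestration problem (my own illustration; the article's provably terminating strategy is more refined): each binary transformation repairs consistency between a pair of models, and an executor keeps re-running all transformations until nothing changes or a round budget is exhausted. Giving up within the budget is exactly the kind of incompleteness the article proves any terminating strategy must accept.

    # Toy orchestration of a network of bidirectional transformations.
    # Model contents and sync rules are hypothetical stand-ins.
    models = {"arch": 1, "deploy": 0, "code": 0}

    def sync(a, b, state):
        """Propagate the larger value -- a stand-in for consistency repair."""
        v = max(state[a], state[b])
        changed = (state[a] != v) or (state[b] != v)
        state[a] = state[b] = v
        return changed

    network = [("arch", "deploy"), ("deploy", "code"), ("code", "arch")]

    def execute(state, max_rounds=10):
        for _ in range(max_rounds):
            # run every transformation once per round (list forces all calls)
            if not any([sync(a, b, state) for a, b in network]):
                return True        # fixpoint: all pairs consistent
        return False               # gave up: incomplete but terminating

    print(execute(models), models)
    # -> True {'arch': 1, 'deploy': 1, 'code': 1}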
{"title":"Termination and Expressiveness of Execution Strategies for Networks of Bidirectional Model Transformations","authors":"Heiko Klare, Joshua Gleitze","doi":"10.1145/3543845","DOIUrl":"https://doi.org/10.1145/3543845","url":null,"abstract":"When developers describe a software system with multiple models, such as architecture diagrams, deployment descriptions, and source code, these models must represent the system in a uniform way, i.e., they must be and stay consistent. One means to automatically preserve consistency after changes to models are model transformations, of which bidirectional transformations that preserve consistency between two models have been well researched. To preserve consistency between multiple models, such transformations can be combined to networks. When transformations are developed independently and reused modularly, the resulting network can be of arbitrary topology. For such networks, no universal strategy exists to orchestrate the execution of transformations such that the resulting models are consistent. In this article, we prove that termination of such a strategy can only be guaranteed if it is incomplete, i.e., if it is allowed to fail to restore consistency for some changes although an execution order of transformations exists that yields consistent models. We propose such a strategy, for which we prove termination and show that and why it makes it easier for users of model transformation networks to understand the reasons whenever the strategy fails. In addition, we provide a simulator for the comparison of different execution strategies. These findings help transformation developers and users in understanding when and why they can expect the execution of a transformation network to terminate and when they can even expect it to succeed. Furthermore, the proposed strategy guarantees them termination and supports them in finding the reason whenever it is not successful.","PeriodicalId":50432,"journal":{"name":"Formal Aspects of Computing","volume":"35 1","pages":"1 - 35"},"PeriodicalIF":1.0,"publicationDate":"2022-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46991276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Formal Verification of Transcendental Fixed- and Floating-point Algorithms using an Automatic Theorem Prover
Samuel Coward, Lawrence Charles Paulson, Theo Drane, Emiliano Morini
Pub Date : 2022-06-13  DOI: https://doi.org/10.1145/3543670
We present a method for formal verification of transcendental hardware and software algorithms that scales to higher precision without suffering an exponential growth in runtimes. A class of implementations using piecewise polynomial approximation to compute the result is verified using MetiTarski, an automated theorem prover, which verifies a range of inputs for each call. The method was applied to commercial implementations from Cadence Design Systems with significant runtime gains over exhaustive testing methods and was successful in proving that the expected accuracy of one implementation was overly optimistic. Reproducing the verification of a sine implementation in software, previously done using an alternative theorem-proving technique, demonstrates that the MetiTarski approach is a viable competitor. Verification of a 52-bit implementation of the square root function highlights the method’s high-precision capabilities.
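MetiTarski establishes such bounds by proof over the reals; as a rough intuition for what is being certified (my own sketch, unrelated to the Cadence designs), the claim for one segment of a piecewise approximation is that the polynomial stays within a stated error of the transcendental function for all inputs in the segment. The dense sampling below only gives numeric evidence of that claim and is purely illustrative.

    import math

    # Degree-3 Taylor segment for sin on [0, 0.5] (hypothetical design choice).
    def poly(x):
        return x - x**3 / 6.0

    eps = 3e-4      # claimed error bound for this segment
    worst = max(abs(poly(x) - math.sin(x))
                for x in (i * 0.5 / 10_000 for i in range(10_001)))
    print(f"worst sampled error {worst:.2e}, bound holds: {worst <= eps}")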
{"title":"Formal Verification of Transcendental Fixed- and Floating-point Algorithms using an Automatic Theorem Prover","authors":"Samuel Coward, Lawrence Charles Paulson, Theo Drane, Emiliano Morini","doi":"10.1145/3543670","DOIUrl":"https://doi.org/10.1145/3543670","url":null,"abstract":"We present a method for formal verification of transcendental hardware and software algorithms that scales to higher precision without suffering an exponential growth in runtimes. A class of implementations using piecewise polynomial approximation to compute the result is verified using MetiTarski, an automated theorem prover, which verifies a range of inputs for each call. The method was applied to commercial implementations from Cadence Design Systems with significant runtime gains over exhaustive testing methods and was successful in proving that the expected accuracy of one implementation was overly optimistic. Reproducing the verification of a sine implementation in software, previously done using an alternative theorem-proving technique, demonstrates that the MetiTarski approach is a viable competitor. Verification of a 52-bit implementation of the square root function highlights the method’s high-precision capabilities.","PeriodicalId":50432,"journal":{"name":"Formal Aspects of Computing","volume":"34 1","pages":"1 - 22"},"PeriodicalIF":1.0,"publicationDate":"2022-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47722073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Debugging Game for Probabilistic Models
Hichem Debbi
Pub Date : 2022-05-15  DOI: https://doi.org/10.1145/3536429
One of the major advantages of model checking over other formal methods is its ability to generate a counterexample when a model does not satisfy its specification. A counterexample is an error trace that helps to locate the source of the error; it therefore represents a valuable tool for debugging. In Probabilistic Model Checking (PMC), the task of counterexample generation has a quantitative aspect. Unlike the methods proposed for conventional model checking, which generate the counterexample as a single path ending in a bad state representing the failure, the task in PMC is completely different: a counterexample in PMC is a set of evidences, or diagnostic paths, that satisfy a path formula and whose probability mass violates the probability threshold. Counterexample generation alone is not sufficient for finding the exact source of the error. Therefore, in conventional model checking, many debugging techniques have been proposed that act on the generated counterexamples to locate the source of the error. In PMC, debugging counterexamples is more challenging, since the probabilistic counterexample consists of multiple paths and is quantified with probabilities. In this article, we propose a debugging technique based on stochastic games to analyze probabilistic counterexamples generated for probabilistic models described as Markov chains in the PRISM language. The technique is based mainly on the idea of considering the modules composing the system as players of a reachability game, whose actions contribute to the evolution of the game. Through many case studies, we show that our technique is very effective for systems employing multiple components. The results are also validated by introducing a debugging tool called GEPCX (Game Explainer of Probabilistic Counterexamples).
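A small sketch of what a probabilistic counterexample is (hypothetical chain, not from the article): enumerate the most probable paths reaching a bad state until their accumulated probability mass refutes a property such as "the probability of reaching 'bad' is below 0.1".

    import heapq

    dtmc = {                     # state -> [(successor, probability), ...]
        "s0": [("s1", 0.5), ("ok", 0.5)],
        "s1": [("bad", 0.3), ("ok", 0.7)],
        "ok": [], "bad": [],
    }

    def counterexample(start, target, threshold):
        """Best-first search for paths to `target`; stop once mass > threshold."""
        paths, mass = [], 0.0
        heap = [(-1.0, [start])]         # negated probability: max-heap order
        while heap and mass <= threshold:
            neg_p, path = heapq.heappop(heap)
            p = -neg_p
            if path[-1] == target:
                paths.append((path, p))  # one more diagnostic path
                mass += p
            else:
                for succ, q in dtmc[path[-1]]:
                    heapq.heappush(heap, (-p * q, path + [succ]))
        return paths, mass

    paths, mass = counterexample("s0", "bad", 0.1)
    print(paths, mass)   # [(['s0', 's1', 'bad'], 0.15)] 0.15 -> threshold violated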
{"title":"A Debugging Game for Probabilistic Models","authors":"Hichem Debbi","doi":"10.1145/3536429","DOIUrl":"https://doi.org/10.1145/3536429","url":null,"abstract":"One of the major advantages of model checking over other formal methods is its ability to generate a counterexample when a model does not satisfy is its specification. A counterexample is an error trace that helps to locate the source of the error. Therefore, the counterexample represents a valuable tool for debugging. In Probabilistic Model Checking (PMC), the task of counterexample generation has a quantitative aspect. Unlike the previous methods proposed for conventional model checking that generate the counterexample as a single path ending with a bad state representing the failure, the task in PMC is completely different. A counterexample in PMC is a set of evidences or diagnostic paths that satisfy a path formula, whose probability mass violates the probability threshold. Counterexample generation is not sufficient for finding the exact source of the error. Therefore, in conventional model checking, many debugging techniques have been proposed to act on the counterexamples generated to locate the source of the error. In PMC, debugging counterexamples is more challenging, since the probabilistic counterexample consists of multiple paths and it is probabilistic. In this article, we propose a debugging technique based on stochastic games to analyze probabilistic counterexamples generated for probabilistic models described as Markov chains in PRISM language. The technique is based mainly on the idea of considering the modules composing the system as players of a reachability game, whose actions contribute to the evolution of the game. Through many case studies, we will show that our technique is very effective for systems employing multiple components. The results are also validated by introducing a debugging tool called GEPCX (Game Explainer of Probabilistic Counterexamples).","PeriodicalId":50432,"journal":{"name":"Formal Aspects of Computing","volume":"34 1","pages":"1 - 25"},"PeriodicalIF":1.0,"publicationDate":"2022-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43494912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast Automated Abstract Machine Repair Using Simultaneous Modifications and Refactoring
Chenghao Cai, Jing Sun, G. Dobbie, Zhé Hóu, Hadrien Bride, J. Dong, S. Lee
Pub Date : 2022-05-14  DOI: https://doi.org/10.1145/3536430
Automated model repair techniques enable machines to synthesise patches that ensure models meet given requirements. B-repair, an existing model repair approach, assists users in repairing erroneous models in the B formal method, but repairing large models is inefficient due to successive applications of repair. In this work, we improve the performance of B-repair using simultaneous modifications, repair refactoring, and better classifiers. The simultaneous modifications can eliminate multiple invariant violations at a time, so the average time to repair each fault can be reduced. Further, the modifications can be refactored to reduce the length of the repair. The purpose of using better classifiers is to perform more accurate and general repairs and to avoid inefficient brute-force searches. We conducted an empirical study demonstrating that the improved implementation leads to the entire repair process achieving higher accuracy, generality, and efficiency.
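A toy sketch of the "simultaneous modifications" idea (hypothetical state-and-invariant setup, not the B-repair algorithm itself): instead of fixing one violated invariant per repair round, collect every violated invariant and apply all the corresponding modifications in a single pass.

    # Invariants and fixes are hypothetical stand-ins for B-machine
    # invariants and repair patches.
    invariants = {
        "x_nonneg":   (lambda s: s["x"] >= 0,       lambda s: {**s, "x": 0}),
        "y_below_10": (lambda s: s["y"] < 10,       lambda s: {**s, "y": 9}),
        "x_le_y":     (lambda s: s["x"] <= s["y"],  lambda s: {**s, "x": s["y"]}),
    }

    def repair_simultaneously(state):
        rounds = 0
        while True:
            fixes = [fix for check, fix in invariants.values()
                     if not check(state)]
            if not fixes:
                return state, rounds
            for fix in fixes:          # one round, several modifications
                state = fix(state)
            rounds += 1

    print(repair_simultaneously({"x": -5, "y": 12}))
    # -> ({'x': 0, 'y': 9}, 1): two violations removed in a single round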
{"title":"Fast Automated Abstract Machine Repair Using Simultaneous Modifications and Refactoring","authors":"Chenghao Cai, Jing Sun, G. Dobbie, Zhé Hóu, Hadrien Bride, J. Dong, S. Lee","doi":"10.1145/3536430","DOIUrl":"https://doi.org/10.1145/3536430","url":null,"abstract":"Automated model repair techniques enable machines to synthesise patches that ensure models meet given requirements. B-repair, which is an existing model repair approach, assists users in repairing erroneous models in the B formal method, but repairing large models is inefficient due to successive applications of repair. In this work, we improve the performance of B-repair using simultaneous modifications, repair refactoring, and better classifiers. The simultaneous modifications can eliminate multiple invariant violations at a time so the average time to repair each fault can be reduced. Further, the modifications can be refactored to reduce the length of repair. The purpose of using better classifiers is to perform more accurate and general repairs and avoid inefficient brute-force searches. We conducted an empirical study to demonstrate that the improved implementation leads to the entire model process achieving higher accuracy, generality, and efficiency.","PeriodicalId":50432,"journal":{"name":"Formal Aspects of Computing","volume":"34 1","pages":"1 - 31"},"PeriodicalIF":1.0,"publicationDate":"2022-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44005672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Case in Point: Verification and Testing of a EULYNX Interface
M. Bouwman, Djurre van der Wal, B. Luttik, M. Stoelinga, A. Rensink
Pub Date : 2022-05-10  DOI: https://doi.org/10.1145/3528207
We present a case study on the application of formal methods in the railway domain. The case study is part of the FormaSig project, which aims to support the development of EULYNX, a European standard defining generic interfaces for railway equipment, using formal methods. We translate the semi-formal SysML models created within EULYNX to formal mCRL2 models. By adopting a model-centric approach, in which a formal model is used both for analyzing the quality of the EULYNX specification and for automated compliance testing, a high degree of traceability is achieved. The target of our case study is the EULYNX Point subsystem interface. We present a detailed catalog of the safety requirements and provide counterexamples showing that some of them do not hold without specific fairness assumptions. We also use the mCRL2 model to generate both random and guided tests, which we apply to a third-party software simulator. We share metrics on the coverage and execution time of the tests, which show that guided testing outperforms random testing. The test results indicate several discrepancies between the model and the simulator. One of these discrepancies is caused by a fault in the simulator; the others are caused by false positives, i.e., an over-approximation of fail verdicts by our test setup.
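To illustrate why guided test generation tends to beat random generation on coverage (a generic state-machine example of mine, not the EULYNX Point model or the mCRL2 tooling): a guided generator steers each step toward transitions not yet exercised.

    import random

    fsm = {  # state -> {input: next_state}; a hypothetical stand-in machine
        "idle":    {"move_left": "moving", "move_right": "moving"},
        "moving":  {"end_position": "locked", "obstacle": "failure"},
        "locked":  {"move_left": "moving", "move_right": "moving"},
        "failure": {"reset": "idle"},
    }
    all_transitions = {(s, i) for s, m in fsm.items() for i in m}

    def coverage(steps, guided, seed=7):
        rng, covered, state = random.Random(seed), set(), "idle"
        for _ in range(steps):
            options = list(fsm[state].items())
            if guided:  # prefer transitions not exercised yet
                fresh = [o for o in options if (state, o[0]) not in covered]
                options = fresh or options
            inp, nxt = rng.choice(options)
            covered.add((state, inp))
            state = nxt
        return len(covered) / len(all_transitions)

    # With the same step budget, guided selection typically covers more:
    print(f"random: {coverage(8, guided=False):.0%}",
          f"guided: {coverage(8, guided=True):.0%}")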
{"title":"A Case in Point: Verification and Testing of a EULYNX Interface","authors":"M. Bouwman, Djurre van der Wal, B. Luttik, M. Stoelinga, A. Rensink","doi":"10.1145/3528207","DOIUrl":"https://doi.org/10.1145/3528207","url":null,"abstract":"We present a case study on the application of formal methods in the railway domain. The case study is part of the FormaSig project, which aims to support the development of EULYNX — a European standard defining generic interfaces for railway equipment — using formal methods. We translate the semi-formal SysML models created within EULYNX to formal mCRL2 models. By adopting a model-centric approach in which a formal model is used both for analyzing the quality of the EULYNX specification and for automated compliance testing, a high degree of traceability is achieved. The target of our case study is the EULYNX Point subsystem interface. We present a detailed catalog of the safety requirements, and provide counterexamples that show that some of them do not hold without specific fairness assumptions. We also use the mCRL2 model to generate both random and guided tests, which we apply to a third-party software simulator. We share metrics on the coverage and execution time of the tests, which show that guided testing outperforms random testing. The test results indicate several discrepancies between the model and the simulator. One of these discrepancies is caused by a fault in the simulator, the others are caused by false positives, i.e. an over-approximation of fail verdicts by our test setup.","PeriodicalId":50432,"journal":{"name":"Formal Aspects of Computing","volume":"35 1","pages":"1 - 38"},"PeriodicalIF":1.0,"publicationDate":"2022-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47554306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tight Error Analysis in Fixed-point Arithmetic
Stella Simić, A. Bemporad, Omar Inverso, M. Tribastone
Pub Date : 2022-05-04  DOI: https://doi.org/10.1145/3524051
We consider the problem of estimating the numerical accuracy of programs with operations in fixed-point arithmetic and variables of arbitrary, mixed precision and possibly non-deterministic value. By applying a set of parameterised rewrite rules, we transform the relevant fragments of the program under consideration into sequences of operations in integer arithmetic over vectors of bits, thereby reducing the question of whether the error enclosures in the initial program can ever exceed a given order of magnitude to simple reachability queries on the transformed program. We describe a possible verification flow and a prototype analyser that implements our technique. We present an experimental evaluation on a particularly complex industrial case study, including a preliminary comparison between bit-level and word-level decision procedures.
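A minimal sketch of the underlying idea (my own simplification, not the authors' rewrite rules): represent each fixed-point quantity as an integer significand with a number of fractional bits, carry an error enclosure alongside it in the same integer arithmetic, and phrase the accuracy question as whether an "error too large" condition is reachable.

    from dataclasses import dataclass

    @dataclass
    class Fx:
        sig: int    # value = sig / 2**frac
        frac: int   # fractional bits
        err: int    # error enclosure, in units of 2**-frac

    def quantize(x: Fx, frac: int) -> Fx:
        """Drop precision to `frac` bits; truncation adds at most 1 ulp of error."""
        shift = x.frac - frac
        assert shift >= 0
        return Fx(x.sig >> shift, frac, (x.err >> shift) + 1)

    def add(a: Fx, b: Fx) -> Fx:
        assert a.frac == b.frac
        return Fx(a.sig + b.sig, a.frac, a.err + b.err)   # errors accumulate

    def error_exceeds(x: Fx, bound_ulps: int) -> bool:
        """The 'reachability query': can the enclosure exceed the bound?"""
        return x.err > bound_ulps

    a = quantize(Fx(int(0.1 * 2**16), 16, 0), 8)   # 0.1 at 16 bits -> 8 bits
    b = quantize(Fx(int(0.2 * 2**16), 16, 0), 8)
    s = add(a, b)
    print(s.sig / 2**8, "err ulps:", s.err, error_exceeds(s, 4))
    # -> 0.296875 err ulps: 2 False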
{"title":"Tight Error Analysis in Fixed-point Arithmetic","authors":"Stella Simić, A. Bemporad, Omar Inverso, M. Tribastone","doi":"10.1145/3524051","DOIUrl":"https://doi.org/10.1145/3524051","url":null,"abstract":"We consider the problem of estimating the numerical accuracy of programs with operations in fixed-point arithmetic and variables of arbitrary, mixed precision, and possibly non-deterministic value. By applying a set of parameterised rewrite rules, we transform the relevant fragments of the program under consideration into sequences of operations in integer arithmetic over vectors of bits, thereby reducing the problem as to whether the error enclosures in the initial program can ever exceed a given order of magnitude to simple reachability queries on the transformed program. We describe a possible verification flow and a prototype analyser that implements our technique. We present an experimental evaluation on a particularly complex industrial case study, including a preliminary comparison between bit-level and word-level decision procedures.","PeriodicalId":50432,"journal":{"name":"Formal Aspects of Computing","volume":"34 1","pages":"1 - 32"},"PeriodicalIF":1.0,"publicationDate":"2022-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46062211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}