Abstraction Modulo Stability
Anna Becchi, Alessandro Cimatti
Formal Methods in System Design. Pub Date: 2024-09-12. DOI: 10.1007/s10703-024-00461-2
The analysis of legacy systems requires the automated extraction of high-level specifications. We propose a framework, called Abstraction Modulo Stability, for the analysis of transition systems that operate in stable states and respond to external stimuli with run-to-completion transactions. The abstraction captures, in the form of a finite state machine, the effects of external stimuli on the system state. The approach is parametric in a set of predicates of interest and in the definition of stability. We consider several possible stability definitions, which yield different practically relevant abstractions, and propose parametric algorithms for computing the abstraction. The framework is evaluated in terms of expressivity and adequacy within an industrial project with the Italian Railway Network, on the reverse engineering of relay-based interlocking circuits to extract specifications for a computer-based reimplementation.
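The idea can be illustrated on a toy explicit-state system (this is a made-up example, not the paper's algorithm or stability definitions): a state is stable when no internal transition is enabled, and the abstract machine maps each (stable state, stimulus) pair to the stable state reached after the internal run-to-completion steps.

```python
# Toy illustration: abstract an explicit-state system with run-to-completion
# internal steps into a finite state machine over its stable states.
# All states, stimuli, and transitions here are invented for the example.

# Internal (autonomous) transitions: state -> next state.
internal = {"s1": "s2", "s2": "s3"}            # s0 and s3 have no internal step
# External stimuli: (state, stimulus) -> next state.
stimuli = {("s0", "press"): "s1", ("s3", "reset"): "s0"}

def is_stable(s):
    return s not in internal

def run_to_completion(s, limit=100):
    """Follow internal transitions until a stable state is reached (or give up)."""
    for _ in range(limit):
        if is_stable(s):
            return s
        s = internal[s]
    raise RuntimeError("no stable state reached: possible divergence")

def abstract_fsm(states, events):
    """Map each (stable state, stimulus) to the stable state it settles in."""
    fsm = {}
    for s in states:
        if not is_stable(s):
            continue
        for e in events:
            if (s, e) in stimuli:
                fsm[(s, e)] = run_to_completion(stimuli[(s, e)])
    return fsm

fsm = abstract_fsm(["s0", "s1", "s2", "s3"], ["press", "reset"])
print(fsm)   # {('s0', 'press'): 's3', ('s3', 'reset'): 's0'}
```

The pressing of the stimulus from s0 triggers the internal run s1, s2, s3; only the settled state s3 survives in the abstraction, which is exactly the run-to-completion view described above.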
PAC statistical model checking of mean payoff in discrete- and continuous-time MDP
Chaitanya Agarwal, Shibashis Guha, Jan Křetínský, M. Pazhamalai
Formal Methods in System Design. Pub Date: 2024-08-17. DOI: 10.1007/s10703-024-00463-0
Markov decision processes (MDPs) and continuous-time MDPs (CTMDPs) are the fundamental models for non-deterministic systems with probabilistic uncertainty. Mean payoff (also known as long-run average reward) is one of the most classic objectives considered in their context. We provide the first practical algorithm to compute mean payoff probably approximately correctly in unknown MDPs. Our algorithm is anytime in the sense that, if terminated prematurely, it returns an approximate value with the required confidence. Further, we extend it to unknown CTMDPs. We do not require any knowledge of the state space or of the number of successors of a state, but only a lower bound on the minimum transition probability, which has been advocated in the literature. Our algorithm learns the unknown MDP/CTMDP through repeated, directed sampling, thus spending less time on learning components with smaller impact on the mean payoff. In addition to providing probably approximately correct (PAC) bounds for our algorithm, we also demonstrate its practical nature by running experiments on standard benchmarks.
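To make the objective concrete, here is the mean payoff of a tiny *known* two-state Markov chain, computed by power iteration on the stationary distribution. This only illustrates what the quantity is; the paper's contribution, PAC estimation of this value in an *unknown* MDP via directed sampling, is far more involved. The chain and rewards are invented for the example.

```python
# Mean payoff (long-run average reward) of a known two-state Markov chain.
P = [[0.9, 0.1],   # transition matrix: P[i][j] = Pr(i -> j)
     [0.5, 0.5]]
r = [0.0, 1.0]     # state rewards

pi = [1.0, 0.0]    # start in state 0
for _ in range(1000):                      # power iteration to stationarity
    pi = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]

# Stationary distribution is (5/6, 1/6), so the mean payoff is 1/6.
mean_payoff = sum(pi[i] * r[i] for i in range(2))
print(round(mean_payoff, 4))   # 0.1667
```

In an unknown model none of `P` is available, which is why the algorithm above must be replaced by confidence-tracked sampling of transitions.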
A verified durable transactional mutex lock for persistent x86-TSO
Eleni Vafeiadi Bila, Brijesh Dongol
Formal Methods in System Design. Pub Date: 2024-07-31. DOI: 10.1007/s10703-024-00462-1
The advent of non-volatile memory technologies has spurred intensive research interest in correctness and programmability. This paper addresses both by developing and verifying a durable (i.e., persistent) transactional memory (TM) algorithm, dTML_Px86. Correctness of dTML_Px86 is judged in terms of durable opacity, which ensures both failure atomicity (memory consistency after a crash) and opacity (thread safety). We assume a realistic execution model, Px86, which represents Intel's persistent memory model and extends the Total Store Order memory model with instructions that control persistency. Our TM algorithm, dTML_Px86, is an adaptation of an existing software transactional mutex lock, but with additional synchronisation mechanisms to cope with Px86. Our correctness proof is operational and comprises two distinct types of proofs: (1) proofs of invariants of dTML_Px86, and (2) a proof of refinement against an operational specification that guarantees durable opacity. To achieve (1), we build on recent Owicki–Gries logics for Px86, and for (2) we use a simulation-based proof technique which, as far as we are aware, is the first application of simulation-based proofs for Px86 programs. Our entire development has been mechanised in the Isabelle/HOL proof assistant.
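For readers unfamiliar with the base algorithm, the following is a highly simplified, sequential sketch of the *transactional mutex lock* idea that dTML_Px86 builds on: a global version counter that is odd exactly while a writer holds the lock, with readers validating their snapshot of it. Persistency (flushes, fences) and real concurrency, the actual subject of the paper, are omitted entirely, and the interleaving is simulated by hand.

```python
# Sequential caricature of a software transactional mutex lock (TML-style).
glb = 0                 # global version; even = no writer active
memory = {"x": 0}

class Abort(Exception):
    pass

class Tx:
    def begin(self):
        while True:
            self.loc = glb
            if self.loc % 2 == 0:     # wait until no writer is active
                return

    def read(self, a):
        v = memory[a]
        if glb != self.loc:           # a writer intervened: abort
            raise Abort()
        return v

    def write(self, a, v):
        global glb
        if self.loc % 2 == 0:         # first write: acquire the lock
            if glb != self.loc:
                raise Abort()
            glb = self.loc + 1        # becomes odd: we hold the lock
            self.loc += 1
        memory[a] = v                 # write in place (eager updates)

    def commit(self):
        global glb
        if self.loc % 2 == 1:         # we were a writer: release
            glb = self.loc + 1

# Hand-simulated interleaving: a reader begins, then a writer runs to completion.
reader, writer = Tx(), Tx()
reader.begin()
writer.begin()
writer.write("x", 42)                 # writer acquires; glb becomes odd
writer.commit()                       # glb now 2
try:
    reader.read("x")                  # validation sees glb != snapshot
    outcome = "committed"
except Abort:
    outcome = "aborted"
print(outcome, memory["x"])           # aborted 42
```

The durable variant must additionally ensure that the writes reaching memory survive a crash at any point, which is what the Px86 persistency instructions and the durable-opacity proof are about.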
Formally understanding Rust's ownership and borrowing system at the memory level
Shuanglong Kan, Zhe Chen, David Sanán, Yang Liu
Formal Methods in System Design. Pub Date: 2024-07-09. DOI: 10.1007/s10703-024-00460-3
Rust is an emerging systems programming language highlighting memory safety through its Ownership and Borrowing System (OBS). Formalizing OBS in semantics is essential for certifying Rust's memory safety guarantees. Existing formalizations of OBS are at the language level; that is, they explain OBS on Rust's constructs. This paper proposes a different view of OBS at the memory level, independent of Rust's constructs. The basic idea of our formalization is to map the OBS invariants maintained by Rust's type system to memory layouts and to check the invariants on memory operations. Our memory-level formalization of OBS helps people better understand the relationship between OBS and memory safety by narrowing the gap between OBS and memory operations. Moreover, it enables potential reuse of Rust's OBS in other programming languages, since memory operations are standard features and our formalization is not bound to Rust's constructs. Based on the memory model, we have developed an executable operational semantics for Rust, called RustSEM, and implemented the semantics in the K framework (𝕂). RustSEM covers a much larger subset of the significant language constructs than existing formal semantics for Rust. More importantly, RustSEM can run and verify real Rust programs by exploiting 𝕂's execution and verification engines. We have evaluated the semantic correctness of RustSEM with respect to the Rust compiler using around 700 tests. In particular, we have compared our formalization of OBS in the memory model with Rust's type system and identified their differences, due to the conservatism of the Rust compiler. Moreover, our formalization of OBS is helpful in identifying undefined behavior of Rust programs that mix safe and unsafe operations. We have also evaluated the potential applications of RustSEM in automated runtime and formal verification for functional and memory properties. Experimental results show that RustSEM can enhance Rust's memory safety mechanism, as it is more powerful than OBS in the Rust compiler for detecting memory errors.
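The core OBS invariant that such a memory-level view must maintain can be sketched in a few lines (a toy model, not RustSEM): each memory cell carries a borrow state, and every new borrow checks that at any moment a cell has either one exclusive (mutable) borrow or any number of shared ones, never both.

```python
# Toy memory-level borrow discipline: one &mut XOR any number of &.
class BorrowError(Exception):
    pass

class Cell:
    def __init__(self, value):
        self.value = value
        self.shared = 0          # active shared (&) borrows
        self.exclusive = False   # active mutable (&mut) borrow

    def borrow_shared(self):
        if self.exclusive:
            raise BorrowError("already mutably borrowed")
        self.shared += 1

    def borrow_mut(self):
        if self.exclusive or self.shared:
            raise BorrowError("already borrowed")
        self.exclusive = True

    def release_shared(self):
        self.shared -= 1

    def release_mut(self):
        self.exclusive = False

cell = Cell(7)
cell.borrow_shared()             # like &x: fine
cell.borrow_shared()             # a second &x: still fine
try:
    cell.borrow_mut()            # &mut x while shared borrows live: rejected
    ok = True
except BorrowError:
    ok = False
print(ok)   # False
```

Rust's type system enforces this discipline statically; checking it dynamically on memory operations, as above, is what makes the invariant portable to languages without Rust's constructs.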
The hexatope and octatope abstract domains for neural network verification
Stanley Bak, Taylor Dohmen, K. Subramani, Ashutosh Trivedi, Alvaro Velasquez, Piotr Wojciechowski
Formal Methods in System Design. Pub Date: 2024-06-17. DOI: 10.1007/s10703-024-00457-y
Efficient verification algorithms for neural networks often depend on various abstract domains such as intervals, zonotopes, and linear star sets. The choice of the abstract domain presents an expressiveness vs. scalability trade-off: simpler domains are less precise but yield faster algorithms. This paper investigates the hexatope and octatope abstract domains in the context of neural network verification. Hexatopes are affine transformations of higher-dimensional hexagons, defined by difference constraint systems, and octatopes are affine transformations of higher-dimensional octagons, defined by unit-two-variable-per-inequality constraint systems. These domains generalize the idea of zonotopes, which can be viewed as affine transformations of hypercubes. On the other hand, they can be considered a restriction of linear star sets, which are affine transformations of arbitrary H-polytopes. This distinction places hexatopes and octatopes firmly between zonotopes and linear star sets in their expressive power, but what about the efficiency of decision procedures? An important analysis problem for neural networks is the exact range computation problem, which asks to compute the exact set of possible outputs given a set of possible inputs. For this, three computational procedures are needed: (1) optimization of a linear cost function; (2) affine mapping; and (3) over-approximating the intersection with a half-space. While zonotopes allow an efficient solution for these operations, star sets solve them via linear programming. We show that these operations are faster for hexatopes and octatopes than they are for the more expressive linear star sets by reducing the linear optimization problem over these domains to the minimum-cost network flow problem, which can be solved in strongly polynomial time using the Out-of-Kilter algorithm. Evaluating exact range computation on several ACAS Xu neural network benchmarks, we find that hexatopes and octatopes show promise as a practical abstract domain for neural network verification.
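The zonotope baseline that hexatopes and octatopes generalise is easy to show concretely: a zonotope is a centre plus generators, c + Σᵢ εᵢ·gᵢ with εᵢ ∈ [−1, 1], affine maps are exact on this representation, and a bounding interval per output dimension falls out of the generators' absolute values. The 2-D numbers below are invented for the example.

```python
# Zonotope = affine image of a hypercube: center c plus generator vectors.
def affine_map(center, generators, W, b):
    """Image of a zonotope under x -> W x + b (exact for zonotopes)."""
    mat = lambda v: [sum(W[i][j] * v[j] for j in range(len(v)))
                     for i in range(len(W))]
    new_c = [ci + bi for ci, bi in zip(mat(center), b)]
    new_gs = [mat(g) for g in generators]
    return new_c, new_gs

def interval_hull(center, generators):
    """Per-dimension bounds: c_i +/- sum_k |g_k[i]|."""
    rad = [sum(abs(g[i]) for g in generators) for i in range(len(center))]
    return [(c - r, c + r) for c, r in zip(center, rad)]

c, gs = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]     # the unit square
W, b = [[1.0, 1.0], [0.0, 2.0]], [1.0, 0.0]
c2, gs2 = affine_map(c, gs, W, b)
print(interval_hull(c2, gs2))   # [(-1.0, 3.0), (-2.0, 2.0)]
```

Hexatopes and octatopes replace the hypercube input set by difference-constraint and UTVPI polyhedra, which is where the min-cost-flow reduction for linear optimization comes in.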
(Un)Solvable loop analysis
Daneshvar Amrollahi, Ezio Bartocci, George Kenison, Laura Kovács, Marcel Moosbrugger, Miroslav Stankovič
Formal Methods in System Design. Pub Date: 2024-06-11. DOI: 10.1007/s10703-024-00455-0
Automatically generating invariants, key to the computer-aided analysis of probabilistic and deterministic programs and to compiler optimisation, is a challenging open problem. Whilst the problem is in general undecidable, it is settled for restricted classes of loops. For the class of solvable loops, introduced by Rodríguez-Carbonell and Kapur (in: Proceedings of the ISSAC, pp 266–273, 2004), one can automatically compute invariants from closed-form solutions of recurrence equations that model the loop behaviour. In this paper we establish a technique for invariant synthesis for loops that are not solvable, termed unsolvable loops. Our approach automatically partitions the program variables and identifies the so-called defective variables that characterise unsolvability. Herein we consider the following two applications. First, we present a novel technique that automatically synthesises, from defective monomials, polynomials that admit closed-form solutions and thus lead to polynomial loop invariants. Second, given an unsolvable loop, we synthesise solvable loops with the following property: the invariant polynomials of the solvable loops are all invariants of the given unsolvable loop. Our implementation and experiments demonstrate both the feasibility and applicability of our approach to both deterministic and probabilistic programs.
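A minimal example of the solvable case the paper starts from: for the loop body `x += 1; y += x` with both variables starting at 0, the closed forms are x = n and y = n(n+1)/2, and eliminating the loop counter n yields the polynomial invariant 2y = x² + x. The check below confirms both along an execution.

```python
# A toy *solvable* loop: closed forms in the loop counter give a polynomial
# invariant.  Body: x += 1; y += x, from x = y = 0.
x, y = 0, 0
holds = True
for n in range(1, 101):
    x += 1
    y += x
    assert x == n and y == n * (n + 1) // 2      # closed-form solutions
    if 2 * y != x * x + x:                        # candidate invariant
        holds = False
print(holds)   # True
```

Unsolvable loops are exactly those where some (defective) variables admit no such closed form, and the paper's technique looks for polynomial combinations of them that do.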
Divider verification using symbolic computer algebra and delayed don't care optimization: theory and practical implementation
Alexander Konrad, Christoph Scholl, Alireza Mahzoon, Daniel Große, Rolf Drechsler
Formal Methods in System Design. Pub Date: 2024-05-24. DOI: 10.1007/s10703-024-00452-3
Recent methods based on Symbolic Computer Algebra (SCA) have shown great success in formal verification of multipliers and—more recently—of dividers as well. In this paper we enhance known approaches by the computation of satisfiability don't cares for so-called Extended Atomic Blocks (EABs) and by Delayed Don't Care Optimization (DDCO) for optimizing polynomials during backward rewriting. Using those novel methods we are able to extend the applicability of SCA-based methods to further divider architectures which could not be handled by previous approaches. We successfully apply the approach to the fully automatic formal verification of large dividers (with bit widths up to 512).
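Backward rewriting, the core SCA operation the paper optimises, can be shown on a full adder (this toy omits don't cares entirely): start from the spec polynomial 2·cout + s − (a + b + cin), substitute each gate's polynomial for its output, and simplify; the circuit is correct iff the residue is the zero polynomial. Over booleans x² = x, so monomials are just sets of variables.

```python
# Toy SCA-style backward rewriting: verify a full adder against its spec.
# Polynomials are maps  monomial (frozenset of variables) -> coefficient.
def padd(p, q):
    r = dict(p)
    for m, c in q.items():
        r[m] = r.get(m, 0) + c
        if r[m] == 0:
            del r[m]
    return r

def pmul(p, q):
    r = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            m = m1 | m2                    # x*x = x on boolean variables
            r[m] = r.get(m, 0) + c1 * c2
            if r[m] == 0:
                del r[m]
    return r

def var(v):
    return {frozenset([v]): 1}

def scale(p, k):
    return {m: k * c for m, c in p.items()}

def xor(p, q):                             # XOR(p, q) = p + q - 2pq
    return padd(padd(p, q), scale(pmul(p, q), -2))

a, b, cin = var("a"), var("b"), var("cin")
s_poly = xor(xor(a, b), cin)                                      # sum bit
maj = padd(padd(pmul(a, b), pmul(a, cin)),                        # carry out:
           padd(pmul(b, cin), scale(pmul(pmul(a, b), cin), -2)))  # majority

# Spec: 2*cout + s - (a + b + cin); substituting the gate polynomials
# for cout and s must rewrite it to the zero polynomial.
residue = padd(padd(scale(maj, 2), s_poly),
               scale(padd(padd(a, b), cin), -1))
print(residue)   # {} -- the circuit implements the spec
```

On real multipliers and dividers the intermediate polynomials can blow up during rewriting, which is exactly where the paper's EAB don't cares and delayed don't-care optimization intervene.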
Information-flow interfaces
Ezio Bartocci, Thomas Ferrère, Thomas A. Henzinger, Dejan Nickovic, Ana Oliveira da Costa
Formal Methods in System Design. Pub Date: 2024-05-23. DOI: 10.1007/s10703-024-00447-0
Contract-based design is a promising methodology for taming the complexity of developing sophisticated systems. A formal contract distinguishes between assumptions, which are constraints that the designer of a component puts on the environments in which the component can be used safely, and guarantees, which are promises that the designer asks from the team that implements the component. A theory of formal contracts can be formalized as an interface theory, which supports the composition and refinement of both assumptions and guarantees. Although there is a rich landscape of contract-based design methods that address functional and extra-functional properties, we present the first interface theory designed to ensure system-wide security properties. Our framework provides a refinement relation and a composition operation that support both incremental design and independent implementability. We develop our theory for both stateless and stateful interfaces. Additionally, we introduce information-flow contracts where assumptions and guarantees are sets of flow relations. We use these contracts to illustrate how to enrich information-flow interfaces with a semantic view. We illustrate the applicability of our framework with two examples inspired by the automotive domain.
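As a set-based caricature only (the paper's actual interface definitions are richer), the classic direction of contract refinement can be shown with assumptions and guarantees modelled as sets of (source, sink) no-flow requirements: a refining interface may assume no more of its environment and must guarantee no less. All flows below are invented for the example.

```python
# Caricature of contract refinement over no-flow sets: assume less, guarantee more.
def refines(impl, spec):
    """impl refines spec: assumptions shrink, guarantees grow."""
    a1, g1 = impl
    a2, g2 = spec
    return a1 <= a2 and g1 >= g2

spec = ({("key", "net"), ("key", "log")},          # assumed of the environment
        {("key", "display")})                      # guaranteed by the component
impl = ({("key", "net")},                          # weaker assumption
        {("key", "display"), ("pin", "display")})  # stronger guarantee
print(refines(impl, spec))   # True
print(refines(spec, impl))   # False
```

Independent implementability then amounts to refinement being preserved under the composition operation, which the paper establishes for its stateless and stateful interfaces.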
Mining of extended signal temporal logic specifications with ParetoLib 2.0
Akshay Mambakam, José Ignacio Requeno Jarabo, Alexey Bakhirkin, Nicolas Basset, Thao Dang
Formal Methods in System Design. Pub Date: 2024-05-06. DOI: 10.1007/s10703-024-00453-2
Cyber-physical systems are complex environments that combine physical devices (i.e., sensors and actuators) with a software controller. The ubiquity of these systems and the dangers associated with their failure require the implementation of mechanisms to monitor, verify and guarantee their correct behaviour. This paper presents ParetoLib 2.0, a Python tool for offline monitoring and specification mining of cyber-physical systems. ParetoLib 2.0 uses signal temporal logic (STL) as the formalism for specifying properties on time series. ParetoLib 2.0 builds upon other tools for evaluating and mining STL expressions, and extends them with new functionalities. ParetoLib 2.0 implements a set of new quantitative operators for trace analysis in STL, a novel mining algorithm and an original graphical user interface. Additionally, performance is optimised with respect to previous releases of the tool via data-type annotations and multi-core support. ParetoLib 2.0 allows the offline verification of STL properties as well as the specification mining of parametric STL templates. Thanks to the implementation of the new quantitative operators for STL, the tool surpasses similar runtime monitors in expressiveness and capabilities.
Pub Date : 2024-04-04 DOI: 10.1007/s10703-024-00445-2
Sebastian Biewer, Kevin Baum, Sarah Sterz, Holger Hermanns, Sven Hetmank, Markus Langer, Anne Lauber-Rönsberg, Franz Lehr
This article introduces a framework that is meant to assist in mitigating societal risks that software can pose. Concretely, this encompasses facets of software doping as well as unfairness and discrimination in high-risk decision-making systems. The term software doping refers to software that contains surreptitiously added functionality that is against the interest of the user. A prominent example of software doping is the tampered emission cleaning systems that were found in millions of cars around the world when the diesel emissions scandal surfaced. The first part of this article combines the formal foundations of software doping analysis with established probabilistic falsification techniques to arrive at a black-box analysis technique for identifying undesired effects of software. We apply this technique to emission cleaning systems in diesel cars but also to high-risk systems that evaluate humans in a possibly unfair or discriminating way. We demonstrate how our approach can assist humans-in-the-loop in making better informed and more responsible decisions. This promotes effective human oversight, which will be a central requirement enforced by the European Union’s upcoming AI Act. We complement our technical contribution with a juridically, philosophically, and psychologically informed perspective on the potential problems caused by such systems.
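The core idea of black-box probabilistic falsification can be sketched briefly. The "system" and specification below are hypothetical stand-ins, not the paper's actual models: inputs are sampled at random, each is scored against the specification, and the worst-scoring input exposes the hidden behaviour.

```python
import random

# Minimal sketch of black-box falsification by random search. The system
# under test and the emission limit are hypothetical illustrations, not the
# paper's actual models or parameters.

def system_output(ambient_temp):
    # Hypothetical black box: emissions stay low only inside a narrow
    # temperature window (a doping-like behaviour).
    return 50 if 20 <= ambient_temp <= 30 else 180

def robustness(ambient_temp, limit=100):
    # Positive when emissions stay under the limit, negative otherwise.
    return limit - system_output(ambient_temp)

def falsify(trials=1000, seed=0):
    # Sample candidate inputs and return the one that violates the
    # specification most severely (lowest robustness score).
    rng = random.Random(seed)
    worst = min((rng.uniform(-10, 40) for _ in range(trials)),
                key=robustness)
    return worst, robustness(worst)

temp, rob = falsify()
print(temp, rob)  # an input outside the window, with negative robustness
```

A negative robustness value is a witness that the system can be driven outside its specified behaviour; in practice such search is guided (e.g., by optimisation over the robustness landscape) rather than purely random.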
{"title":"Software doping analysis for human oversight","authors":"Sebastian Biewer, Kevin Baum, Sarah Sterz, Holger Hermanns, Sven Hetmank, Markus Langer, Anne Lauber-Rönsberg, Franz Lehr","doi":"10.1007/s10703-024-00445-2","DOIUrl":"https://doi.org/10.1007/s10703-024-00445-2","url":null,"abstract":"<p>This article introduces a framework that is meant to assist in mitigating societal risks that software can pose. Concretely, this encompasses facets of software doping as well as unfairness and discrimination in high-risk decision-making systems. The term <i>software doping</i> refers to software that contains surreptitiously added functionality that is against the interest of the user. A prominent example of software doping is the tampered emission cleaning systems that were found in millions of cars around the world when the diesel emissions scandal surfaced. The first part of this article combines the formal foundations of software doping analysis with established probabilistic falsification techniques to arrive at a black-box analysis technique for identifying undesired effects of software. We apply this technique to emission cleaning systems in diesel cars but also to high-risk systems that evaluate humans in a possibly unfair or discriminating way. We demonstrate how our approach can assist humans-in-the-loop in making better informed and more responsible decisions. This promotes effective human oversight, which will be a central requirement enforced by the European Union’s upcoming AI Act. We complement our technical contribution with a juridically, philosophically, and psychologically informed perspective on the potential problems caused by such systems.</p>","PeriodicalId":12430,"journal":{"name":"Formal Methods in System Design","volume":"5 1","pages":""},"PeriodicalIF":0.8,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140575441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}