Modeling and verifying context-aware non-monotonic reasoning agents
Pub Date: 2015-12-03 | DOI: 10.1109/MEMCOD.2015.7340471
A. Rakib, H. Haque
This paper complements our previous work on the formal modeling of resource-bounded context-aware systems, which handle inconsistent context information using defeasible reasoning, by focusing on automated analysis and verification. A case study demonstrates how model checking techniques can be used to formally analyze quantitative and qualitative properties of a context-aware system based on message passing among agents. The behavior (semantics) of the system is modeled by a term rewriting system, and the desired properties are expressed as LTL formulas. The Maude LTL model checker is used to perform automated analysis of the system and to verify the guarantees it provides on non-conflicting context information.
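To make the defeasible-reasoning ingredient concrete, here is a minimal Python sketch (an illustration only, not the authors' Maude encoding; the rule names and sensor scenario are hypothetical) of resolving conflicting context conclusions with a priority relation over defeasible rules.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    premises: set
    conclusion: str           # a literal, e.g. "outdoor" or "~outdoor"

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def conclude(facts, rules, superior):
    """Fire every rule whose premises hold; keep a conclusion only if the rule
    beats (via the superiority relation) every rule for the opposite literal."""
    fired = [r for r in rules if r.premises <= facts]
    accepted = set(facts)
    for r in fired:
        attackers = [a for a in fired if a.conclusion == negate(r.conclusion)]
        if all((r.name, a.name) in superior for a in attackers):
            accepted.add(r.conclusion)
    return accepted

# Hypothetical context rules from two sensors that disagree about the location.
rules = [
    Rule("r1", {"gps_outdoor"}, "outdoor"),
    Rule("r2", {"wifi_home"}, "~outdoor"),
]
superior = {("r2", "r1")}     # the Wi-Fi context source is trusted more
print(conclude({"gps_outdoor", "wifi_home"}, rules, superior))   # keeps ~outdoor
```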
{"title":"Modeling and verifying context-aware non-monotonic reasoning agents","authors":"A. Rakib, H. Haque","doi":"10.1109/MEMCOD.2015.7340471","DOIUrl":"https://doi.org/10.1109/MEMCOD.2015.7340471","url":null,"abstract":"This paper complements our previous work on formal modeling of resource-bounded context-aware systems, which handle inconsistent context information using defeasible reasoning, by focusing on automated analysis and verification. A case study demonstrates how model checking techniques can be used to formally analyze quantitative and qualitative properties of a context-aware system based on message passing among agents. The behavior (semantics) of the system is modeled by a term rewriting system and the desired properties are expressed as LTL formulas. The Maude LTL model checker is used to perform automated analysis of the system and verify non-conflicting context information guarantees it provides.","PeriodicalId":106851,"journal":{"name":"2015 ACM/IEEE International Conference on Formal Methods and Models for Codesign (MEMOCODE)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114792473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compositional design of asynchronous circuits from behavioural concepts
Pub Date: 2015-12-03 | DOI: 10.1109/MEMCOD.2015.7340478
J. Beaumont, A. Mokhov, D. Sokolov, A. Yakovlev
Asynchronous circuits can be useful in many applications; however, they are yet to be widely used in industry. The main reason for this is the steep learning curve of concurrency models, such as Signal Transition Graphs, that have been developed by the academic community for the specification and synthesis of asynchronous circuits. In this paper we introduce a compositional design flow for asynchronous circuits using concepts - a set of formalised descriptions of system requirements. Our aim is to simplify the process of capturing system requirements in the form of a formal specification, and to promote concepts as a means of design reuse. The proposed design flow is applied to the development of an asynchronous buck converter.
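As a rough illustration of the compositional idea (the encoding below is an assumption for exposition, not the authors' tool, which targets Signal Transition Graphs), a "concept" can be viewed as a set of causality constraints between signal edges, and composition as their union.

```python
def concept(*constraints):
    # A concept is a set of causality constraints between signal edges,
    # written ("a", "+") for a rising edge of signal a.
    return frozenset(constraints)

def compose(*concepts):
    # Composition of concepts is simply the union of their constraints.
    return frozenset().union(*concepts)

# Two buffer-like concepts on inputs a and b driving output c.
buffer_ac = concept((("a", "+"), ("c", "+")), (("a", "-"), ("c", "-")))
buffer_bc = concept((("b", "+"), ("c", "+")), (("b", "-"), ("c", "-")))

# Composing them yields joint causality on both inputs (C-element-like behaviour).
print(sorted(compose(buffer_ac, buffer_bc)))
```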
{"title":"Compositional design of asynchronous circuits from behavioural concepts","authors":"J. Beaumont, A. Mokhov, D. Sokolov, A. Yakovlev","doi":"10.1109/MEMCOD.2015.7340478","DOIUrl":"https://doi.org/10.1109/MEMCOD.2015.7340478","url":null,"abstract":"Asynchronous circuits can be useful in many applications, however, they are yet to be widely used in industry. The main reason for this is a steep learning curve for concurrency models, such Signal Transition Graphs, that are developed by the academic community for specification and synthesis of asynchronous circuits. In this paper we introduce a compositional design flow for asynchronous circuits using concepts - a set of formalised descriptions for system requirements. Our aim is to simplify the process of capturing system requirements in the form of a formal specification, and promote the concepts as a means for design reuse. The proposed design flow is applied to the development of an asynchronous buck converter.","PeriodicalId":106851,"journal":{"name":"2015 ACM/IEEE International Conference on Formal Methods and Models for Codesign (MEMOCODE)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127550634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimized distributed implementation of timed component-based systems
Pub Date: 2015-12-03 | DOI: 10.1109/MEMCOD.2015.7340464
Ahlem Triki, Jacques Combaz, S. Bensalem
Distributed implementation of real-time systems has always been a challenging task. The coordination of components executing on a distributed platform has to be ensured by complex communication protocols that take their timing constraints into account. We propose a novel method for the distributed implementation of application software formally expressed in Behavior, Interaction, Priority (BIP). A BIP model consists of a set of components, subject to timing constraints, that synchronize through multiparty interactions. The proposed method transforms BIP models into Send/Receive BIP models that operate using asynchronous message passing. Send/Receive BIP models include additional components, called schedulers, that observe the states of atomic components. Based on these observations, the schedulers plan the execution of interactions as early as possible. We propose a method that optimizes the number of observed components and thus reduces the number of exchanged messages.
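A heavily simplified sketch of the observation problem (this conservative rule and the interaction names are assumptions, not the paper's optimization): each interaction's scheduler observes its own participants plus those of conflicting interactions; the paper's method then shrinks such sets using static knowledge about reachable states.

```python
interactions = {               # hypothetical interaction -> participating components
    "a1": {"C1", "C2"},
    "a2": {"C2", "C3"},
    "a3": {"C4"},
}

def observed(name):
    parts = interactions[name]
    obs = set(parts)
    for other, other_parts in interactions.items():
        if other != name and parts & other_parts:   # conflict: a shared component
            obs |= other_parts
    return obs

for name in interactions:
    print(name, "observes", sorted(observed(name)))
```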
{"title":"Optimized distributed implementation of timed component-based systems","authors":"Ahlem Triki, Jacques Combaz, S. Bensalem","doi":"10.1109/MEMCOD.2015.7340464","DOIUrl":"https://doi.org/10.1109/MEMCOD.2015.7340464","url":null,"abstract":"Distributed implementation of real-time systems has always been a challenging task. The coordination of components executing on a distributed platform has to be ensured by complex communication protocols taking into account their timing constraints. We propose a novel method for distributed implementation of the application software formally expressed in Behavior, Interaction, Priority (BIP). A BIP model consists of a set of components, subject to timing constraints, and synchronizing through multiparty interactions. The proposed method transforms BIP models into Send/Receive BIP models that operate using asynchronous message passing. Send/Receive BIP models include additional components called schedulers that observe atomic components states. Based on these observations, the schedulers are required to plan as soon as possible the execution of interactions. We propose a method that optimizes the number of observed components, and thus reduces the number of exchanged messages.","PeriodicalId":106851,"journal":{"name":"2015 ACM/IEEE International Conference on Formal Methods and Models for Codesign (MEMOCODE)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126244884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards refinement types for time-dependent data-flow networks
Pub Date: 2015-12-03 | DOI: 10.1109/MEMCOD.2015.7340465
J. Talpin, P. Jouvelot, S. Shukla
The concept of liquid clocks introduced in this paper is a significant step towards a more precise compile-time framework for the analysis of synchronous and polychronous languages. Compiling languages such as Lustre or Signal indeed involves a number of static analyses of programs before they can be synthesized into executable code, e.g., synchronicity class characterization, clock assignment, static scheduling or causality analysis. These analyses are often equivalent to undecidable problems, necessitating abstracting such programs to provide sound yet incomplete analyses. Such abstractions unfortunately often lead to the rejection of programs that could very well be synthesized into deterministic code, provided abstraction refinement steps could be applied for more accurate analysis. To reduce the number of false negatives occurring during the compilation process, we leverage recent advances in type theory - with the definition of decidable classes of value-dependent type systems - and formal verification, linked to the development of efficient SAT/SMT solvers, to provide a type-theoretic approach that considers all the above analyses as type inference problems. To simplify the exposition of our new approach in this paper, we define a refinement type system for a minimalistic, synchronous, stream-processing language to concisely represent, analyze, and verify logical and quantitative properties of programs expressed as stream-processing data-flow networks. Our type system provides a new framework for representing logical time (clocks) and scheduling properties, and for describing their relations with stream values and, possibly, other quanta. We show how to analyze synchronous stream-processing programs (à la Lustre, Signal) to enable the previously described analyses involved in compiling such programs. We also prove the soundness of our type system and elaborate on the adaptability of this core framework by outlining its extensibility to specific models of computation and other quanta.
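For flavour, here is a toy bounded check of one clock refinement, a "subclock" relation over presence streams; this encoding is an illustrative assumption, not the paper's type system, and a real implementation would discharge such constraints with an SMT solver rather than by enumeration.

```python
def subclock(clk_y, clk_x):
    # True iff y is present only at instants where x is present (bounded prefix).
    return all((not y) or x for y, x in zip(clk_y, clk_x))

x_present = [True, True, False, True, False]
y_present = [True, False, False, True, False]   # e.g. y = "x when some condition"

assert subclock(y_present, x_present)
print("refinement clk(y) <= clk(x) holds on this prefix")
```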
{"title":"Towards refinement types for time-dependent data-flow networks","authors":"J. Talpin, P. Jouvelot, S. Shukla","doi":"10.1109/MEMCOD.2015.7340465","DOIUrl":"https://doi.org/10.1109/MEMCOD.2015.7340465","url":null,"abstract":"The concept of liquid clocks introduced in this paper is a significant step towards a more precise compile-time framework for the analysis of synchronous and polychromous languages. Compiling languages such as Lustre or Signal indeed involves a number of static analyses of programs before they can be synthesized into executable code, e.g., synchronicity class characterization, clock assignment, static scheduling or causality analysis. These analyses are often equivalent to undecidable problems, necessitating abstracting such programs to provide sound yet incomplete analyses. Such abstractions unfortunately often lead to the rejection of programs that could very well be synthesized into deterministic code, provided abstraction refinement steps could be applied for more accurate analysis. To reduce the number of false negatives occurring during the compilation process, we leverage recent advances in type theory - with the definition of decidable classes of value-dependent type systems - and formal verification, linked to the development of efficient SAT/SMT solvers, to provide a type-theoretic approach that considers all the above analyses as type inference problems. To simplify the exposition of our new approach in this paper, we define a refinement type system for a minimalistic, synchronous, stream-processing language to concisely represent, analyze, and verify logical and quantitative properties of programs expressed as stream-processing data-flow networks. Our type system provides a new framework for representing logical time (clocks) and scheduling properties, and to describe their relations with stream values and, possibly, other quantas. We show how to analyze synchronous stream processing programs (à la Lustre, Signal) to enable previously described analyses involved in compiling such programs. We also prove the soundness of our type system and elaborate on the adaptability of this core framework by outlining its extensibility to specific models of computations and other quantas.","PeriodicalId":106851,"journal":{"name":"2015 ACM/IEEE International Conference on Formal Methods and Models for Codesign (MEMOCODE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125917608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
C-to-Verilog translation validation
Pub Date: 2015-12-03 | DOI: 10.1109/MEMCOD.2015.7340466
Alan Leung, Dimitar Bounov, Sorin Lerner
To offset the high engineering cost of digital circuit design, hardware engineers are looking increasingly toward high-level languages such as C and C++ to implement their designs. To do this, they employ High-Level Synthesis (HLS) tools that translate their high-level specifications down to a hardware description language such as Verilog. Unfortunately, HLS tools themselves employ sophisticated optimization passes that may have bugs that silently introduce errors in realized hardware. The cost of such errors is high, as hardware is costly or impossible to repair if software patching is not an option. In this work, we present a translation validation approach for verifying the correctness of the HLS translation process. Given an initial C program and the generated Verilog code, our approach establishes their equivalence without relying on any intermediate results or representations produced by the HLS tool. We implemented our approach in a tool called VTV that is able to validate a body of programs compiled by the Xilinx Vivado HLS compiler.
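The essence of translation validation can be conveyed by a small analogue in Python (a hedged illustration only: VTV works on real C and Verilog and does not enumerate inputs): check that the source-level function and a cycle-accurate model of the synthesized design agree on every input of a bounded space.

```python
def c_level(a, b):
    # Source-level specification: 8-bit multiplication.
    return (a * b) & 0xFF

def rtl_level(a, b):
    # Hypothetical cycle-accurate model of a shift-and-add multiplier,
    # standing in for the generated Verilog (one loop iteration per cycle).
    acc = 0
    for i in range(8):
        if (b >> i) & 1:
            acc = (acc + (a << i)) & 0xFF
    return acc

assert all(c_level(a, b) == rtl_level(a, b)
           for a in range(256) for b in range(256))
print("bounded equivalence established for all 8-bit inputs")
```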
{"title":"C-to-Verilog translation validation","authors":"Alan Leung, Dimitar Bounov, Sorin Lerner","doi":"10.1109/MEMCOD.2015.7340466","DOIUrl":"https://doi.org/10.1109/MEMCOD.2015.7340466","url":null,"abstract":"To offset the high engineering cost of digital circuit design, hardware engineers are looking increasingly toward high-level languages such as C and C++ to implement their designs. To do this, they employ High-Level Synthesis (HLS) tools that translate their high-level specifications down to a hardware description language such as Verilog. Unfortunately, HLS tools themselves employ sophisticated optimization passes that may have bugs that silently introduce errors in realized hardware. The cost of such errors is high, as hardware is costly or impossible to repair if software patching is not an option. In this work, we present a translation validation approach for verifying the correctness of the HLS translation process. Given an initial C program and the generated Verilog code, our approach establishes their equivalence without relying on any intermediate results or representations produced by the HLS tool. We implemented our approach in a tool called VTV that is able to validate a body of programs compiled by the Xilinx Vivado HLS compiler.","PeriodicalId":106851,"journal":{"name":"2015 ACM/IEEE International Conference on Formal Methods and Models for Codesign (MEMOCODE)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123682290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A generic synthesisable test bench
Pub Date: 2015-12-03 | DOI: 10.1109/MEMCOD.2015.7340479
Matthew Naylor, S. Moore
Writing test benches is one of the most frequently performed tasks in the hardware development process. The ability to reuse common test bench features is therefore key to productivity. In this paper, we present a generic test bench, parameterised by a specification of correctness, which can be used to test any design. Our test bench provides several important features, including automatic test-sequence generation and shrinking of counter-examples, and is fully synthesisable, allowing rigorous testing on FPGA as well as in simulation. The approach is easy to use, cheap to implement, and encourages the formal specification of hardware components through the reward of automatic testing and simple failure cases.
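A software analogue of the generate-and-shrink loop (the paper's test bench is synthesisable hardware; the FIFO bug and names below are hypothetical): random test sequences are fed to the design under test, checked against a correctness specification, and failing sequences are shrunk to a smaller counter-example.

```python
import random

def check(spec, dut, gen, tries=1000):
    # Feed random test sequences to the design under test; on failure,
    # shrink the sequence to a smaller counter-example.
    for _ in range(tries):
        seq = gen()
        if not spec(dut(seq)):
            return shrink(seq, lambda s: not spec(dut(s)))
    return None

def shrink(seq, still_fails):
    # Greedily drop elements while the shortened sequence still fails.
    i = 0
    while i < len(seq):
        cand = seq[:i] + seq[i + 1:]
        if still_fails(cand):
            seq = cand
        else:
            i += 1
    return seq

# Hypothetical DUT: a FIFO occupancy model with an off-by-one capacity bug.
def dut(pushes):
    depth, capacity = 0, 4
    for _ in pushes:
        if depth <= capacity:        # bug: should be 'depth < capacity'
            depth += 1
    return depth

spec = lambda depth: depth <= 4      # correctness specification
gen = lambda: [1] * random.randint(0, 10)

print("shrunk counter-example:", check(spec, dut, gen))
```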
{"title":"A generic synthesisable test bench","authors":"Matthew Naylor, S. Moore","doi":"10.1109/MEMCOD.2015.7340479","DOIUrl":"https://doi.org/10.1109/MEMCOD.2015.7340479","url":null,"abstract":"Writing test benches is one of the most frequently-performed tasks in the hardware development process. The ability to reuse common test bench features is therefore key to productivity. In this paper, we present a generic test bench, parameterised by a specification of correctness, which can be used to test any design. Our test bench provides several important features, including automatic test-sequence generation and shrinking of counter-examples, and is fully synthesisable, allowing rigorous testing on FPGA as well as in simulation. The approach is easy to use, cheap to implement, and encourages the formal specification of hardware components through the reward of automatic testing and simple failure cases.","PeriodicalId":106851,"journal":{"name":"2015 ACM/IEEE International Conference on Formal Methods and Models for Codesign (MEMOCODE)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127113007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Formal validation and verification of a medical software critical component
Pub Date: 2015-12-03 | DOI: 10.1109/MEMCOD.2015.7340473
Paolo Arcaini, S. Bonfanti, A. Gargantini, A. Mashkoor, E. Riccobene
Medical device software malfunctioning can lead to injury or death, and its development should therefore adhere to certification standards. However, these standards establish general guidelines on the use of common software engineering activities without any indication of the methods and techniques needed to assure safety and reliability. This paper presents a formal development process, based on the Abstract State Machine method, that integrates most of the activities required by the standards. The process makes it possible to obtain, through a sequence of refinements, increasingly detailed models that can be formally validated and verified. Offline and online testing techniques allow checking the conformance of the implementation with respect to the specification. The process is applied to the validation of the SAM medical software, which is used to measure patients' stereoacuity in the diagnosis of amblyopia.
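A much-simplified sketch of offline conformance testing (the ASM models and the SAM application are far richer; the step functions and the "level" state below are hypothetical): run the abstract model and the implementation on the same input sequence and compare the observable states step by step.

```python
def spec_step(state, answer_correct):
    # Abstract rule (hypothetical): a correct answer advances the test level.
    return {"level": state["level"] + 1} if answer_correct else dict(state)

def impl_step(state, answer_correct):
    # Concrete implementation under test.
    return {"level": state["level"] + (1 if answer_correct else 0)}

def conforms(inputs):
    s_spec, s_impl = {"level": 0}, {"level": 0}
    for a in inputs:
        s_spec, s_impl = spec_step(s_spec, a), impl_step(s_impl, a)
        if s_spec != s_impl:         # observable states diverge
            return False
    return True

print(conforms([True, False, True, True]))   # True: the implementation conforms
```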
{"title":"Formal validation and verification of a medical software critical component","authors":"Paolo Arcaini, S. Bonfanti, A. Gargantini, A. Mashkoor, E. Riccobene","doi":"10.1109/MEMCOD.2015.7340473","DOIUrl":"https://doi.org/10.1109/MEMCOD.2015.7340473","url":null,"abstract":"Medical device software malfunctioning can lead to injuries or death for humans and, therefore, its development should adhere to certification standards. However, these standards establish general guidelines on the use of common software engineering activities without any indication regarding methods and techniques to assure safety and reliability. This paper presents a formal development process, based on the Abstract State Machine method, that integrates most of the activities required by the standards. The process permits to obtain, through a sequence of refinements, more detailed models that can be formally validated and verified. Offline and online testing techniques permit to check the conformance of the implementation w.r.t. the specification. The process is applied to the validation of the SAM medical software, that is used to measure the patients' stereoacuity in the diagnosis of amblyopia.","PeriodicalId":106851,"journal":{"name":"2015 ACM/IEEE International Conference on Formal Methods and Models for Codesign (MEMOCODE)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114864867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Local and global fairness in concurrent systems
Pub Date: 2015-12-03 | DOI: 10.1109/MEMCOD.2015.7340461
A. Brook, D. Peled, S. Schewe
Concurrency theory suggests the use of fairness as a criterion for a reasonable execution: a transition or a process should not wait an unbounded amount of time to execute if it is enabled continuously (under weak fairness) or infinitely often (under strong fairness). Unlike in multiprocessing, in actual concurrent systems one may rely on the physical nature of the system to act in a “fair” manner. However, in many realistic concurrent systems, performing the next transition may involve several smaller steps that can include negotiation and communication, and fairness can be hard to achieve. It is useful to be able to control the global fairness guaranteed by enforcing local constraints on processes. We define local fairness conditions and study their relationship with common notions of global fairness constraints.
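A small illustrative check, not taken from the paper: on a lasso-shaped execution, a weak-fairness violation is a transition that is enabled in every state of the repeated loop yet never taken in it.

```python
def weakly_unfair(transitions, loop_states, enabled, taken):
    # A transition is treated unfairly under weak fairness if it is enabled
    # in every state of the repeated loop yet never taken in the loop.
    return [t for t in transitions
            if t not in taken and all(enabled(t, s) for s in loop_states)]

loop = ["s0", "s1"]                     # states of the loop, repeated forever
enabled = lambda t, s: True             # both transitions enabled everywhere
print(weakly_unfair(["t1", "t2"], loop, enabled, taken={"t1"}))   # ['t2']
```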
{"title":"Local and global fairness in concurrent systems","authors":"A. Brook, D. Peled, S. Schewe","doi":"10.1109/MEMCOD.2015.7340461","DOIUrl":"https://doi.org/10.1109/MEMCOD.2015.7340461","url":null,"abstract":"Concurrency theory suggests the use of fairness as a criterion for a reasonable execution: a transition or a process should not wait an unbounded amount of time to execute if it is enabled continuously (under weak fairness) or infinitely often (under strong fairness). Unlike multiprocessing, in actual concurrent systems one may rely on the physical nature of the system to act in a “fair” manner. However, in many realistic concurrent systems, performing the next transition may involve several smaller steps that can include negotiation and communication, and fairness can be hard to achieve. It is useful to be able to control the global fairness guaranteed by enforcing local constraints on processes. We define local fairness conditions and study their relationship with common notions of global fairness constraints.","PeriodicalId":106851,"journal":{"name":"2015 ACM/IEEE International Conference on Formal Methods and Models for Codesign (MEMOCODE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121063997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Symbolic loop parallelization for balancing I/O and memory accesses on processor arrays
Pub Date: 2015-12-03 | DOI: 10.1109/MEMCOD.2015.7340486
Alexandru Tanase, Michael Witterauf, J. Teich, Frank Hannig
Loop parallelization techniques for massively parallel processor arrays that use one-level tiling are often either I/O- or memory-bound, exceeding the target architecture's capabilities. Furthermore, if the number of available processing elements is only known at runtime - as in adaptive systems - static approaches fail. To solve these problems, we present a hybrid compile/runtime technique to symbolically parallelize loop nests with uniform dependences on multiple levels. At compile time, two novel transformations are performed: (a) symbolic hierarchical tiling followed by (b) symbolic multi-level scheduling. By tuning the size of the tiles on multiple levels, a trade-off between the necessary I/O bandwidth and memory is possible, which facilitates obeying resource constraints. The resulting schedules are symbolic with respect to the number of tiles; thus, the number of processing elements to map onto does not need to be known at compile time. At runtime, when this number is known, a simple prologue chooses a schedule that is feasible with respect to the I/O and memory constraints and latency-optimal for the chosen tile size. In this way, our approach dynamically chooses latency-optimal and feasible schedules while avoiding expensive re-compilations.
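A sketch of the runtime-prologue idea under assumed placeholder cost models (the real symbolic schedules and cost functions come from the paper's compile-time analysis): once the number of processing elements is known, pick the tile size that fits the memory and I/O limits with minimal latency.

```python
def choose_tile(n, num_pes, mem_limit, io_limit):
    # n: loop extent; candidate tile sizes are divisors of n.
    feasible = []
    for t in (t for t in range(1, n + 1) if n % t == 0):
        local_mem = t * t                                   # placeholder memory model
        io_per_tile = 4 * t                                 # placeholder halo traffic
        latency = (n // t) ** 2 * (t * t + 16) // num_pes   # 16: assumed per-tile overhead
        if local_mem <= mem_limit and io_per_tile <= io_limit:
            feasible.append((latency, t))
    return min(feasible)[1] if feasible else None

# The number of processing elements only becomes known here, at runtime.
print(choose_tile(n=64, num_pes=8, mem_limit=256, io_limit=48))   # -> 8
```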
{"title":"Symbolic loop parallelization for balancing I/O and memory accesses on processor arrays","authors":"Alexandru Tanase, Michael Witterauf, J. Teich, Frank Hannig","doi":"10.1109/MEMCOD.2015.7340486","DOIUrl":"https://doi.org/10.1109/MEMCOD.2015.7340486","url":null,"abstract":"Loop parallelization techniques for massively parallel processor arrays using one-level tiling are often either I/O- or memory-bounded, exceeding the target architecture's capabilities. Furthermore, if the number of available processing elements is only known at runtime - as in adaptive systems - static approaches fail. To solve these problems, we present a hybrid compile/runtime technique to symbolically parallelize loop nests with uniform dependences on multiple levels. At compile time, two novel transformations are performed: (a) symbolic hierarchical tiling followed by (b) symbolic multi-level scheduling. By tuning the size of the tiles on multiple levels, a trade-off between the necessary I/O-bandwidth and memory is possible, which facilitates obeying resource constraints. The resulting schedules are symbolic with respect to the number of tiles; thus, the number of processing elements to map onto does not need to be known at compile time. At runtime, when the number is known, a simple prolog chooses a feasible schedule with respect to I/O and memory constraints that is latency-optimal for the chosen tile size. In this way, our approach dynamically chooses latency-optimal and feasible schedules while avoiding expensive re-compilations.","PeriodicalId":106851,"journal":{"name":"2015 ACM/IEEE International Conference on Formal Methods and Models for Codesign (MEMOCODE)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126784702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SCEst: Sequentially Constructive Esterel
Pub Date: 2015-12-03 | DOI: 10.1109/MEMCOD.2015.7340462
Karsten Rathlev, Steven Smyth, Christian Motika, R. V. Hanxleden, Michael Mendler
The synchronous language Esterel provides determinate concurrency for reactive systems. Determinacy is ensured by the “signal coherence rule,” which demands that signals have a stable value throughout one reaction cycle. This is natural for the original application domains of Esterel, such as controller design and hardware development; however, it is unnecessarily restrictive for software development. Sequentially Constructive Esterel (SCEst) overcomes this restriction by allowing values to change instantaneously, as long as determinacy is still guaranteed, adopting the recently proposed Sequentially Constructive model of computation. SCEst is grounded in the minimal Sequentially Constructive Language, which also provides a novel semantic definition and compilation approach for Esterel.
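An informal Python rendering of the init-update-read scheduling discipline behind the Sequentially Constructive model of computation (an illustration, not SCEst semantics): within one tick, concurrent accesses to a variable are ordered initialisations, then updates, then reads, so the value may change during the tick while the outcome stays determinate.

```python
def run_tick(accesses):
    # Order concurrent accesses within one tick: initialisations first,
    # then (commutative) updates, then reads.
    order = {"init": 0, "update": 1, "read": 2}
    env, outputs = {}, []
    for kind, var, arg in sorted(accesses, key=lambda a: order[a[0]]):
        if kind == "init":
            env[var] = arg                        # absolute write
        elif kind == "update":
            env[var] = env.get(var, 0) + arg      # relative write
        else:
            outputs.append((var, env.get(var)))   # read sees the final value
    return outputs

# A read and two updates of x in the same tick: Esterel's signal coherence rule
# would reject this; under the ordering above the outcome is determinate.
print(run_tick([("read", "x", None), ("init", "x", 0),
                ("update", "x", 2), ("update", "x", 3)]))   # [('x', 5)]
```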
{"title":"SCEst: Sequentially constructive esterel","authors":"Karsten Rathlev, Steven Smyth, Christian Motika, R. V. Hanxleden, Michael Mendler","doi":"10.1109/MEMCOD.2015.7340462","DOIUrl":"https://doi.org/10.1109/MEMCOD.2015.7340462","url":null,"abstract":"The synchronous language Esterel provides determinate concurrency for reactive systems. Determinacy is ensured by the “signal coherence rule,” which demands that signals have a stable value throughout one reaction cycle. This is natural for the original application domains of Esterel, such as controller design and hardware development; however, it is unnecessarily restrictive for software development. Sequentially Constructive Esterel (SCEst) overcomes this restriction by allowing values to change instantaneously, as long as determinacy is still guaranteed, adopting the recently proposed Sequentially Constructive model of computation. SCEst is grounded in the minimal Sequentially Constructive Language, which also provides a novel semantic definition and compilation approach for Esterel.","PeriodicalId":106851,"journal":{"name":"2015 ACM/IEEE International Conference on Formal Methods and Models for Codesign (MEMOCODE)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128016768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}