Pub Date: 2016-05-23 | DOI: 10.1109/SIES.2016.7509435
Further analysis on blocking time bounds for partitioned fixed priority multiprocessor scheduling
Zhongqi Ma, Ryo Kurachi, Gang Zeng, H. Takada
Partitioned fixed-priority scheduling is one of the most widely adopted predictable scheduling approaches in practice. The FMLP+, developed in recent years, is a strong locking protocol that ensures asymptotically optimal O(n) maximum priority-inversion blocking. The constraints imposed under the FMLP+ and several other protocols can be exploited to derive bounds on the maximum blocking time. However, these blocking time bounds may be pessimistic under the FMLP+, because shared resources local to a processor do not incur priority-inversion blocking in some cases. As a result, a schedulable task set may be judged unschedulable because of the pessimistic values. Based on our analysis, we add a few constraints to compute the maximum blocking time of each task and, from it, its worst-case response time. Our experimental results show less pessimism than the existing analyses. We also demonstrate the usefulness of the conclusion that global resource sharing should be transformed into local sharing where possible.
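The abstract does not reproduce the added constraints, but the role a blocking bound plays in partitioned fixed-priority analysis can be sketched with the classic response-time recurrence R_i = C_i + B_i + sum over higher-priority tasks j of ceil(R_i/T_j)*C_j. The Python sketch below is illustrative only, not the authors' analysis; the task parameters and the two blocking vectors are hypothetical, chosen to show how a tighter bound on B_i directly tightens the computed worst-case response time.

```python
# Illustrative sketch only (not the authors' refined analysis): classic
# response-time recurrence for partitioned fixed-priority scheduling,
#   R_i = C_i + B_i + sum_{j in hp(i)} ceil(R_i / T_j) * C_j.
# All task parameters below are hypothetical.
import math

def wcrt(C, T, B, i, limit=10_000):
    """Fixed-point iteration for task i; tasks 0..i-1 have higher priority."""
    R = C[i] + B[i]
    while True:
        R_next = C[i] + B[i] + sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        if R_next == R or R_next > limit:
            return R_next
        R = R_next

# Tasks on one processor, indexed by decreasing priority.
C = [1, 2, 3]                 # worst-case execution times
T = [5, 10, 20]               # periods
B_pessimistic = [0, 0, 4]     # e.g. a local resource counted as if it could block
B_refined     = [0, 0, 1]     # e.g. after excluding blocking that cannot occur

print([wcrt(C, T, B_pessimistic, i) for i in range(3)])   # [1, 3, 14]
print([wcrt(C, T, B_refined, i) for i in range(3)])       # [1, 3, 8]
```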
{"title":"Further analysis on blocking time bounds for partitioned fixed priority multiprocessor scheduling","authors":"Zhongqi Ma, Ryo Kurachi, Gang Zeng, H. Takada","doi":"10.1109/SIES.2016.7509435","DOIUrl":"https://doi.org/10.1109/SIES.2016.7509435","url":null,"abstract":"Partitioned fixed priority scheduling is one of the most comprehensively chosen predictable scheduling in practice. The FMLP+ developed in recent years is a better protocol, which ensures asymptotically optimal O(n) maximum priority-inversion blocking. The constraints under several protocols besides the FMLP+ can be exploited to gain the bound on maximum blocking time. However, the blocking time bounds may be pessimistic under the FMLP+ on the ground that shared resources local to a processor do not incur priority-inversion blocking in some cases. It is possible that a schedulable task set is judged as unschedulable because of the pessimistic values. Based on our analysis, a few constraints was added to compute the maximum blocking time of each task, and then its worst-case response time. The results of our experiments show less pessimism than the existing ones. Meanwhile, we also demonstrate the usefulness of the conclusion that global resource sharing should be transformed into local one where possible.","PeriodicalId":185636,"journal":{"name":"2016 11th IEEE Symposium on Industrial Embedded Systems (SIES)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127881051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-05-23 | DOI: 10.1109/SIES.2016.7509423
MIPP: A microbenchmark suite for performance, power, and energy consumption characterization of GPU architectures
N. Bombieri, F. Busato, F. Fummi, Michele Scala
GPU-accelerated applications are becoming increasingly common in high-performance computing as well as in low-power heterogeneous embedded systems. Nevertheless, GPU programming is a challenging task, especially if a GPU application has to be tuned to take full advantage of the GPU architectural configuration. Even more challenging is tuning the application with power and energy consumption in mind, which have emerged as first-order design constraints in addition to performance. Resolving bottlenecks of a GPU application, such as high thread divergence or poor memory coalescing, has a different impact on the overall performance, power, and energy consumption. Such an impact also depends on the GPU device on which the application is run. This paper presents a suite of microbenchmarks, which are specialized chunks of GPU code that exercise specific device components (e.g., arithmetic instruction units, shared memory, cache, DRAM) and provide the actual characteristics of such components in terms of throughput, power, and energy consumption. The suite aims at enriching standard profiler information and guiding GPU application tuning on a specific GPU architecture by considering all three design constraints (i.e., power, performance, and energy consumption). The paper presents the results obtained by applying the proposed suite to characterize two different GPU devices and to understand how application tuning may impact them differently.
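As a rough illustration of how per-component characteristics could be derived from one microbenchmark run, the sketch below combines a hypothetical operation count, runtime, and power samples into throughput, average power, and energy figures. It is not the MIPP suite; every number is invented.

```python
# Illustrative only (not the MIPP suite): deriving throughput, average power, and
# energy for one microbenchmark run from hypothetical measurements.

def characterize(ops_executed, runtime_s, power_samples_w):
    """ops_executed: operations (or bytes) issued by the kernel;
    power_samples_w: power readings taken at a fixed rate during the run."""
    throughput = ops_executed / runtime_s                     # ops/s (or B/s)
    avg_power = sum(power_samples_w) / len(power_samples_w)   # W
    energy = avg_power * runtime_s                            # J
    return throughput, avg_power, energy

# Hypothetical run exercising, say, the shared-memory unit.
print(characterize(ops_executed=2**30, runtime_s=0.012,
                   power_samples_w=[95.0, 110.0, 108.0, 101.0]))
```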
{"title":"MIPP: A microbenchmark suite for performance, power, and energy consumption characterization of GPU architectures","authors":"N. Bombieri, F. Busato, F. Fummi, Michele Scala","doi":"10.1109/SIES.2016.7509423","DOIUrl":"https://doi.org/10.1109/SIES.2016.7509423","url":null,"abstract":"GPU-accelerated applications are becoming increasingly common in high-performance computing as well as in low-power heterogeneous embedded systems. Nevertheless, GPU programming is a challenging task, especially if a GPU application has to be tuned to fully take advantage of the GPU architectural configuration. Even more challenging is the application tuning by considering power and energy consumption, which have emerged as first-order design constraints in addition to performance. Solving bottlenecks of a GPU application such as high thread divergence or poor memory coalescing have a different impact on the overall performance, power and energy consumption. Such an impact also depends on the GPU device on which the application is run. This paper presents a suite of microbenchmarks, which are specialized chunks of GPU code that exercise specific device components (e.g., arithmetic instruction units, shared memory, cache, DRAM, etc.) and that provide the actual characteristics of such components in terms of throughput, power, and energy consumption. The suite aims at enriching standard profiler information and guiding the GPU application tuning on a specific GPU architecture by considering all three design constraints (i.e., power, performance, energy consumption). The paper presents the results obtained by applying the proposed suite to characterize two different GPU devices and to understand how application tuning may impact differently on them.","PeriodicalId":185636,"journal":{"name":"2016 11th IEEE Symposium on Industrial Embedded Systems (SIES)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127504069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-05-23 | DOI: 10.1109/SIES.2016.7509439
Adaptive memory management scheme for MMU-less embedded systems
I. Deligiannis, Georgios Kornaros
This paper presents a memory allocation scheme that provides efficient dynamic memory allocation and defragmentation for embedded systems lacking a Memory Management Unit (MMU). Using as its main criteria the efficiency in handling both external and internal memory fragmentation, as well as the requirements of soft real-time applications in resource-constrained embedded systems, the proposed memory management solution delivers a more precise memory allocation process. The proposed Adaptive Memory Management Scheme (AMM) maintains a balance between performance and efficiency, with the objective of increasing the amount of usable memory in MMU-less embedded systems while keeping timing behavior bounded and acceptable. By maximizing memory utilization, embedded applications can optimize their performance in time-critical tasks and meet the demands of Internet-of-Things (IoT) solutions without suffering memory leaks and unexpected failures. The scheme requires no hardware MMU and few or no manual changes to application software. The proposed scheme is evaluated and shows encouraging results regarding performance and reliability compared to the default memory allocator. Allocation of fixed- and random-size blocks delivers a speedup ranging from 2x to 5x over the standard GLIBC allocator, while deallocation is only about 20% slower but leaves memory perfectly defragmented (0% fragmentation).
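The AMM scheme itself is not detailed in the abstract; as a loose illustration of the kind of bookkeeping involved, the simulation below implements a plain first-fit allocator over a flat address space that coalesces adjacent free regions on deallocation, which is one generic way external fragmentation can be driven back toward 0%. The class, block sizes, and call sequence are invented for the example.

```python
# Simplified simulation (not the proposed AMM scheme): first-fit allocation over a
# flat address space with coalescing of adjacent free regions on deallocation.

class SimpleAllocator:
    def __init__(self, size):
        self.free = [(0, size)]        # list of (start, length) free regions
        self.used = {}                 # start -> length of allocated blocks

    def alloc(self, n):
        for i, (start, length) in enumerate(self.free):
            if length >= n:
                self.free[i] = (start + n, length - n)
                if self.free[i][1] == 0:
                    del self.free[i]
                self.used[start] = n
                return start
        return None                    # out of memory or too fragmented

    def free_block(self, start):
        n = self.used.pop(start)
        self.free.append((start, n))
        self.free.sort()
        merged = [self.free[0]]        # coalesce adjacent free regions
        for s, l in self.free[1:]:
            ps, pl = merged[-1]
            if ps + pl == s:
                merged[-1] = (ps, pl + l)
            else:
                merged.append((s, l))
        self.free = merged

a = SimpleAllocator(1024)
b0, b1, b2 = a.alloc(100), a.alloc(200), a.alloc(50)
a.free_block(b1)                       # leaves a hole behind b0
a.free_block(b2)                       # hole is coalesced with its neighbour
print(a.free)                          # [(100, 924)]: one free region; only b0 remains allocated
```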
{"title":"Adaptive memory management scheme for MMU-less embedded systems","authors":"I. Deligiannis, Georgios Kornaros","doi":"10.1109/SIES.2016.7509439","DOIUrl":"https://doi.org/10.1109/SIES.2016.7509439","url":null,"abstract":"This paper presents a memory allocation scheme that provides efficient dynamic memory allocation and defragmentation for embedded systems lacking a Memory Management Unit (MMU). Using as main criteria the efficiency in handling both external and internal memory fragmentation, as well as the requirements of soft real-time applications in constraint-embedded systems, the proposed solution of memory management delivers a more precise memory allocation process. The proposed Adaptive Memory Management Scheme (AMM) maintains a balance between performance and efficiency, with the objective to increase the amount of usable memory in MMU-less embedded systems with a bounded and acceptable timing behavior. By maximizing memory utilization, embedded systems applications can optimize their performance in time-critical tasks and meet the demands of Internet-of-Things (IoT) solutions, without undergoing memory leaks and unexpected failures. Its use requires no hardware MMU, and requires few or no manual changes to application software. The proposed scheme is evaluated providing encouraging results regarding performance and reliability compared to the default memory allocator. Allocation of fixed and random size blocks delivers a speedup ranging from 2x to 5x over the standard GLIBC allocator, while the de-allocation process is only 20% percent slower, but provides a perfect (0%) defragmented memory.","PeriodicalId":185636,"journal":{"name":"2016 11th IEEE Symposium on Industrial Embedded Systems (SIES)","volume":"25 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132571654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-05-23 | DOI: 10.1109/SIES.2016.7509410
Sample-drop firmness analysis of TDMA-scheduled control applications
A. Behrouzian, Dip Goswami, M. Geilen, M. Hendriks, Hadi Alizadeh Ara, E. V. Horssen, W. Heemels, T. Basten
This paper proposes methods for the verification of (m, k)-firmness properties of control applications running on a shared TDMA-scheduled processor. We particularly consider dropped samples arising from processor sharing. Based on the processor budget available to any sample that is ready for execution, the Finite-Point (FP) method is proposed to quantify the maximum number of dropped samples. The FP method is further generalized using a timed-automata-based model to take variation in the sample period into account. The UPPAAL tool is used to validate and verify the timed-automata model. The FP method gives an exact bound on the number of dropped samples, whereas the timed-automata analysis provides a conservative bound. The methods are evaluated on a realistic case study. A scalability analysis of the methods shows acceptable verification times for different sets of parameters.
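For intuition about the analyzed scenario (not the FP method or the timed-automata model themselves), the toy discrete-time simulation below releases a sample every P time units, serves it only inside the application's TDMA slot, drops it if it is unfinished by the next release, and then reports the worst number of drops in any window of k consecutive samples. All parameters (P, C, TDMA cycle W, slot length S) are hypothetical.

```python
# Toy discrete-time sketch (not the paper's analysis): a control task releases a
# sample every P time units needing C units of service; the processor serves the
# task only in the first S units of every W-unit TDMA cycle. A sample not finished
# by the next release is dropped.

def simulate(P, C, W, S, n_samples):
    outcomes = []                       # True = completed, False = dropped
    for i in range(n_samples):
        served, t = 0, i * P
        while t < (i + 1) * P and served < C:
            if t % W < S:               # inside this application's TDMA slot
                served += 1
            t += 1
        outcomes.append(served >= C)
    return outcomes

def worst_drops_in_window(outcomes, k):
    return max(sum(1 for ok in outcomes[i:i + k] if not ok)
               for i in range(len(outcomes) - k + 1))

trace = simulate(P=10, C=4, W=8, S=3, n_samples=40)
print(worst_drops_in_window(trace, k=5))   # worst drop count over any 5 consecutive samples
```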
{"title":"Sample-drop firmness analysis of TDMA-scheduled control applications","authors":"A. Behrouzian, Dip Goswami, M. Geilen, M. Hendriks, Hadi Alizadeh Ara, E. V. Horssen, W. Heemels, T. Basten","doi":"10.1109/SIES.2016.7509410","DOIUrl":"https://doi.org/10.1109/SIES.2016.7509410","url":null,"abstract":"This paper proposes methods for verification of (m, k)-firmness properties of control applications running on a shared TDMA-scheduled processor. We particularly consider dropped samples arising from processor sharing. Based on the available processor budget for any sample that is ready for execution, the Finite-Point (FP) method is proposed for quantification of the maximum number of dropped samples. The FP method is further generalized using a timed automata based model to consider the variation in the period of samples. The UPPAAL tool is used to validate and verify the timed automata based model. The FP method gives an exact bound on the number of dropped samples, whereas the timed-automata analysis provides a conservative bound. The methods are evaluated considering a realistic case study. Scalability analysis of the methods shows acceptable verification times for different sets of parameters.","PeriodicalId":185636,"journal":{"name":"2016 11th IEEE Symposium on Industrial Embedded Systems (SIES)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130038255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-05-23 | DOI: 10.1109/SIES.2016.7509419
System-level timing feasibility test for cyber-physical automotive systems
Sebastian Tobuschat, R. Ernst, A. Hamann, D. Ziegenbein
For automotive systems there is a mismatch between worst-case timing analysis models and the perceived reality, diminishing their relevance, especially in the automotive powertrain domain. Strict worst-case guarantees are rarely needed in the powertrain domain, because a large amount of the functionality is control software, which can tolerate sporadic deadline misses. For instance, certain control approaches can systematically account for sampling losses and still prove whether or not the controller is stable and adheres to the required performance criteria. Typical worst-case analysis (TWCA) tackles this problem by providing formal guarantees on typical response times together with upper bounds on the number of times these are violated. In this paper, we derive a system-level timing feasibility test based on TWCA that exploits the robustness of control applications. We extend TWCA to cope with periodic tasks that have varying execution times. Taking the robustness of control applications into account, we derive upper bounds for the overload models of each task, along with possible typical worst-case execution times (TCETs), as needed for the TWCA. We then use this information to find a feasible typical-case configuration such that all deadlines are met and all robustness constraints are satisfied. To verify the approach and show its expressiveness, we apply it to a performance model of a full-blown modern engine management system provided by Bosch.
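The following sketch is a heavily simplified caricature of the idea, not the TWCA formalism used in the paper: a configuration is accepted if each task either meets its deadline under its full WCET, or meets it under its TCET while an assumed bound on overload activations per window stays within the task's (m, k) robustness budget. All task parameters, the (m, k) values, and the overload bounds are invented.

```python
# Greatly simplified sketch (not the paper's TWCA-based test); all values hypothetical.
import math

def wcrt(C, T, i, limit=10**6):
    """Classic fixed-priority response-time iteration; tasks 0..i-1 have higher priority."""
    R = C[i]
    while True:
        R_next = C[i] + sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        if R_next == R or R_next > limit:
            return R_next
        R = R_next

# Per task (priority order): period = deadline T, full WCET, typical WCET (TCET),
# robustness budget (m, k) = "at most m misses in any k activations", and an
# assumed bound on overload activations per window of k jobs.
T        = [5, 10, 20]
WCET     = [2, 3, 9]
TCET     = [1, 3, 6]
mk       = [(0, 1), (0, 1), (2, 10)]
overload = [0, 0, 2]

def feasible():
    for i in range(len(T)):
        if wcrt(WCET, T, i) <= T[i]:
            continue                      # hard worst-case guarantee holds
        m, _k = mk[i]
        if wcrt(TCET, T, i) <= T[i] and overload[i] <= m:
            continue                      # typical case holds; misses stay within the budget
        return False
    return True

print(feasible())                         # True for these hypothetical values
```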
{"title":"System-level timing feasibility test for cyber-physical automotive systems","authors":"Sebastian Tobuschat, R. Ernst, A. Hamann, D. Ziegenbein","doi":"10.1109/SIES.2016.7509419","DOIUrl":"https://doi.org/10.1109/SIES.2016.7509419","url":null,"abstract":"For automotive systems there is a mismatch between worst-case timing analysis models and the perceived reality, diminishing their relevance, especially for the automotive powertrain domain. Strict worst-case guarantees are rarely needed in the powertrain domain. The reason is that a large amount of functionality is control software and this can tolerate sporadic deadline misses. For instance, certain control approaches can systematically account for sampling losses and still prove whether or not the controller is stable and adheres to required performance criteria. Typical worst-case analysis (TWCA) tackles this problem by providing formal guarantees on typical response-times including upper bounds on the number of violations of these. In this paper, we derive a system-level timing feasibility test exploiting the robustness of control applications based on TWCA. We extend the TWCA to cope with periodic tasks that have varying execution times. Taking the robustness of control applications into account, we derive upper bounds for the overload models of each task, along with possible typical worst-case execution times (TCET), as needed for the TWCA. We then use this information to find a feasible typical-case configuration such that all deadlines are reached and all robustness constraints are satisfied. To verify the approach and show the expressiveness, we apply it on a performance model of a full-blown modern engine management system provided by Bosch.","PeriodicalId":185636,"journal":{"name":"2016 11th IEEE Symposium on Industrial Embedded Systems (SIES)","volume":"309 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132779575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-05-23 | DOI: 10.1109/SIES.2016.7509422
Static probabilistic timing analysis in presence of faults
Chao-Wu Chen, L. Santinelli, J. Hugues, G. Beltrame
Accurate timing prediction for software execution is becoming a problem due to the increasing complexity of computer architectures and the presence of mixed-criticality workloads. Probabilistic caches were proposed to bound Worst-Case Execution Time (WCET) estimates and help designers improve system resource usage. However, as technology scales down, system fault rates increase and timing behavior is affected. In this paper, we propose a Static Probabilistic Timing Analysis (SPTA) approach for caches with an evict-on-miss random replacement policy using a state-space modeling technique, taking into account the impact of faults on both the timing analysis and task WCETs. Different scenarios of transient and permanent faults are investigated. Results show that our proposed approach provides tight probabilistic WCET (pWCET) estimates and that, as the fault rate increases, the timing behavior of the system can be affected significantly.
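One standard ingredient of SPTA for evict-on-miss random replacement, independent of this paper's fault-aware state-space model, is that an access with reuse distance k hits with probability at least ((N-1)/N)^k in an N-line cache, and that per-access latency distributions are convolved into an execution-time distribution whose tail yields a pWCET estimate. The sketch below illustrates only that; the hit/miss latencies and reuse distances are hypothetical.

```python
# Illustrative sketch of a standard fault-free SPTA ingredient (not this paper's model).
from collections import defaultdict

def hit_probability(reuse_distance, n_lines):
    if reuse_distance is None:          # first access to the block: treat as a miss
        return 0.0
    return ((n_lines - 1) / n_lines) ** reuse_distance

def convolve(d1, d2):
    out = defaultdict(float)
    for t1, p1 in d1.items():
        for t2, p2 in d2.items():
            out[t1 + t2] += p1 * p2
    return dict(out)

HIT, MISS = 1, 10                       # hypothetical latencies (cycles)
reuse_distances = [None, 0, 2, 5, 1]    # one entry per memory access
N = 4                                   # cache lines

exec_time = {0: 1.0}
for k in reuse_distances:
    p_hit = hit_probability(k, N)
    exec_time = convolve(exec_time, {HIT: p_hit, MISS: 1.0 - p_hit})

# Complementary CDF: after adding exec_time[t], total is P(execution time >= t).
total = 0.0
for t in sorted(exec_time, reverse=True):
    total += exec_time[t]
    print(t, total)
```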
{"title":"Static probabilistic timing analysis in presence of faults","authors":"Chao-Wu Chen, L. Santinelli, J. Hugues, G. Beltrame","doi":"10.1109/SIES.2016.7509422","DOIUrl":"https://doi.org/10.1109/SIES.2016.7509422","url":null,"abstract":"Accurate timing prediction for software execution is becoming a problem due to the increasing complexity of computer architecture, and the presence of mixed-criticality workloads. Probabilistic caches were proposed to set bounds to Worst Case Execution Time (WCET) estimates and help designers improve system resource usage. However, as technology scales down, system fault rates increase and timing behavior is affected. In this paper, we propose a Static Probabilistic Timing Analysis (SPTA) approach for caches with evict-on-miss random replacement policy using a state space modeling technique, with consideration of fault impacts on both timing analysis and task WCET. Different scenarios of transient and permanent faults are investigated. Results show that our proposed approach provides tight probabilistic WCET (pWCET) estimates and as fault rate increases, the timing behavior of the system can be affected significantly.","PeriodicalId":185636,"journal":{"name":"2016 11th IEEE Symposium on Industrial Embedded Systems (SIES)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115795823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-05-23 | DOI: 10.1109/SIES.2016.7509412
FPGA hardware in the loop system for ERTMS-ETCS train equipment testing
N. Harb, C. Valderrama, Esteban Peláez, Alexandre Girardi
At the heart of the European Rail Traffic Management System (ERTMS) is the European Train Control System (ETCS). One major goal of the ERTMS-ETCS project is the standardization and unification of all train control and command systems in Europe. Hence, it is critical to have a reliable test bed for ease of validation and certification, ensuring the reliability of ERTMS-ETCS train equipment. In this context, we present a low-cost system comprised of several connected Heterogeneous System on Chip (HSoC) cards that are used for the purpose of certifying train equipment. The proposed system mimics real train behavior in operation. Train behavior scenarios are controlled by a train motion simulator running on a host PC, and train behavior data is fed from our system to the train equipment undergoing testing. An intermediate extension is used to guarantee real-time data transmission, since the simulator is not capable of doing so due to its high computation demands and communication latencies. In our intermediate extension, each HSoC card contains an NVIDIA Tegra 2 microprocessor chip, an Altera Cyclone II Field Programmable Gate Array (FPGA) chip, and several custom Application-Specific Integrated Circuit (ASIC) chips. Each card can be accessed by the simulator over a Gigabit Ethernet port, and all cards intercommunicate using a 1 Mbps back-plane serial bus. We show that, by using simulations as a starting point, our system is able to generate authentic train control signals in real time, 20 times faster than the software simulator, presenting the train equipment with a realistic test-case scenario that accurately models train behavior over a track.
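The actual interface between the HSoC cards and the equipment under test is not described here. Purely as a hypothetical illustration of the hardware-in-the-loop feeding idea, the sketch below paces pre-computed simulator samples out at a fixed rate over UDP; the address, port, packet layout, and 10 ms period are all invented for the sketch.

```python
# Hypothetical illustration only (the real system uses dedicated HSoC cards and a
# 1 Mbps back-plane bus): the core of a HIL feeder is a loop that releases
# pre-computed simulation samples at a fixed real-time rate.
import socket, struct, time

def feed(samples, period_s=0.010, addr=("127.0.0.1", 5000)):
    """samples: iterable of (position_m, speed_m_s) pairs from the motion simulator."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    next_release = time.monotonic()
    for position_m, speed_m_s in samples:
        sock.sendto(struct.pack("!dd", position_m, speed_m_s), addr)
        next_release += period_s
        time.sleep(max(0.0, next_release - time.monotonic()))

# Example: a constant-acceleration profile, pre-computed offline.
feed([(0.5 * 1.2 * (k * 0.01) ** 2, 1.2 * k * 0.01) for k in range(100)])
```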
{"title":"FPGA hardware in the loop system for ERTMS-ETCS train equipment testing","authors":"N. Harb, C. Valderrama, Esteban Peláez, Alexandre Girardi","doi":"10.1109/SIES.2016.7509412","DOIUrl":"https://doi.org/10.1109/SIES.2016.7509412","url":null,"abstract":"At the heart of the European Rail Train Management System (ERTMS) is the European Train Control System (ETCS). One major goal of the ERTMS-ETCS project is the standardization and unification of all train control and command systems in Europe. Hence, it is critical to have a reliable test bed for ease of validation and certification, enforcing the reliability of ERTMS-ETCS train equipment. In this context, we present a low-cost system comprised of several connected Heterogeneous System on Chip (HSoC) cards that are used for the purpose of certifying train equipment. The proposed system mimics real train behaviors in operation. Train behavior scenarios are controlled by a train motion simulator running on a host PC, and train behavior data is fed from our system to the train equipment undergoing testing. An intermediate extension is used to guarantee real-time data transmission since the simulator is not capable of doing so due to its high computation demands and communication latencies. In our intermediate extension, each HSoC card contains a NVIDIA Tegra 2 microprocessor chip, an Altera Cyclone II Field Programmable Gate Array (FPGA) chip and several custom Application Specific Integrated Circuit (ASIC) chips. Each card can be accessed by the simulator over a Gigabit Ethernet port, and all cards intercommunicate using a 1 Mbps back-plane serial bus. We show that by using simulations as a starting point, our system is able to generate authentic train control signals 20 times faster than the software simulator in real-time, presenting the train equipment with a real test case scenario accurately modelling train behavior over a track.","PeriodicalId":185636,"journal":{"name":"2016 11th IEEE Symposium on Industrial Embedded Systems (SIES)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123248753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-05-23 | DOI: 10.1109/SIES.2016.7509436
Towards an analysis for hierarchies of sporadic servers on Ethernet
Z. Iqbal, L. Almeida
Ethernet has been gaining momentum as the network technology supporting complex embedded systems. In this work-in-progress paper we revisit a previous proposal for using the FTT-SE protocol to provide hierarchical traffic scheduling with sporadic servers and thus support component-based design approaches. In particular, we carry out initial steps towards the timing analysis of such a composition, identifying the sources of interference and potential analytical models.
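As a generic ingredient of hierarchical schedulability analysis (not the FTT-SE-specific analysis this work is developing), the sketch below uses the linear supply bound function of a resource that guarantees a budget Q every period P, lsbf(t) = max(0, (Q/P)(t - 2(P - Q))), to check whether a single message of transmission time C and deadline D fits in a server. The numeric values are hypothetical.

```python
# Generic hierarchical-scheduling sketch (not the FTT-SE analysis under development).

def lsbf(t, Q, P):
    """Linear supply bound: guaranteed service in any window of length t."""
    return max(0.0, (Q / P) * (t - 2 * (P - Q)))

def fits(C, D, Q, P):
    """A lone message of transmission time C and deadline D fits if lsbf(D) >= C."""
    return lsbf(D, Q, P) >= C

# Hypothetical server: 2 ms of Ethernet time guaranteed every 5 ms.
print(fits(C=1.0, D=10.0, Q=2.0, P=5.0))   # True:  lsbf(10) = 1.6 >= 1.0
print(fits(C=2.0, D=10.0, Q=2.0, P=5.0))   # False: lsbf(10) = 1.6 <  2.0
```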
{"title":"Towards an analysis for hierarchies of sporadic servers on Ethernet","authors":"Z. Iqbal, L. Almeida","doi":"10.1109/SIES.2016.7509436","DOIUrl":"https://doi.org/10.1109/SIES.2016.7509436","url":null,"abstract":"Ethernet has been gaining momentum as the network technology supporting complex embedded systems. In this work-in-progress paper we recover a previous proposal for using the FTT-SE protocol to provide hierarchical traffic scheduling using sporadic servers and thus support component-based design approaches. In particular, we carry out initial steps towards the timing analysis of such composition, identifying the sources of interference and potential analytical models.","PeriodicalId":185636,"journal":{"name":"2016 11th IEEE Symposium on Industrial Embedded Systems (SIES)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123270088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-04-29 | DOI: 10.1109/SIES.2016.7509429
Natural interpretation of UML/MARTE diagrams for system requirements specification
A. Khan, F. Mallet, M. Rashid
To verify embedded systems early in the design stages, we need formal means of requirements specification that remain as close as possible to natural-language interpretation, away from the lower ESL/RTL levels. This paper proposes to contribute to the Formal Specification Level (FSL) by specifying natural-language requirements graphically in the form of temporal patterns. Standard modeling artifacts such as UML and MARTE are used to provide formal semantics for these graphical models, eliminating ambiguity in the specifications and enabling automatic design verification at different abstraction levels using these patterns.
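As an illustration of what a temporal requirement pattern expresses (not the UML/MARTE semantics defined in the paper), the sketch below checks a bounded-response pattern, "every request is followed by a grant within d time units", over a hypothetical timed event trace.

```python
# Illustration only (not the paper's UML/MARTE formalization): a trace checker for
# the bounded-response pattern over (timestamp, event) pairs.

def bounded_response(trace, trigger, response, d):
    """trace: chronologically sorted list of (timestamp, event) pairs."""
    pending = []                              # timestamps of unanswered triggers
    for t, event in trace:
        if pending and t - pending[0] > d:
            return False                      # oldest trigger expired unanswered
        if event == trigger:
            pending.append(t)
        elif event == response and pending:
            pending.pop(0)                    # answers the oldest pending trigger
    return True                               # triggers still pending at trace end are not judged

trace = [(0, "request"), (3, "grant"), (10, "request"), (18, "grant")]
print(bounded_response(trace, "request", "grant", d=5))   # False: second grant is 3 units late
```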
{"title":"Natural interpretation of UML/MARTE diagrams for system requirements specification","authors":"A. Khan, F. Mallet, M. Rashid","doi":"10.1109/SIES.2016.7509429","DOIUrl":"https://doi.org/10.1109/SIES.2016.7509429","url":null,"abstract":"To verify embedded systems early in the design stages, we need formal ways to requirements specification which can be as close as possible to natural language interpretation, away from the lower ESL/RTL levels. This paper proposes to contribute to the FSL (Formal Specification Level) by specifying natural language requirements graphically in the form of temporal patterns. Standard modeling artifacts like UML and MARTE are used to provide formal semantics of these graphical models allowing to eliminate ambiguity in specifications and automatic design verification at different abstraction levels using these patterns.","PeriodicalId":185636,"journal":{"name":"2016 11th IEEE Symposium on Industrial Embedded Systems (SIES)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130819593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}