V. Winter, Jonathan Guerrero, Carl Reinke, James T. Perry
JVM-based processors used in embedded systems are often scaled-back versions of the standard JVM that do not support the full set of Java bytecodes and native methods assumed by a JVM. As a result, code bases such as Java libraries must be migrated in order to make them suitable for execution on the embedded JVM-based processor. This paper describes Monarch, a high-assurance Java-to-Java (J2j) source-code migrator that we are developing to assist such code migrations.
{"title":"Monarch: A High-Assurance Java-to-Java (J2j) Source-Code Migrator","authors":"V. Winter, Jonathan Guerrero, Carl Reinke, James T. Perry","doi":"10.1109/HASE.2011.30","DOIUrl":"https://doi.org/10.1109/HASE.2011.30","url":null,"abstract":"JVM-based processors used in embedded systems are often scaled back versions of the standard JVM which do not support the full set of Java byte codes and native methods assumed by a JVM. As a result, code bases such as Java libraries must be migrated in order make them suitable for execution on the embedded JVM-based processor. This paper describes Monarch, a high-assurance Java-to-java (J2j) source code migrator that we are developing to assist such code migrations.","PeriodicalId":403140,"journal":{"name":"2011 IEEE 13th International Symposium on High-Assurance Systems Engineering","volume":"253 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134185630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Service Oriented Architectures (SOAs) are increasingly being used to support the information infrastructures of organizations. SOAs are dynamic and evolve after deployment in order to adapt to changes in requirements and infrastructure. Consequently, traditional validation approaches based on offline testing conducted before deployment are no longer adequate, demanding new techniques that allow testing the SOA during its whole lifecycle. In this paper we propose a SOA testing approach based on a composite service that is able to trace SOA evolution and automatically test the various services according to specific testing policies. The paper describes the architecture of the testing service and presents a concrete implementation focused on robustness testing. Results from a case study demonstrate the effectiveness of the proposed approach in discovering and testing the robustness of SOA services.
{"title":"A Testing Service for Lifelong Validation of Dynamic SOA","authors":"A. Ceccarelli, M. Vieira, A. Bondavalli","doi":"10.1109/HASE.2011.18","DOIUrl":"https://doi.org/10.1109/HASE.2011.18","url":null,"abstract":"Service Oriented Architectures (SOAs) are increasingly being used to support the information infrastructures of organizations. SOAs are dynamic and evolve after deployment in order to adapt to changes in the requirements and infrastructure. Consequently, traditional validation approaches based on offline testing conducted before deployment are not adequate anymore, demanding for new techniques that allow testing the SOA during its whole lifecycle. In this paper we propose a SOA testing approach based on a composite service that is able to trace SOA evolution and automatically test the various services according to specific testing policies. The paper describes the architecture of the testing service and presents a concrete implementation focused on robustness testing. Results from a case study demonstrate the effectiveness of the proposed approach in discovering and testing the robustness of SOA services.","PeriodicalId":403140,"journal":{"name":"2011 IEEE 13th International Symposium on High-Assurance Systems Engineering","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126083098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tushar Deshpande, P. Katsaros, Stylianos Basagiannis, S. Smolka
The DNS Bandwidth Amplification Attack (BAA) is a distributed denial-of-service attack in which a network of computers floods a DNS server with responses to requests that have never been made. Amplification enters into the attack by virtue of the fact that a small 60-byte request can be answered by a substantially larger response of 4,000 bytes or more in size. We use the PRISM probabilistic model checker to introduce a Continuous Time Markov Chain model of the DNS BAA and three recently proposed countermeasures, and to perform an extensive cost-benefit analysis of the countermeasures. Our analysis, which is applicable to both DNS and DNSSec (a security extension of DNS), is based on objective metrics that weigh the benefits for a server in terms of the percentage increase in the processing of legitimate packets against the cost incurred by incorrectly dropping legitimate traffic. The results we obtain, gleaned from more than 450 PRISM runs, demonstrate significant differences between the countermeasures as reflected by their respective net benefits. Our results also reveal that DNSSec is more vulnerable than DNS to a BAA attack, and, relatedly, DNSSec derives significantly less benefit from the countermeasures.
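The amplification factor described in the abstract can be pictured with a quick back-of-the-envelope computation. This is only an illustrative sketch using the byte sizes quoted above, not part of the paper's PRISM model:

```python
# Amplification factor of the DNS BAA, using the sizes quoted in the abstract.
REQUEST_BYTES = 60      # small spoofed DNS request sent by the attacker
RESPONSE_BYTES = 4000   # large response delivered to the victim (DNSSec responses can be larger)

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bytes of attack traffic delivered per byte the attacker sends."""
    return response_bytes / request_bytes

if __name__ == "__main__":
    factor = amplification_factor(REQUEST_BYTES, RESPONSE_BYTES)
    print(f"amplification: {factor:.1f}x")  # roughly 66.7x for the quoted sizes
```

The ratio makes clear why a botnet of modest bandwidth can saturate a much larger link, which is the threat the countermeasures in the paper are evaluated against.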
{"title":"Formal Analysis of the DNS Bandwidth Amplification Attack and Its Countermeasures Using Probabilistic Model Checking","authors":"Tushar Deshpande, P. Katsaros, Stylianos Basagiannis, S. Smolka","doi":"10.1109/HASE.2011.57","DOIUrl":"https://doi.org/10.1109/HASE.2011.57","url":null,"abstract":"The DNS Bandwidth Amplification Attack (BAA) is a distributed denial-of-service attack in which a network of computers floods a DNS server with responses to requests that have never been made. Amplification enters into the attack by virtue of the fact that a small 60-byte request can be answered by a substantially larger response of 4,000 bytes or more in size. We use the PRISM probabilistic model checker to introduce a Continuous Time Markov Chain model of the DNS BAA and three recently proposed countermeasures, and to perform an extensive cost-benefit analysis of the countermeasures. Our analysis, which is applicable to both DNS and DNSSec (a security extension of DNS), is based on objective metrics that weigh the benefits for a server in terms of the percentage increase in the processing of legitimate packets against the cost incurred by incorrectly dropping legitimate traffic. The results we obtain, gleaned from more than 450 PRISM runs, demonstrate significant differences between the countermeasures as reflected by their respective net benefits. Our results also reveal that DNSSec is more vulnerable than DNS to a BAA attack, and, relatedly, DNSSec derives significantly less benefit from the countermeasures.","PeriodicalId":403140,"journal":{"name":"2011 IEEE 13th International Symposium on High-Assurance Systems Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131055330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we present a unifying approach to specifying and verifying specification-oriented XML constraints. The formal tree model and the XML constraint logic are developed to describe the XML documents and constraints, respectively. The XML constraint logic, as an extension of first-order logic, is constructed and interpreted in the framework of our formal tree model, where the node domain and value domain for XML are rigorously separated. Furthermore, an effective algorithm is given to establish the satisfaction of the logic formulas with respect to the corresponding tree model. We implement a tool and carry out experiments on standard XML-based specifications from industry, such as WS-BPEL, WS-CDL, and WSDL. The experimental results show that our approach is effective in practice.
{"title":"A Unifying Approach to Validating Specification-Oriented XML Constraints","authors":"Yongxin Zhao, Zheng Wang, Hao Xiao, Jing Ping, G. Pu, Jifeng He, Huibiao Zhu","doi":"10.1109/HASE.2011.28","DOIUrl":"https://doi.org/10.1109/HASE.2011.28","url":null,"abstract":"In this paper, we present a unifying approach to specifying and verifying specification-oriented XML constraints. The formal tree model and the XML constraint logic are developed to describe the XML documents and constraints respectively. The XML constraint logic, as an extension of the first order logic, is constructed and interpreted in the framework of our formal tree model, where the node domain and value domain for XML are separated rigorously. Furthermore, an effective algorithm is given to establish the satisfaction of the logic formulas with respect to the corresponding tree model. We implement a tool and the experiments are carried out for the standard XMLbased specifications from industry, such as WS-BPEL, WS-CDL, and WSDL. The experimental results show that our approach is effective in practice.","PeriodicalId":403140,"journal":{"name":"2011 IEEE 13th International Symposium on High-Assurance Systems Engineering","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124054734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The architecture is the basic structure of every system. The system architect is responsible for ensuring that it fits the system requirements, even if these requirements change in response to new conditions during the development process. Our approach defines a model-driven process for the architect to validate the system architecture against the system requirements, and it supports the architect in analysing the impacts of requirements changes.
{"title":"Model Driven Validation of System Architectures","authors":"A. Pflüger, Wolfgang Golubski, Stefan Queins","doi":"10.1109/HASE.2011.46","DOIUrl":"https://doi.org/10.1109/HASE.2011.46","url":null,"abstract":"The architecture is the basic structure of every system. The system architect is responsible for ensuring that it fits to the system requirements even if these requirements change according to new conditions during development process. Our approach defines a model driven process for the architect to validate system architecture against system requirements and it supports the architect in analysing the impacts of requirements changes.","PeriodicalId":403140,"journal":{"name":"2011 IEEE 13th International Symposium on High-Assurance Systems Engineering","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129344100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In ZigBee, router-capable devices are restricted in the number of devices they can accept as children. A router-capable device cannot allow any new device to join as a child if it has reached the maximum number of children or the depth limit. If a device cannot join the network, it becomes isolated from the network and turns into an orphan node even though address space is still available in the network. The orphan problem becomes worse when the topology of the network changes dynamically. In this paper we propose an autonomous online expansion technology for ZigBee networks that lets router devices share available address space in order to connect the maximum number of devices. Our simulation results show that the proposed online expansion technology significantly reduces the number of orphan nodes in the network.
{"title":"Autonomous Online Expansion Technology for ZigBee Wireless Sensor Networks","authors":"M. Haque, Fan Wei, T. Gouda, Xiaodong Lu, K. Mori","doi":"10.1109/HASE.2011.55","DOIUrl":"https://doi.org/10.1109/HASE.2011.55","url":null,"abstract":"In Zig Bee, the router capable devices have restriction to accept a number of devices as children devices. A router capable device can not allow any new device to join as a child device if it reaches to the maximum capacity of children or depth limit. If a device can not join the network, it isolates from the network and becomes an orphan node even though address spaces are available in the network. The orphan problem becomes worse when the topology of the network changes dynamically. In this paper we propose an autonomous online expansion technology for Zig Bee networks that shares available address spaces by router devices to connect maximum number of devices. Our simulation results show that the proposed online expansion technology significantly reduces the orphan nodes in the network.","PeriodicalId":403140,"journal":{"name":"2011 IEEE 13th International Symposium on High-Assurance Systems Engineering","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114539353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Access control systems rely on a variety of methods for authenticating legitimate users and preventing malicious ones from accessing the system. The most commonly used system is a simple username and password approach. This technology has been the de-facto standard for remote authentication applications. A username-password based system assumes that only the genuine users know their own credentials. However, breaching this type of system has become a common occurrence in today's age of social networks and modern computational devices. Once broken, the system will accept every authentication trial using compromised credentials until the breach is detected. In this paper, we explore certain aspects of utilizing keystroke dynamics in username-password based systems. We show that as users get habituated to typing their credentials, there is a significant reduction in the variance of the keystroke patterns. This trend is more pronounced for long and complex passwords as opposed to short dictionary based passwords. We also study the time window necessary to perceive habituation in user typing patterns. Furthermore, we show that habituation plays a key role in classification of genuine login attempts by reducing the equal error rate (EER) over time. Finally, we explore an authentication scheme that employs the security of complex passwords and keystroke dynamics.
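The variance reduction the authors observe can be pictured with a toy computation. The timing samples below are purely illustrative, not data from the study:

```python
import statistics

# Hypothetical hold times (ms) for one key, early vs. late in a practice period.
# Illustrative numbers only: as the user habituates, timings cluster more tightly.
early_sessions = [112, 95, 140, 88, 130, 101, 125]
late_sessions = [104, 101, 99, 103, 100, 102, 98]

def timing_variance(samples):
    """Sample variance of keystroke timings; lower variance suggests habituation."""
    return statistics.variance(samples)

early_var = timing_variance(early_sessions)
late_var = timing_variance(late_sessions)
assert late_var < early_var  # habituation narrows the timing distribution
```

A tighter per-user distribution is what makes genuine attempts easier to separate from impostor attempts, which is the mechanism behind the falling equal error rate reported in the paper.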
{"title":"Effects of User Habituation in Keystroke Dynamics on Password Security Policy","authors":"Zahid A. Syed, Sean Banerjee, Qi Cheng, B. Cukic","doi":"10.1109/HASE.2011.16","DOIUrl":"https://doi.org/10.1109/HASE.2011.16","url":null,"abstract":"Access control systems rely on a variety of methods for authenticating legitimate users and preventing malicious ones from accessing the system. The most commonly used system is a simple username and password approach. This technology has been the de-facto standard for remote authentication applications. A username-password based system assumes that only the genuine users know their own credentials. However, breaching this type of system has become a common occurrence in today's age of social networks and modern computational devices. Once broken, the system will accept every authentication trial using compromised credentials until the breach is detected. In this paper, we explore certain aspects of utilizing keystroke dynamics in username-password based systems. We show that as users get habituated to typing their credentials, there is a significant reduction in the variance of the keystroke patterns. This trend is more pronounced for long and complex passwords as opposed to short dictionary based passwords. We also study the time window necessary to perceive habituation in user typing patterns. Furthermore, we show that habituation plays a key role in classification of genuine login attempts by reducing the equal error rate (EER) over time. Finally, we explore an authentication scheme that employs the security of complex passwords and keystroke dynamics.","PeriodicalId":403140,"journal":{"name":"2011 IEEE 13th International Symposium on High-Assurance Systems Engineering","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125326780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mazen El Maarabani, A. Cavalli, Iksoon Hwang, Fatiha Zaïdi
Access control policies are the key point for secure interaction in a business community. In general, an information system has to include an interoperability access control security policy to regulate access from other systems to its resources. The security policy specifies a set of rules that defines the privileges of any subject accessing the information system's resources. In this paper we provide an approach to verify the correctness of context-based interoperability access control security policies that are integrated in a system model. Security rules are initially described using the organization-to-organization (O2O) model. We first propose an approach to transform O2O security rules into Linear Temporal Logic (LTL). In order to instantiate the LTL formulae from a set of O2O security rules, we provide a mapping between the elements of an O2O security rule and the elements of the functional model in which the security rules are integrated. The resulting LTL formulae are used to verify the correctness of the security rules by model checking.
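The general flavor of checking a security rule expressed in temporal logic can be sketched in miniature. The following is a toy finite-trace check of an invariant of the shape G(access → logged); it is not the paper's O2O-to-LTL translation, and the propositions `access` and `logged` are hypothetical names chosen for illustration:

```python
# Toy finite-trace check of an LTL-style invariant G(antecedent -> consequent):
# every state in which the antecedent holds must also satisfy the consequent.
# A state is modeled as a set of atomic propositions that hold in it.
def holds_globally(trace, antecedent, consequent):
    """True iff every state of `trace` containing `antecedent` also contains `consequent`."""
    return all(consequent in state for state in trace if antecedent in state)

# A compliant trace: every access is logged.
trace_ok = [{"idle"}, {"access", "logged"}, {"idle"}, {"access", "logged"}]
# A violating trace: an access occurs without being logged.
trace_bad = [{"idle"}, {"access"}]

assert holds_globally(trace_ok, "access", "logged")
assert not holds_globally(trace_bad, "access", "logged")
```

A real model checker explores all behaviors of the system model rather than a single trace, but the pass/fail verdict per rule has this same character.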
{"title":"Verification of Interoperability Security Policies by Model Checking","authors":"Mazen El Maarabani, A. Cavalli, Iksoon Hwang, Fatiha Zaïdi","doi":"10.1109/HASE.2011.17","DOIUrl":"https://doi.org/10.1109/HASE.2011.17","url":null,"abstract":"Access control policies are the key point for a secured interaction in business community. In general, an information system has to include an interoperability access control security policy to regulate the access from other systems to its resources. The security policy specifies a set of rules that defines the privileges of any subject accessing to the information system resources. In this paper we provide an approach to verify the correctness of contextual based interoperability access control security policies which are integrated in a system model. Security rules are initially described using the organization to organization model (O2O). We first propose an approach to transform O2O security rules to the well known Linear Temporal Logic (LTL). In order to instantiate the LTL formulae from a set of O2O security rules, we provided a mapping between the elements of the O2O security rule and the elements of the functional model in which the security rules are integrated. The resulted LTL formulae are used to verify the correctness of the security rules by model checking.","PeriodicalId":403140,"journal":{"name":"2011 IEEE 13th International Symposium on High-Assurance Systems Engineering","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124946363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Safety-critical embedded systems often need to undergo a rigorous certification process to ensure that the safety risks associated with the use of the systems are adequately mitigated. Interfaces between software and hardware components (SW/HW interfaces) play a fundamental role in these systems by linking the systems' control software to either the physical hardware components or to a hardware abstraction layer. Consequently, safety certification of embedded systems necessarily has to cover the SW/HW interfaces used in these systems. In this paper, we describe a Model Driven Engineering (MDE) approach based on the SysML language, targeted at facilitating the certification of SW/HW interfaces in embedded systems. Our work draws on our experience with maritime and energy systems, but the work should also apply to a broader set of domains, e.g., the automotive sector, where similar design principles are used for (SW/HW) interface design. Our approach leverages our previous work on the development of SysML-based modeling and analysis techniques for safety-critical systems. Specifically, we tailor the methodology developed in our previous work to the development of safety-critical interfaces, and provide step-by-step and practical guidelines aimed at providing the evidence necessary for arguing that the safety-related requirements of an interface are properly addressed by its design. We describe an application of our proposed guidelines to a representative safety-critical interface in the maritime and energy domain.
{"title":"Using SysML for Modeling of Safety-Critical Software-Hardware Interfaces: Guidelines and Industry Experience","authors":"M. Sabetzadeh, S. Nejati, L. Briand, A. H. Mills","doi":"10.1109/HASE.2011.23","DOIUrl":"https://doi.org/10.1109/HASE.2011.23","url":null,"abstract":"Safety-critical embedded systems often need to undergo a rigorous certification process to ensure that the safety risks associated with the use of the systems are adequately mitigated. Interfaces between software and hardware components (SW/HW interfaces) play a fundamental role in these systems by linking the systems' control software to either the physical hardware components or to a hardware abstraction layer. Subsequently, safety certification of embedded systems necessarily has to cover the SW/HW interfaces used in these systems. In this paper, we describe a Model Driven Engineering (MDE) approach based on the SysML language, targeted at facilitating the certification of SW/HW interfaces in embedded systems. Our work draws on our experience with maritime and energy systems, but the work should also apply to a broader set of domains, e.g., the automotive sector, where similar design principles are used for (SW/HW) interface design. Our approach leverages our previous work on the development of SysML-based modeling and analysis techniques for safety-critical systems. Specifically, we tailor the methodology developed in our previous work to the development of safety-critical interfaces, and provide step-by-step and practical guidelines aimed at providing the evidence necessary for arguing that the safety-related requirements of an interface are properly addressed by its design. We describe an application of our proposed guidelines to a representative safety-critical interface in the maritime and energy domain.","PeriodicalId":403140,"journal":{"name":"2011 IEEE 13th International Symposium on High-Assurance Systems Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130883744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrated Modular Avionics (IMA) architectures have been defined for sharing communication and computation resources. The aim of this paper is to evaluate temporal consistency properties of functions implemented on IMA platforms. More specifically, the two contributions are: (1) a modeling approach for IMA platforms based on the tagged signal model and an abstraction of the network, and (2) the definition of two evaluation methods for temporal consistency properties. The industrial applicability of the method is demonstrated on an Airbus A380-like platform. We also discuss the significance of the over-approximations induced by the network abstraction.
{"title":"Worst Case Temporal Consistency in Integrated Modular Avionics Systems","authors":"M. Lauer, Jérôme Ermont, F. Boniol, C. Pagetti","doi":"10.1109/HASE.2011.48","DOIUrl":"https://doi.org/10.1109/HASE.2011.48","url":null,"abstract":"Integrated Modular Avionics (IMA) architectures have been defined for sharing communication and computation resources. The aim of this paper is to evaluate temporal consistency properties of functions implemented on IMA platforms. More specifically, the two contributions are : (1) a modeling approach for IMA platforms based on the tagged signal model and an abstraction of the network, (2) the definition of two evaluation methods for temporal consistency properties. The industrial applicability of the method is demonstrated on an Airbus A380-like platform. We also discuss the significance of the over-approximations induced by the network abstraction.","PeriodicalId":403140,"journal":{"name":"2011 IEEE 13th International Symposium on High-Assurance Systems Engineering","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133035872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}