Mobile Ad hoc Networks (MANETs) are networks formed dynamically by mobile nodes without the support of prior stationary infrastructure. The essential features of such networks are local broadcast, mobility, and probabilistic behavior. In our earlier work, we proposed the pw-calculus to formally model and reason about MANETs from a group probabilistic perspective, in which a MANET node can locally broadcast messages to a group of nodes within its physical transmission range with a certain probability. The group probabilities depend on the network topology, which can evolve with the mobility of nodes. In this paper, to capture the behavioral equivalence of networks, we investigate structural congruence and refine the operational semantics. Moreover, we define the notion of open bisimulation and prove it to be a congruence relation. Based on this, we discuss several nontrivial properties of MANETs, such as mobile node equivalence and replacement. Finally, we illustrate our calculus with a case study and use it to analyze the probability of a transmission along different routes.
{"title":"A Calculus for Mobile Ad Hoc Networks from a Group Probabilistic Perspective","authors":"Si Liu, Yongxin Zhao, Huibiao Zhu, Qin Li","doi":"10.1109/HASE.2011.13","DOIUrl":"https://doi.org/10.1109/HASE.2011.13","url":null,"abstract":"Mobile Ad hoc Networks (MANETs) are networks dynamically formed by mobile nodes without the support of prior stationary infrastructures. The essential features of such a network are local broadcast, mobility and probability. In our earlier work, we proposed the pw-calculus to formally model and reason about MANTEs from a group probabilistic perspective, in which a MANET node can locally broadcast messages to a group of nodes within its physical transmission range with a certain probability. The group probabilities depend on the network topology which can evolve with the mobility of nodes. In this paper, to capture the behavior equivalence of networks, the structural congruence is investigated and the operational semantics is refined. Moreover, we define the notion of open bisimulation and prove it to be a congruence relation. Based on this, we discuss several nontrivial properties of MANETs such as mobile node equivalence and replacement. Finally, we by a case study illustrate our calculus and use it to analyze the probability of a transmission via routines.","PeriodicalId":403140,"journal":{"name":"2011 IEEE 13th International Symposium on High-Assurance Systems Engineering","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114697787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Service-oriented architecture (SOA) requires fault-tolerant implementation because the heterogeneous nature of services is likely to cause faults and failures. Focusing on runtime re-composition and exception-handling strategies for execution faults, we propose a recovery model for SOA using a Markov decision process (MDP). Various quality-of-service (QoS) criteria and possible recovery strategies can be incorporated into our model to determine the optimal policy, which entails cost optimization in service selection. We show how a typical SOA scenario can be translated into our model and how an optimal policy can be determined. Analytical results reveal the usefulness of our approach compared to considering service cost alone. We also analyze the rationale for the selection of the optimal policy.
{"title":"Modeling Recovery Strategies in Service-Oriented Architecture Using a Markov Decision Process","authors":"Dongeun Lee, Heonshik Shin, Eunjeong Park","doi":"10.1109/HASE.2011.25","DOIUrl":"https://doi.org/10.1109/HASE.2011.25","url":null,"abstract":"Service-oriented architecture (SOA) requires fault-tolerant implementation because the heterogeneous nature of services is likely to cause faults and failures. Focusing on the runtime re-composition and exception handling strategies for execution faults, we propose a recovery model in SOA using a Markov decision process (MDP). Various ¡®quality of service' (QoS) criteria and possible recovery strategies can be incorporated into our model to determine the optimal policy, which entails cost optimization in service selection. We show how a typical SOA scenario can be translated into our model and how an optimal policy can be determined. Analytical results reveal the usefulness of our approach as compared to sole consideration of service cost. We also analyze the rationale for the selection of the optimal policy.","PeriodicalId":403140,"journal":{"name":"2011 IEEE 13th International Symposium on High-Assurance Systems Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127442806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An ocean turbine extracts the kinetic energy from ocean currents to generate electricity. Machine Condition Monitoring (MCM) / Prognostic Health Monitoring (PHM) systems allow for self-checking and automated fault detection, and are integral in the construction of a highly reliable ocean turbine. This paper presents an onshore test platform for an ocean turbine as well as a case study showing how machine learning can be used to detect changes in the operational state of this plant based on its vibration signals. In the case study, seven widely used machine learners are trained on experimental data gathered from the test platform, a dynamometer, to detect changes in the machine's state. The classification models generated by these classifiers are being considered as possible components of the state detection module of an MCM/PHM system for ocean turbines, and would be used for fault prediction. Experimental results presented here show the effectiveness of decision tree and random forest learners in distinguishing between faulty and normal states based on vibration data preprocessed by a wavelet transform.
{"title":"A Dynamometer for an Ocean Turbine Prototype: Reliability through Automated Monitoring","authors":"Janell Duhaney, T. Khoshgoftaar, J. Sloan, B. Alhalabi, P. Beaujean","doi":"10.1109/HASE.2011.61","DOIUrl":"https://doi.org/10.1109/HASE.2011.61","url":null,"abstract":"An ocean turbine extracts the kinetic energy from ocean currents to generate electricity. Machine Condition Monitoring(MCM) / Prognostic Health Monitoring (PHM) systems allow for self-checking and automated fault detection, and are integral in the construction of a highly reliable ocean turbine. This paper presents an onshore test platform for an ocean turbine as well as a case study showing how machine learning can be used to detect changes in the operational state of this plant based on its vibration signals. In the case study, seven widely used machine learners a retrained on experimental data gathered from the test platform, a dynamometer, to detect changes in the machine'sstate. The classification models generated by these classifiers are being considered as possible components of the state detection module of an MCM/PHM system for ocean turbines, and would be used for fault prediction. Experimental results presented here show the effectiveness of decision tree and random forest learners on distinguishing between faulty and normal states based on vibration data preprocessed by a wavelet transform.","PeriodicalId":403140,"journal":{"name":"2011 IEEE 13th International Symposium on High-Assurance Systems Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129489587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing adequate system operation contracts at the requirements level can be challenging. A specifier needs to ensure that a contract allows an operation to be invoked in different usage contexts without putting the system in an invalid state. Specifiers need usable rigorous analysis techniques that can help them develop more robust contracts, that is, contracts that are neither too restrictive nor too permissive. In this paper we describe an iterative approach to developing robust operation contracts. The approach supports rigorous robustness analysis of operation contracts against a set of scenarios that provide usage contexts for the operation. We illustrate the approach by developing a robust operation contract for a functional feature in a Location-aware Role-Based Access Control (LRBAC) model.
{"title":"Supporting Iterative Development of Robust Operation Contracts in UML Requirements Models","authors":"Wuliang Sun, R. France, I. Ray","doi":"10.1109/HASE.2011.43","DOIUrl":"https://doi.org/10.1109/HASE.2011.43","url":null,"abstract":"Developing adequate system operation contracts at the requirements level can be challenging. A specifier needs to ensure that a contract allows an operation to be invoked in different usage contexts without putting the system in an invalid state. Specifiers need usable rigorous analysis techniques that can help them develop more robust contracts, that is, contracts that are neither too restrictive nor too permissive. In this paper we describe an iterative approach to developing robust operation contracts. The approach supports rigorous robustness analysis of operation contracts against a set of scenarios that provide usage contexts for the operation. We illustrate the approach by developing a robust operation contract for a functional feature in a Location-aware Role-Based Access Control (LRBAC) model.","PeriodicalId":403140,"journal":{"name":"2011 IEEE 13th International Symposium on High-Assurance Systems Engineering","volume":"93 Pt A 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115786577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vibration signals are an important source of information for machine condition monitoring/prognostic health monitoring to ensure the reliability of ocean systems. Because they are waveforms, vibration data must be transformed into the frequency domain before they can be used to build classification and prediction models. One popular transformation is wavelet packet decomposition, a higher resolution variant of wavelet transformation. For wavelet packet decomposition, depth is an important parameter to control the maximum level of detail while minimizing the computational time when constructing and using the decomposition tree. Little guidance exists in the literature to assist researchers in choosing a depth, however. In this paper, we present a feature selection-based approach to determining the optimum depth for wavelet packet decomposition. First, the data is transformed using a very high depth, and all of the features are ordered based on their importance for predicting the class. Then, a depth which captures the most important features is chosen. Finally, a model is built using that depth. We show that a classification model built according to this procedure retains almost all of the accuracy of models built using a much deeper transform, while allowing for smaller depths and vastly fewer features.
{"title":"Using Feature Selection to Determine Optimal Depth for Wavelet Packet Decomposition of Vibration Signals for Ocean System Reliability","authors":"Randall Wald, T. Khoshgoftaar, J. Sloan","doi":"10.1109/HASE.2011.60","DOIUrl":"https://doi.org/10.1109/HASE.2011.60","url":null,"abstract":"Vibration signals are an important source of information for machine condition monitoring/prognostic health monitoring to ensure the reliability of ocean systems. Because they are waveforms, vibration data must be transformed into the frequency domain before they can be used to build classification and prediction models. One popular transformation is wavelet packet decomposition, a higher resolution variant of wavelet transformation. For wavelet packet decomposition, depth is an important parameter to control the maximum level of detail while minimizing the computational time when constructing and using the decomposition tree. Little guidance exists in the literature to assist researchers in choosing a depth, however. In this paper, we present a feature selection-based approach to determining the optimum depth for wavelet packet decomposition. First, the data is transformed using a very high depth, and all of the features are ordered based on their importance for predicting the class. Then, a depth which captures the most important features is chosen. Finally, a model is built using that depth. We show that a classification model built according to this procedure retains almost all of the accuracy of models built using a much deeper transform, while allowing for smaller depths and vastly fewer features.","PeriodicalId":403140,"journal":{"name":"2011 IEEE 13th International Symposium on High-Assurance Systems Engineering","volume":"315 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123679440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
New technologies typically involve innovative aspects that are not addressed by the existing normative standards and hence are not assessable through common certification procedures. To ensure that new technologies can be implemented in a safe and reliable manner, a specific kind of assessment is performed, which in many industries, e.g., the energy sector, is known as Technology Qualification (TQ). TQ aims at demonstrating with an acceptable level of confidence that a new technology will function within specified limits. Expert opinion plays an important role in TQ, both to identify the safety and reliability evidence that needs to be developed, and to interpret the evidence provided. Hence, it is crucial to apply a systematic process for eliciting expert opinions, and to use the opinions for measuring the satisfaction of a technology's safety and reliability objectives. In this paper, drawing on the concept of assurance cases, we propose a goal-based approach for TQ. The approach, which is supported by a software tool, enables analysts to quantitatively reason about the satisfaction of a technology's overall goals and further to identify the aspects that must be improved to increase goal satisfaction. The three main components enabling quantitative assessment are goal models, expert elicitation, and probabilistic simulation. We report on an industrial pilot study where we apply our approach for assessing a new offshore technology.
{"title":"Combining Goal Models, Expert Elicitation, and Probabilistic Simulation for Qualification of New Technology","authors":"M. Sabetzadeh, D. Falessi, L. Briand, Stefano Di Alesio, D. McGeorge, Vidar Åhjem, Jonas Borg","doi":"10.1109/HASE.2011.22","DOIUrl":"https://doi.org/10.1109/HASE.2011.22","url":null,"abstract":"New technologies typically involve innovative aspects that are not addressed by the existing normative standards and hence are not assessable through common certification procedures. To ensure that new technologies can be implemented in a safe and reliable manner, a specific kind of assessment is performed, which in many industries, e.g., the energy sector, is known as Technology Qualification (TQ). TQ aims at demonstrating with an acceptable level of confidence that a new technology will function within specified limits. Expert opinion plays an important role in TQ, both to identify the safety and reliability evidence that needs to be developed, and to interpret the evidence provided. Hence, it is crucial to apply a systematic process for eliciting expert opinions, and to use the opinions for measuring the satisfaction of a technology's safety and reliability objectives. In this paper, drawing on the concept of assurance cases, we propose a goal-based approach for TQ. The approach, which is supported by a software tool, enables analysts to quantitatively reason about the satisfaction of a technology's overall goals and further to identify the aspects that must be improved to increase goal satisfaction. The three main components enabling quantitative assessment are goal models, expert elicitation, and probabilistic simulation. We report on an industrial pilot study where we apply our approach for assessing a new offshore technology.","PeriodicalId":403140,"journal":{"name":"2011 IEEE 13th International Symposium on High-Assurance Systems Engineering","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124068958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The current trend in the silicon industry has been a steady migration towards Chip Multicore Processor (CMP) systems to harvest greater throughput. However, chip multicore processors exhibit higher soft-error rates, thereby degrading overall system reliability. Hence, engineers have been wary of using CMP architectures for safety-critical embedded real-time applications that require high reliability levels. The major users of these processors also dictate processor migration trends; with newer processor architectures, the older ones are destined to become obsolete. This paper compares typical safety-critical architectures and investigates the reliabilities of different CMP architectures. We present a fault-tolerance framework and detailed reliability analysis of fault-tolerant single-core and multicore based systems. The analysis results are then used to compare the reliability of CMP architectures with the corresponding reliability of single-processor architectures. Although a CMP system does suffer reliability degradation, its reliability can be enhanced by applying system-level dependability assurance and mitigation features. This enables CMP systems to be effectively deployed in critical applications.
{"title":"High-Assurance Reconfigurable Multicore Processor Based Systems","authors":"M. Peshave, F. Bastani, I. Yen","doi":"10.1109/HASE.2011.33","DOIUrl":"https://doi.org/10.1109/HASE.2011.33","url":null,"abstract":"The current trend in the silicon industry has been a steady migration towards Chip Multicore Processor (CMP) system to harvest more throughputs. However, chip multicore processors report higher values of soft errors, thereby degrading the overall system reliability. Hence, engineers have been wary of using CMP architectures for safety-critical embedded real-time system applications that require high reliability levels. The larger users of these processors also dictate the processor migration trends. With newer processor architectures, the older ones are destined to become obsolete. This paper compares typical safety-critical architectures and investigates the reliabilities of different CMP architectures. We present the fault tolerance framework and detailed reliability analysis of fault-tolerant single-core and multi-core based systems. The analysis results are then used to compare the reliability of CMP architectures with the corresponding reliability of single processor architectures. Although a CMP system does encounter degradation, by applying some system level dependability assurance mitigation features, its reliability can be enhanced. This enables CMP systems to be effectively deployed in critical applications.","PeriodicalId":403140,"journal":{"name":"2011 IEEE 13th International Symposium on High-Assurance Systems Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125894082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The development of systems based on embedded components is a challenging task because of the distributed, reactive and real-time nature of such systems. From a security point of view, embedded devices are basically systems owned by a certain entity and operated in a potentially hostile environment. Currently, a security engineering process for systems with embedded components that takes these considerations into account does not exist. This paper presents a process, which aims to support the embedded systems developer in integrating the security elements into the overall engineering process. In particular, the proposed process provides means to identify and to consistently and naturally manage security properties and requirements.
{"title":"A Security Modelling Framework for Systems of Embedded Components","authors":"A. Maña, J. Ruiz","doi":"10.1109/HASE.2011.21","DOIUrl":"https://doi.org/10.1109/HASE.2011.21","url":null,"abstract":"The development of systems based on embedded components is a challenging task because of the distributed, reactive and real-time nature of such systems. From a security point of view, embedded devices are basically systems owned by a certain entity and operated in a potentially hostile environment. Currently, a security engineering process for systems with embedded components that takes these considerations into account does not exist. This paper presents a process, which aims to support the embedded systems developer in integrating the security elements into the overall engineering process. In particular, the proposed process provides means to identify and to consistently and naturally manage security properties and requirements.","PeriodicalId":403140,"journal":{"name":"2011 IEEE 13th International Symposium on High-Assurance Systems Engineering","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132561988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software reviews are one of the most efficient quality assurance techniques in software engineering. They are required for the enhancement of the software quality in early phases of the development process and often used in development of safety critical systems. In the field of software engineering for Air Traffic Management (ATM) the standard DO-278/ED-109 requires the rigorous application of code reviews and fully traceable reporting of the results. This case study presents a process and an IDE-integrated tool that complies with the requirements of the standard.
{"title":"A Task-Based Code Review Process and Tool to Comply with the DO-278/ED-109 Standard for Air Traffic Managment Software Development: An Industrial Case Study","authors":"Mario Bernhart, Stefan Reiterer, Kilian Matt, Andreas Mauczka, T. Grechenig","doi":"10.1109/HASE.2011.54","DOIUrl":"https://doi.org/10.1109/HASE.2011.54","url":null,"abstract":"Software reviews are one of the most efficient quality assurance techniques in software engineering. They are required for the enhancement of the software quality in early phases of the development process and often used in development of safety critical systems. In the field of software engineering for Air Traffic Management (ATM) the standard DO-278/ED-109 requires the rigorous application of code reviews and fully traceable reporting of the results. This case study presents a process and an IDE-integrated tool that complies with the requirements of the standard.","PeriodicalId":403140,"journal":{"name":"2011 IEEE 13th International Symposium on High-Assurance Systems Engineering","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123850469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unlike test generation techniques, spectrum-based fault localization techniques have not been rigorously evaluated for their effectiveness in localizing different classes of faults. In this paper, we evaluate the effectiveness of the Tarantula fault localization technique. We state that the following three properties of a fault affect the effectiveness of localizing it: (1) accessibility, (2) original state failure condition, and (3) impact. Accessibility refers to how easy or hard it is to execute a faulty statement. It is measured by the size of the backward slice of the faulty statement. The original state failure condition is the condition that must be satisfied to create a local failure state upon executing the faulty statement. Impact refers to the fraction of the program that is affected by the execution of the faulty statement, measured by the size of the forward slice of the faulty statement. The results of our evaluation with the Siemens benchmark suite show that (1) original state failure condition based fault classes have no relationship with the effectiveness of localization, and (2) faults that are hard to access and have low impact are most effectively localized. These observations are consistent across random and branch coverage based test suites.
{"title":"On the Effectiveness of the Tarantula Fault Localization Technique for Different Fault Classes","authors":"A. Bandyopadhyay, Sudipto Ghosh","doi":"10.1109/HASE.2011.52","DOIUrl":"https://doi.org/10.1109/HASE.2011.52","url":null,"abstract":"Unlike test generation techniques, spectrum-based fault localization techniques have not been rigorously evaluated for their effectiveness in localizing different classes of faults. In this paper, we evaluate the effectiveness of the Tarantula fault localization technique. We state that the following three properties of a fault affect the effectiveness of localizing it: (1) accessibility, (2) original state failure condition, and (3) impact. Accessibility refers to how easy or hard it is to execute a faulty statement. It is measured by the size of the backward slice of the faulty statement. The original state failure condition is the condition that must be satisfied to create a local failure state upon executing the faulty statement. Impact refers to the fraction of the program that is affected by the execution of the faulty statement, measured by the size of the forward slice of the faulty statement. The results of our evaluation with the Siemens benchmark suite show that (1) original state failure condition based fault classes have no relationship with the effectiveness of localization, and (2) faults that are hard to access and have low impact are most effectively localized. These observations are consistent across random and branch coverage based test suites.","PeriodicalId":403140,"journal":{"name":"2011 IEEE 13th International Symposium on High-Assurance Systems Engineering","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122554733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}