Tool-based analytic techniques such as formal verification may be used to justify the quality, correctness and dependability of software involved in digital control systems. This paper reports on the development and application of a tool-based methodology whose purpose is the formal verification of freedom from intrinsic software faults related to dynamic memory management. The paper introduces the operational and research context in the power generation industry in which this work takes place. The theoretical framework and the tool at the cornerstone of the methodology are then presented. The paper also presents the practical aspects of the research: the software under analysis, experimental results and lessons learned. The results are promising, as the methodology scales well under the identified conditions of analysis and offers a number of perspectives that are currently under study in ongoing work.
{"title":"Formal Verification of Industrial Software with Dynamic Memory Management","authors":"S. Labbé, Arnaud Sangnier","doi":"10.1109/PRDC.2010.19","DOIUrl":"https://doi.org/10.1109/PRDC.2010.19","url":null,"abstract":"Tool-based analytic techniques such as formal verification may be used to justify the quality, correctness and dependability of software involved in digital control systems. This paper reports on the development and application of a tool-based methodology, the purpose of which is the formal verification of freedom from intrinsic software faults related to dynamic memory management. The paper introduces the operational and research context in the power generation industry, in which this work takes place. The theoretical framework and the tool at the cornerstone of the methodology are then presented. The paper also presents the practical aspects of the research: software under analysis, experimental results and lessons learned. The results are seen promising, as the methodology scales accurately in identified conditions of analysis, and has a number of perspectives which are currently under study in ongoing work.","PeriodicalId":382974,"journal":{"name":"2010 IEEE 16th Pacific Rim International Symposium on Dependable Computing","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123210190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intermittent faults are expected to be a great challenge in VLSI circuits. The complexity of manufacturing processes, which provokes residues and process variations, together with special wear-out mechanisms, may increase the presence of such faults. This work presents a case study of the effects of intermittent faults on the behavior of a commercial microcontroller. Using VHDL-based fault injection, particularly saboteurs, we have injected different intermittent fault models into the microcontroller buses, as they are critical locations that are potentially sensitive to intermittent faults. We have compared the impact and implementation feasibility of the fault models in order to select a representative and low-cost intermittent fault load. The applied methodology can be generalized to inject intermittent faults in other locations, such as registers and memory, and to validate the dependability of critical systems.
{"title":"Searching Representative and Low Cost Fault Models for Intermittent Faults in Microcontrollers: A Case Study","authors":"J. Gracia, D. Gil, J. Baraza, L. J. Saiz, P. Gil","doi":"10.1109/PRDC.2010.18","DOIUrl":"https://doi.org/10.1109/PRDC.2010.18","url":null,"abstract":"Intermittent faults are expected to be a great challenge in VLSI circuits. The complexity of manufacturing processes, provoking residues and process variations, and special wear out mechanisms, may increase the presence of such faults. This work presents a case study of the effects of intermittent faults on the behavior of a commercial micro controller. By using VHDL-based fault injection, particularly saboteurs, we have injected different intermittent fault models in the micro controller buses, as they are critical locations, potentially sensitive to intermittent faults. We have compared the impact and the feasibility of implementation of the fault models, in order to select a representative and low cost intermittent fault load. The applied methodology can be generalized to inject intermittent faults in other locations, such as registers and memory, and to validate the dependability of critical systems.","PeriodicalId":382974,"journal":{"name":"2010 IEEE 16th Pacific Rim International Symposium on Dependable Computing","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128183937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
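The saboteur-based injection described above can be sketched in software as a toy analogue (Python rather than the authors' VHDL; the class name, fault models and bus width are all illustrative, not taken from the paper):

```python
import random

class BusSaboteur:
    """Toy software analogue of a VHDL saboteur: intercepts values on a
    bus and corrupts them while an intermittent burst is active."""

    def __init__(self, start, duration, model="bit-flip", bit=0, seed=None):
        self.start = start           # cycle at which the burst begins
        self.end = start + duration  # cycle at which the burst ends
        self.model = model           # fault model applied during the burst
        self.bit = bit               # target bit for single-bit models
        self.rng = random.Random(seed)

    def pass_through(self, cycle, value):
        """Return the (possibly corrupted) bus value for this cycle."""
        if not (self.start <= cycle < self.end):
            return value             # burst inactive: bus unaffected
        if self.model == "bit-flip":
            return value ^ (1 << self.bit)
        if self.model == "stuck-at-0":
            return value & ~(1 << self.bit)
        if self.model == "stuck-at-1":
            return value | (1 << self.bit)
        if self.model == "indetermination":
            return self.rng.randrange(256)  # 8-bit bus takes a random value
        return value

# Inject a 3-cycle bit-flip burst on bit 2 of an 8-bit bus carrying 0b00001111.
sab = BusSaboteur(start=5, duration=3, model="bit-flip", bit=2)
trace = [sab.pass_through(c, 0b00001111) for c in range(10)]
```

Comparing such fault models then amounts to running the workload once per model and measuring how often the corrupted bus value propagates to a failure.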
T. Nakajima, Y. Kinebuchi, Alexandre Courbot, H. Shimada, Tsung-Han Lin, Hitoshi Mitake
In this paper, we propose a composition kernel, in which multiple OS kernels run on top of a very thin hardware abstraction layer. A composition kernel can reduce the engineering cost of developing an embedded system by reusing existing OS kernels and applications with minimal modification, without assuming special hardware support.
{"title":"Composition Kernel: A Multi-core Processor Virtualization Layer for Highly Functional Embedded Systems","authors":"T. Nakajima, Y. Kinebuchi, Alexandre Courbot, H. Shimada, Tsung-Han Lin, Hitoshi Mitake","doi":"10.1109/PRDC.2010.11","DOIUrl":"https://doi.org/10.1109/PRDC.2010.11","url":null,"abstract":"In this paper, we propose a composition kernel where multiple OS kernels are running on top of a very thin hardware abstraction layer. A composition kernel can reduce the engineering cost of developing an embedded system by reusing existing OS kernels and application with minimum modification without assuming special hardware supports.","PeriodicalId":382974,"journal":{"name":"2010 IEEE 16th Pacific Rim International Symposium on Dependable Computing","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133438619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R. Ghosh, Kishor S. Trivedi, V. Naik, Dong Seong Kim
Handling diverse client demands and managing unexpected failures without degrading performance are two key promises of a cloud-delivered service. However, evaluating cloud service quality becomes difficult as the scale and complexity of a cloud system increase. In a cloud environment, a service request from a user goes through a variety of provider-specific processing steps from the instant it is submitted until the service is fully delivered. Measurement-based evaluation of cloud service quality is expensive, especially if many configurations, workload scenarios, and management methods are to be analyzed. To overcome these difficulties, in this paper we propose a general analytic model-based approach for an end-to-end performability analysis of a cloud service. We illustrate our approach using an Infrastructure-as-a-Service (IaaS) cloud, where service availability and provisioning response delays are two key QoS metrics. A novelty of our approach lies in reducing the complexity of analysis by dividing the overall model into sub-models and then obtaining the overall solution by iteration over the individual sub-model solutions. In contrast to a single one-level monolithic model, our approach yields a high-fidelity model that is tractable and scalable. Our approach and underlying models can be readily extended to other types of cloud services and are applicable to public, private and hybrid clouds.
{"title":"End-to-End Performability Analysis for Infrastructure-as-a-Service Cloud: An Interacting Stochastic Models Approach","authors":"R. Ghosh, Kishor S. Trivedi, V. Naik, Dong Seong Kim","doi":"10.1109/PRDC.2010.30","DOIUrl":"https://doi.org/10.1109/PRDC.2010.30","url":null,"abstract":"Handling diverse client demands and managing unexpected failures without degrading performance are two key promises of a cloud delivered service. However, evaluation of a cloud service quality becomes difficult as the scale and complexity of a cloud system increases. In a cloud environment, service request from a user goes through a variety of provider specific processing steps from the instant it is submitted until the service is fully delivered. Measurement-based evaluation of cloud service quality is expensive especially if many configurations, workload scenarios, and management methods are to be analyzed. To overcome these difficulties, in this paper we propose a general analytic model based approach for an end-to-end performability analysis of a cloud service. We illustrate our approach using Infrastructure-as-a-Service (IaaS) cloud, where service availability and provisioning response delays are two key QoS metrics. A novelty of our approach is in reducing the complexity of analysis by dividing the overall model into sub-models and then obtaining the overall solution by iteration over individual sub-model solutions. In contrast to a single one-level monolithic model, our approach yields a high fidelity model that is tractable and scalable. Our approach and underlying models can be readily extended to other types of cloud services and are applicable to public, private and hybrid clouds.","PeriodicalId":382974,"journal":{"name":"2010 IEEE 16th Pacific Rim International Symposium on Dependable Computing","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133821429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
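The iteration over sub-model solutions can be sketched as a generic fixed-point loop (a minimal Python illustration of the idea; the two closed-form sub-models below are hypothetical placeholders, not the paper's availability and provisioning models):

```python
def solve_interacting_submodels(f, g, x0, tol=1e-9, max_iter=1000):
    """Fixed-point iteration over two interacting sub-models.
    g maps sub-model A's output to sub-model B's output, and f maps
    B's output back to A's; iteration stops when A's value stabilizes."""
    x = x0
    for _ in range(max_iter):
        y = g(x)         # solve sub-model B given A's current output
        x_new = f(y)     # re-solve sub-model A with B's result
        if abs(x_new - x) < tol:
            return x_new, y
        x = x_new
    raise RuntimeError("sub-model iteration did not converge")

# Toy coupling: an "availability" value a and a "blocking" value b that
# depend on each other (hypothetical closed forms for illustration only).
f = lambda b: 0.99 * (1 - 0.5 * b)   # availability drops as blocking rises
g = lambda a: 0.1 * (1 - a)          # blocking drops as availability rises
a, b = solve_interacting_submodels(f, g, x0=0.9)
```

The payoff of this decomposition is that each sub-model stays small enough to solve exactly, while the loop recovers the coupled behavior a monolithic model would capture.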
In self-organized networks, which possess highly decentralized and self-organized natures, neither the identity nor the number of processes is known to all participants at the beginning of the computation, because no central authority exists to initialize each participant with context information. Hence consensus, which is essential to solving the agreement problem, cannot be achieved in such networks in the same ways as in traditional fixed networks. To address this, a variant of the traditional consensus problem, called Consensus with Unknown Participants (CUP), was proposed in the literature; it relaxes the requirement for the original knowledge owned by every process about all participants in the computation. Correspondingly, the CUP problem considering process crashes was also introduced, called the Fault-Tolerant Consensus with Unknown Participants (FT-CUP) problem. In this paper, we propose a knowledge connectivity condition sufficient for solving the FT-CUP problem, which improves on the one proposed in our previous work.
{"title":"An Improved Knowledge Connectivity Condition for Fault-Tolerant Consensus with Unknown Participants","authors":"Jichiang Tsai, Che-Cheng Chang","doi":"10.1109/PRDC.2010.20","DOIUrl":"https://doi.org/10.1109/PRDC.2010.20","url":null,"abstract":"For self-organized networks that possess highly decentralized and self-organized natures, neither the identity nor the number of processes is known to all participants at the beginning of the computation because no central authority exists to initialize each participant with some context information. Hence, consensus, which is essential to solving the agreement problem, in such networks cannot be achieved in the ways for traditional fixed networks. To address this problem of Consensus with Unknown Participants (CUP), a variant of the traditional consensus problem was proposed in the literature, by relaxing the requirement for the original knowledge owned by every process about all participants in the computation. Correspondingly, the CUP problem considering process crashes was also introduced, called the Fault-Tolerant Consensus with Unknown Participants (FT-CUP) problem. In this paper, we propose a knowledge connectivity condition sufficient for solving the FT-CUP problem, which is improved from the one proposed in our previous work.","PeriodicalId":382974,"journal":{"name":"2010 IEEE 16th Pacific Rim International Symposium on Dependable Computing","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114613364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intermittent hardware faults are bursts of errors that last from a few CPU cycles to a few seconds. Recent studies have shown that intermittent fault rates are increasing due to technology scaling and are likely to be a significant concern in future systems. We study the impact of intermittent hardware faults on programs. A simulation-based fault-injection campaign shows that the majority of intermittent faults lead to program crashes. We build a crash model and a program model that represents the data dependencies in a fault-free execution of the program. We then use this model to glean information about when the program crashes and the extent of fault propagation. Empirical validation of our model using fault-injection experiments shows that it predicts almost all actual crash-causing intermittent faults, and for 93% of the considered faults the prediction is accurate to within 100 instructions. Further, the model is more than two orders of magnitude faster than equivalent fault-injection experiments performed with a microprocessor simulator.
{"title":"Modeling the Propagation of Intermittent Hardware Faults in Programs","authors":"L. Rashid, K. Pattabiraman, S. Gopalakrishnan","doi":"10.1109/PRDC.2010.52","DOIUrl":"https://doi.org/10.1109/PRDC.2010.52","url":null,"abstract":"Intermittent hardware faults are bursts of errors that last from a few CPU cycles to a few seconds. Recent studies have shown that intermittent fault rates are increasing due to technology scaling and are likely to be a significant concern in future systems. We study the impact of intermittent hardware faults in programs. A simulation-based fault-injection campaign shows that the majority of the intermittent faults lead to program crashes. We build a crash model and a program model that represents the data dependencies in a fault-free execution of the program. We then use this model to glean information about when the program crashes and the extent of fault propagation. Empirical validation of our model using fault-injection experiment shows that it predicts almost all actual crash-causing intermittent faults, and in 93% of the considered faults the prediction is accurate within 100 instructions. Further, the model is found to be more than two orders of magnitude faster than equivalent fault-injection experiments performed with a microprocessor simulator.","PeriodicalId":382974,"journal":{"name":"2010 IEEE 16th Pacific Rim International Symposium on Dependable Computing","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127720382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
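The core of such a propagation model can be sketched as a forward walk over the program's data-dependence graph (a simplified Python illustration of the idea; the graph, the notion of "crash point", and all instruction names are hypothetical, not the paper's exact crash model):

```python
from collections import deque

def propagate_fault(deps, crash_points, faulty_instr):
    """Follow data dependences forward from a faulty instruction and
    report whether the error reaches a crash-causing use.

    deps: dict mapping instruction id -> list of dependent instruction ids
    crash_points: instructions whose corruption crashes the program
                  (e.g. address computations feeding a load/store)
    Returns (crashes, instructions_touched)."""
    seen = {faulty_instr}
    queue = deque([faulty_instr])
    while queue:
        instr = queue.popleft()
        if instr in crash_points:
            return True, len(seen)       # corrupted value used as e.g. an address
        for succ in deps.get(instr, []):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return False, len(seen)              # error propagated but stayed benign

# Toy dependence graph: a fault in i1 reaches the address computation i4.
deps = {"i1": ["i2", "i3"], "i2": ["i4"], "i3": [], "i4": ["i5"]}
crashed, touched = propagate_fault(deps, crash_points={"i4"}, faulty_instr="i1")
```

Because the walk only touches the dependence graph rather than simulating the microarchitecture, it is easy to see where the reported speedup over full fault-injection simulation comes from.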
H. Fujita, Motohiko Matsuda, T. Maeda, Shin'ichi Miura, Y. Ishikawa
P-Bus, a new programming interface layer for safe OS kernel extensions, is proposed. P-Bus introduces a new programming interface on top of the Linux kernel in order to give formal specifications to the interface and to improve the portability of extensions. New extensions, called P-Components, are verified with the model checker MKencha to check whether a component complies with the rules that must be obeyed to implement extensions properly. A network driver has been implemented as a P-Component and verified with MKencha, which found two bugs in the component.
{"title":"P-Bus: Programming Interface Layer for Safe OS Kernel Extensions","authors":"H. Fujita, Motohiko Matsuda, T. Maeda, Shin'ichi Miura, Y. Ishikawa","doi":"10.1109/PRDC.2010.31","DOIUrl":"https://doi.org/10.1109/PRDC.2010.31","url":null,"abstract":"P-Bus, a new programming interface layer for safe kernel extensions is proposed. P-Bus introduces a new programming interface on top of the Linux kernel in order to give formal specifications to the interface, and to improve portability of extensions. New extensions, called P-Components, are verified with a model checker MKencha to see whether a component is compliant with rules which should be obeyed to implement extensions properly. A network driver has been implemented as a P-Component and verified with MKencha. MKencha has found two bugs in the component.","PeriodicalId":382974,"journal":{"name":"2010 IEEE 16th Pacific Rim International Symposium on Dependable Computing","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126521777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep-submicron semiconductor technologies increase parameter variations. The increase in parameter variations requires excessive design margins, which have a serious impact on performance and power consumption. In order to eliminate these excessive margins, we are investigating the canary flip-flop (FF). A canary FF requires additional circuitry consisting of an FF and a comparator, and thus suffers from a large area overhead. In order to reduce this overhead, this paper proposes and evaluates a selective replacement method for canary FFs. In the case of Renesas's M32R processor, an area overhead of 2% is achieved.
{"title":"A Replacement Strategy for Canary Flip-Flops","authors":"Yuji Kunitake, Toshinori Sato, H. Yasuura","doi":"10.1109/PRDC.2010.46","DOIUrl":"https://doi.org/10.1109/PRDC.2010.46","url":null,"abstract":"The deep sub micron semiconductor technologies increase parameter variations. The increase in parameter variations requires excessive design margin that has serious impact on performance and power consumption. In order to eliminate the excessive design margin, we are investigating canary Flip-Flop (FF). Canary FF requires additional circuits consisting of an FF and a comparator. Thus, it suffers large area overhead. In order to reduce the area overhead, this paper proposes a selective replacement method for canary FF and evaluates it. In the case of Renesas’s M32R processor, the area overhead of 2% is achieved.","PeriodicalId":382974,"journal":{"name":"2010 IEEE 16th Pacific Rim International Symposium on Dependable Computing","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128209716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
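The canary-FF idea above can be illustrated with a toy timing model (a Python sketch of the general concept, not the paper's circuit; the function name, delay values and three-way classification are illustrative assumptions):

```python
def canary_check(data_arrival, clock_period, canary_delay):
    """Toy timing model of a main/canary flip-flop pair.  The main FF
    latches data that arrives by the clock edge; the canary FF sees the
    same data through an extra delay element, so it fails first as the
    timing margin shrinks, warning before the main FF actually fails."""
    main_ok = data_arrival <= clock_period
    canary_ok = data_arrival + canary_delay <= clock_period
    if main_ok and not canary_ok:
        return "warning"       # margin nearly exhausted: canary mismatch
    if main_ok:
        return "safe"          # both FFs latched the correct value
    return "timing-error"      # the main FF itself missed the data

# With a 10 ns clock period and a 1 ns canary delay, data arriving at
# 8.0, 9.5 and 10.5 ns exercises all three outcomes.
results = [canary_check(t, 10.0, 1.0) for t in (8.0, 9.5, 10.5)]
```

Selective replacement then means placing this comparator pair only on the flip-flops whose paths are close enough to the margin to matter, which is what keeps the area overhead small.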
Overlay networks are used as proxies that mediate communication between an application and its users without revealing the application's location (IP address). The capability for users to communicate with an application without knowing its location is called location-hiding. Although recent years have provided some literature related to location-hiding, such as Tor or anonymous web publishing, most Internet activities where anonymity is desired require only sender and relationship anonymity; location-hiding therefore needs more academic effort. This paper proposes a novel architecture to achieve location-hiding. We describe the design of dynamic multilayer routing (DMR), in which users can communicate with an application without knowing any information about its location (its IP address). The essential factors of the DMR protocol are multi-layering, reconfiguration and host diversity. The goal of DMR is to overcome or reduce several drawbacks of techniques based on static structures. Through analytical analysis, this paper provides a detailed study of the DMR architecture and shows that DMR is strongly resistant to penetration attacks. Our analysis shows that attackers have a negligible chance (e.g., 10^-8) of penetrating the architecture and disclosing the application's location.
{"title":"Dynamic Multilayer Routing to Achieve Location-Hiding","authors":"H. Beitollahi, Geert Deconinck","doi":"10.1109/PRDC.2010.23","DOIUrl":"https://doi.org/10.1109/PRDC.2010.23","url":null,"abstract":"Overlay networks are used as proxies which mediate communication between an application and its users without revealing the application's location (IP address). The capability that users can communicate with an application without knowing its location is called location-hiding. Although recent years have provided little literature about location-hiding such as Tor or anonymous web publishing, most Internet activities where anonymity is desired require only sender and relationship anonymity, thereby location-hiding needs more academic effort. This paper proposes a novel architecture to achieve location-hiding. We describe the design of a dynamic multilayer routing (DMR) where users can communicate with an application without knowing any information about its location (its IP address). The essential factors of DMR protocol are multi-layering, reconfiguration and host-diversity. The goal of DMR is to overcome or reduce several drawbacks of static structure based techniques. Through analytical analysis, this paper provides a detailed study of DMR architecture and shows that DMR is completely strong against penetration attacks. Our analysis shows that attackers have a negligible chance (e.g., 10^-8) to penetrate the architecture and disclose the application's location.","PeriodicalId":382974,"journal":{"name":"2010 IEEE 16th Pacific Rim International Symposium on Dependable Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130422886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
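The quoted order of magnitude is easy to reproduce with a back-of-the-envelope layered-penetration calculation (an illustrative sketch assuming independent per-layer penetration, which is not necessarily the paper's exact analysis; the per-layer probability and layer count are example values):

```python
def penetration_probability(per_layer_p, layers):
    """Probability that an attacker traverses every layer of a
    multilayer overlay, assuming each layer is penetrated
    independently with the same probability."""
    return per_layer_p ** layers

# If each layer is penetrated with probability 0.01, four layers already
# push the end-to-end success chance down to the 1e-8 regime.
p = penetration_probability(0.01, 4)
```

Reconfiguration strengthens this further: if the layer assignment changes faster than an attacker can work through it, partial progress is invalidated and the effective probability drops below this static bound.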
Prior to field deployment, mission-critical sensor networks should be analyzed for high reliability assurance. Past research has focused only on reliability models for a sensor node or the network in isolation. This paper presents a comprehensive approach for the reliability analysis of a cluster-based sensor network. We use a three-level hierarchical model for sensor networks based on fault trees, with Markov chains at the bottom level to model the reliability of individual sensor nodes. We summarize the developed models, showcase initial numerical results and outline future avenues of research in the following sections.
{"title":"A Hierarchical Model for Reliability Analysis of Sensor Networks","authors":"Dong Seong Kim, R. Ghosh, Kishor S. Trivedi","doi":"10.1109/PRDC.2010.25","DOIUrl":"https://doi.org/10.1109/PRDC.2010.25","url":null,"abstract":"Prior to field deployment, mission critical sensor networks should be analyzed for high reliability assurance. Past research only focused on reliability models for sensor node or network in isolation. This paper presents a comprehensive approach for reliability analysis of a cluster-based sensor network. We use a three-level hierarchical model for sensor networks using fault trees and use Markov chains at the bottom level to model the reliability of individual sensor nodes. We summarize the developed models, showcase the initial numerical results and outline the future avenues of research in the following sections.","PeriodicalId":382974,"journal":{"name":"2010 IEEE 16th Pacific Rim International Symposium on Dependable Computing","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116646431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
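A three-level hierarchy of this kind can be sketched numerically as follows (a minimal Python illustration; the constant failure rate, the k-of-n cluster gate, the AND gate at the top, and all parameter values are assumptions for illustration, not the paper's models):

```python
from math import comb, exp

def node_reliability(failure_rate, t):
    """Bottom level: a node that fails at a constant rate, i.e. the
    exponential solution of a two-state (up/failed) Markov chain."""
    return exp(-failure_rate * t)

def cluster_reliability(r_node, n, k):
    """Middle level: a k-of-n fault-tree gate -- the cluster works if
    at least k of its n nodes work, assuming independent nodes."""
    return sum(comb(n, i) * r_node**i * (1 - r_node)**(n - i)
               for i in range(k, n + 1))

def network_reliability(cluster_rs):
    """Top level: an AND gate -- the network needs every cluster."""
    prod = 1.0
    for r in cluster_rs:
        prod *= r
    return prod

# Hypothetical numbers: 3 clusters of 10 nodes, 6 nodes needed per
# cluster, node failure rate 1e-4 per hour, mission time 1000 hours.
r_node = node_reliability(1e-4, 1000.0)
r_cluster = cluster_reliability(r_node, n=10, k=6)
r_net = network_reliability([r_cluster, r_cluster, r_cluster])
```

Solving the Markov chains once at the bottom and propagating scalar reliabilities upward is what keeps the hierarchical model tractable compared with one flat state-space model of the whole network.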