"Generation of Mixed Broadside and Skewed-Load Diagnostic Test Sets for Transition Faults." I. Pomeranz. PRDC 2011, DOI: 10.1109/PRDC.2011.15.
This paper describes a diagnostic test generation procedure for transition faults that produces mixed test sets consisting of broadside and skewed-load tests. A mix of broadside and skewed-load tests yields improved diagnostic resolution compared with a single test type. The procedure starts from a mixed test set generated for fault detection. It uses two procedures to obtain new tests that are useful for diagnosis starting from existing tests. Both procedures allow the type of a test to be modified (from broadside to skewed-load and from skewed-load to broadside). The first procedure is fault independent. The second procedure targets specific fault pairs. Experimental results show that diagnostic test generation changes the mix of broadside and skewed-load tests in the test set compared with a fault detection test set.

"Task Mapping and Partition Allocation for Mixed-Criticality Real-Time Systems." D. Tamas-Selicean, P. Pop. PRDC 2011, DOI: 10.1109/PRDC.2011.42.
In this paper we address the mapping of mixed-criticality hard real-time applications onto distributed embedded architectures. We assume that the architecture provides both spatial and temporal partitioning, thus enforcing sufficient separation between applications. With temporal partitioning, each application runs in a separate partition, and each partition is allocated several time slots on the processors where the application is mapped. The sequence of time slots for all the applications on a processor is grouped within a Major Frame, which is repeated periodically. We assume that the applications are scheduled using static cyclic scheduling. We are interested in determining the task mapping to processors, and the sequence and size of the time slots within the Major Frame on each processor, such that the applications are schedulable. We propose a Tabu Search-based approach to solve this optimization problem. The proposed algorithm has been evaluated using several synthetic and real-life benchmarks.

"Correcting DFT Codes with Modified Berlekamp-Massey Algorithm and Syndrome Extension." G. Redinbo. PRDC 2011, DOI: 10.1109/PRDC.2011.39.
Real-number block codes derived from the discrete Fourier transform (DFT) are corrected by coupling a modified Berlekamp-Massey algorithm with a syndrome extension process. Enhanced extension recursions based on Kalman syndrome extensions are examined.

"A Framework for Systematic Testing of Multi-threaded Applications." Mihai Florian. PRDC 2011, DOI: 10.1109/PRDC.2011.48.
We present a framework that exhaustively explores the scheduling nondeterminism of multi-threaded applications and checks for concurrency errors. We use a flexible design that allows us to integrate multiple algorithms aimed at reducing the number of interleavings that have to be tested.

"Exploiting Total Order Multicast in Weakly Consistent Transactional Caches." P. Ruivo, Maria Couceiro, P. Romano, L. Rodrigues. PRDC 2011, DOI: 10.1109/PRDC.2011.21.
Distributed in-memory caches are increasingly used to improve the performance of applications that require frequent access to large amounts of data. In order to maximize performance and scalability, these platforms typically rely on weakly consistent partial replication mechanisms. These schemes partition the data across the nodes and ensure a predefined (and typically very small) replication degree, thus maximizing the global memory capacity of the platform and ensuring that the cost of maintaining replica consistency remains constant as the platform grows. Moreover, even though several of these platforms provide transactional support, they typically sacrifice consistency, offering guarantees that are weaker than classic 1-copy serializability but that allow for more efficient implementations. This paper proposes and evaluates two partial replication techniques that provide different (weak) consistency guarantees but have in common the reliance on total order multicast primitives to serialize transactions without incurring distributed deadlocks, a major source of inefficiency in classic two-phase commit (2PC) based replication mechanisms. We integrate the proposed replication schemes into Infinispan, a prominent open-source distributed in-memory cache that represents the reference clustering solution for the well-known JBoss AS platform. Our performance evaluation shows speed-ups of up to 40x for the proposed algorithms with respect to the native Infinispan replication mechanism, which relies on classic 2PC-based replication.

"Test Generation and Computational Complexity." J. Sziray. PRDC 2011, DOI: 10.1109/PRDC.2011.40.
The paper is concerned with analyzing and comparing two exact algorithms from the viewpoint of computational complexity: composite justification and the D-algorithm. Both serve for calculating fault-detection tests of digital circuits. It is pointed out that composite justification requires significantly fewer computational steps than the D-algorithm. From this fact it has been conjectured that possibly no other algorithm in this field requires fewer computational steps. If this claim holds, it follows directly that the test-generation problem is of exponential time, and so are all the other NP-complete problems in computation theory.

"Revisiting Fault-Injection Experiment-Platform Architectures." Horst Schirmeier, Martin Hoffmann, R. Kapitza, D. Lohmann, O. Spinczyk. PRDC 2011, DOI: 10.1109/PRDC.2011.46.
Many years of research on dependable, fault-tolerant software systems have yielded a myriad of tool implementations for vulnerability analysis and experimental validation of resilience measures. Trace recording and fault injection are among the core functionalities these tools provide for hardware debuggers or system simulators, partially including some means to automate larger experiment campaigns. We argue that current fault-injection tools are too highly specialized for specific hardware devices or simulators, and that their poorly modularized implementations impede evolution and maintenance. In this article, we present a novel design approach for a fault-injection infrastructure that allows experimenting researchers to switch simulator or hardware back ends with little effort, fosters experiment code reuse, and retains a high level of maintainability.

"Trend Analyses of Accidents and Dependability Improvement in Financial Information Systems." Koichi Bando, Kenji Tanaka. PRDC 2011, DOI: 10.1109/PRDC.2011.35.
In this paper, we analyze the trends of significant accidents in financial information systems from the user viewpoint. Based on the analyses, we show the priority issues for dependability improvement. First, as a prerequisite for this study, we define "accidents," "types of accidents," "severity of accidents," and "faults." Second, we collected as many accident cases of financial information systems as possible over a 12-year period (1997-2008) from the information contained in four major national newspapers in Japan, news releases on websites, magazines, and books. Third, we analyzed the accident information according to type, severity, faults, and combinations of these factors. As a result, we show the general trends of significant accidents. Finally, based on the results of the analyses, we show the priority issues for dependability improvement.

"Area-Per-Yield and Defect Level of Cascaded TMR for Pipelined Processors." M. Arai, K. Iwasaki. PRDC 2011, DOI: 10.1109/PRDC.2011.38.
In this paper we evaluate the effectiveness of cascaded triple modular redundancy (TMR), applied to every stage of a pipelined processor, in terms of area-per-yield and defect level. Considering a cascade of nine possible TMR stage architectures, we theoretically derive the area-per-yield on the basis of the given parameters of defect density and number of stages. Also, assuming that a production test is applied independently to each module and voter in every stage and that the pass/fail decision for a chip is made on the basis of the test results, we theoretically derive the defect level for a given fault coverage. Numerical examples show that the application of cascaded TMR improves the area-per-yield and the defect level when the manufacturing yield is low. In addition, in some cases there exists a number of stages that minimizes the area-per-yield or the defect level.

"Parametric Bootstrapping for Assessing Software Reliability Measures." Toshio Kaneishi, T. Dohi. PRDC 2011, DOI: 10.1109/PRDC.2011.10.
Bootstrapping is a statistical technique that replicates the underlying data by resampling, enabling us to investigate statistical properties of interest. It is useful for estimating standard errors and confidence intervals of complex estimators of probability distribution parameters from a small amount of data. In software reliability engineering, it is common to estimate software reliability measures from fault data (fault-detection time data) and to focus only on point estimation. However, it is generally difficult to carry out interval estimation or to obtain the probability distributions of the associated estimators without applying an approximate method. In this paper, we assume that the software fault-detection process during system testing is described by a non-homogeneous Poisson process, and we develop a comprehensive technique to study the probability distributions of significant software reliability measures. Based on maximum likelihood estimation, we assess the probability distributions of estimators such as the initial number of software faults remaining in the software, the software intensity function, the mean value function, and the software reliability function, via a parametric bootstrapping method.
