Pub Date: 1995-06-27 | DOI: 10.1109/FTCS.1995.466947
Published in: Twenty-Fifth International Symposium on Fault-Tolerant Computing (FTCS-25), Digest of Papers
Title: On the development of fault-tolerant on-board control software and its evaluation by fault injection
Authors: T. Vardanega, P. David, J.-F. Chane, Wolfgang R. Mader, R. Messaros, J. Arlat
As commercial drivers promote the integration of functions of different criticality into a limited set of processing elements, software plays an increasingly important role on board today's satellites. This trend calls into question the adequacy of the traditional development process and demands a design and validation approach capable of achieving the required dependability without inflating development costs. This paper reports on the most innovative features of an integrated project aimed at designing a software-intensive fault tolerance approach suitable for embedded flight control systems, and at assessing its efficiency by means of a non-intrusive software-implemented fault injection prototype tool.
Pub Date: 1995-06-27 | DOI: 10.1109/FTCS.1995.466974
Title: Feasibility and effectiveness of the algorithm for overhead reduction in analog checkers
Authors: Yingquan Zhou, M. Wong, Y. Min
Self-checking in analog circuits is more difficult than in digital circuits. The technique proposed by A. Chatterjee (1993) addresses concurrent error detection and correction in linear analog circuits, greatly improving the reliability of the original circuit. However, the technique's hardware overhead is a significant issue that has not previously been addressed. This paper proposes an algorithm for reducing the hardware overhead of the analog checker, and also presents a series of theoretical results, including the concept of all-non-zero solutions and several conditions for the existence of such solutions. These results, which form the basis of the algorithm, are mathematically new and can be used to verify the feasibility and effectiveness of the algorithm. Without changing the original circuit, the proposed algorithm reduces not only the number of passive elements but also the number of analog operators, so that the error detection circuitry in the checker has optimal hardware overhead.
Pub Date: 1995-06-27 | DOI: 10.1109/FTCS.1995.466991
Title: OBDD-based optimization of input probabilities for weighted random pattern generation
Authors: Rolf Krieger, B. Becker, Can Ökmen
Numerous methods have been devised to compute and to optimize fault detection probabilities for combinational circuits. The methods range from topological to algebraic. In combination with OBDDs, algebraic methods have received more and more attention. Recently, an OBDD-based method has been presented which allows the computation of exact fault detection probabilities for many combinational circuits. We combine this method with strategies making use of necessary assignments (computed by an implication procedure). The experimental results show that the resulting method decreases the time and space requirements for computing fault detection probabilities of the hard faults by a factor of 4 on average compared to the original algorithm. By this means it is now possible to efficiently use the OBDD-based approach also for the optimization of input probabilities for weighted random pattern testing. Since, in contrast to other optimization procedures, this method is based on the exact fault detection probabilities, we succeed in determining weight sets of superior quality, i.e. the test application time (number of random patterns) is considerably reduced compared to previous approaches.
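The core quantity in the abstract above — the exact probability that a weighted random pattern detects a given fault — can be illustrated on a toy circuit. This sketch is not the paper's OBDD-based method (which scales to large circuits by sharing subfunctions); it simply enumerates all input patterns of a small example circuit, with a hypothetical stuck-at fault, to show what "exact fault detection probability under input weights" means:

```python
from itertools import product

def good(a, b, c):
    # small example circuit: y = (a AND b) OR c
    return (a & b) | c

def faulty(a, b, c):
    # same circuit with a stuck-at-0 fault on the AND gate's output
    return 0 | c

def detection_probability(p):
    """Exact probability that a random pattern detects the fault,
    where input i is 1 independently with probability p[i]."""
    total = 0.0
    for a, b, c in product([0, 1], repeat=3):
        if good(a, b, c) != faulty(a, b, c):
            w = 1.0
            for bit, pi in zip((a, b, c), p):
                w *= pi if bit else (1.0 - pi)
            total += w
    return total

# Uniform inputs: the fault is exposed only when a=b=1, c=0 -> 1/8
print(detection_probability([0.5, 0.5, 0.5]))  # 0.125
# Biasing the input weights raises the detection probability,
# which is exactly what weighted random pattern testing exploits
print(detection_probability([0.9, 0.9, 0.1]))  # 0.729
```

Optimizing the weight vector `p` to maximize the minimum detection probability over all hard faults is the optimization problem the paper's OBDD machinery makes tractable.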
Pub Date: 1995-06-27 | DOI: 10.1109/FTCS.1995.466980
Title: Synthesizing finite state machines for minimum length synchronizing sequence using partial scan
Authors: N. Jiang, Richard M. Chou, K. Saluja
The goal is to synthesize an FSM that minimizes the number of scanned flip-flops while requiring a minimum number of system clocks to reach the synchronized state. An algorithm, based on the reverse-order-search technique, is presented for selecting state variables for scanning while minimizing the length of the synchronizing sequence. Extra transitions may be required to avoid possible lock-in conditions if the initial state is an invalid state, for machines whose number of states is not a power of 2. Experimental results show that the proposed method guarantees synchronizability and testability through proper state assignment with reasonable hardware overhead for the benchmark circuits.
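The notion of a synchronizing sequence used above — an input word that drives the machine into one known state regardless of its initial state — can be found by breadth-first search over subsets of states. This toy sketch illustrates the concept only; the paper's contribution is choosing which flip-flops to scan (and which transitions to add) so that the remaining machine has a short such sequence:

```python
from collections import deque

def synchronizing_sequence(delta, states, inputs):
    """Shortest input word driving every state to a single state,
    found by BFS over state subsets; delta[s][x] is the next state.
    Returns None if the machine is not synchronizable."""
    start = frozenset(states)
    seen = {start: []}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if len(cur) == 1:
            return seen[cur]
        for x in inputs:
            nxt = frozenset(delta[s][x] for s in cur)
            if nxt not in seen:
                seen[nxt] = seen[cur] + [x]
                queue.append(nxt)
    return None

# 3-state example: input 'a' funnels every state toward state 0
delta = {0: {'a': 0, 'b': 1},
         1: {'a': 0, 'b': 2},
         2: {'a': 1, 'b': 2}}
print(synchronizing_sequence(delta, [0, 1, 2], ['a', 'b']))  # ['a', 'a']
```

The subset-BFS runs in up to 2^n states, which is why synthesis-time techniques like the paper's scan selection matter for real machines.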
Pub Date: 1995-06-27 | DOI: 10.1109/FTCS.1995.466973
Title: Dependability assessment using binary decision diagrams (BDDs)
Authors: S. A. Doyle, J. Dugan
Presents the DREDD (Dependability and Risk Evaluation using Decision Diagrams) algorithm, which incorporates coverage modeling into a BDD solution of a combinatorial model. BDDs, which do not use cutsets to generate system unreliability, can be used to find exact solutions for extremely large systems. The DREDD algorithm takes advantage of the efficiency of the BDD solution approach and increases the accuracy of a combinatorial model by including consideration of imperfect coverage. The usefulness of combinatorial models, long appreciated for their logical structure and concise representational form, is extended to include many fault-tolerant systems previously thought to require more complicated analysis techniques in order to include coverage modeling. In this paper, the DREDD approach is presented and applied to the analysis of two sample systems: the F18 flight control system and a fault-tolerant multistage interconnection network.
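The key idea the abstract leans on — computing exact system unreliability by decision-diagram evaluation rather than by enumerating cutsets — reduces to Shannon expansion over component-failure variables. This is a minimal sketch of that evaluation (a real BDD additionally shares and caches identical subfunctions, and DREDD further folds in imperfect coverage, neither of which is shown here):

```python
def unreliability(fail_fn, probs, assignment=()):
    """Exact probability that the system fails, by Shannon expansion
    over component-failure indicator variables -- the same recursion
    a BDD traversal performs, with no cutset enumeration."""
    i = len(assignment)
    if i == len(probs):
        return 1.0 if fail_fn(assignment) else 0.0
    p = probs[i]  # failure probability of component i
    return (p * unreliability(fail_fn, probs, assignment + (1,)) +
            (1 - p) * unreliability(fail_fn, probs, assignment + (0,)))

# TMR example: the system fails if 2 or more of 3 components fail
tmr_fails = lambda f: sum(f) >= 2
print(unreliability(tmr_fails, [0.1, 0.1, 0.1]))  # 0.028

# Series system: fails if any component fails
series_fails = lambda f: any(f)
print(unreliability(series_fails, [0.1, 0.2]))  # 0.28
```

The TMR answer matches the closed form 3q²(1−q) + q³ with q = 0.1, confirming the expansion computes the exact value rather than a cutset bound.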
Pub Date: 1995-06-27 | DOI: 10.1109/FTCS.1995.466958
Title: VAX/VMS event monitoring and analysis
Authors: Michael F. Buckley, D. Siewiorek
Event logs can be used effectively to improve computer system availability. Uses include retrospective and predictive diagnosis, fault management, failure rate estimation, and trend analysis. Unfortunately, much of the research to date has been hampered by the lack of suitable event data, and occasionally by the incorrect interpretation of the available data. This research uses one of the largest sets of data, and the most intensive investigation of the monitoring process conducted to date, to examine event monitoring and analysis: 2.35 million events from 193 VAX/VMS systems covering 335 machine-years were used. Examples are presented which show that monitoring deficiencies complicate the analyses, consume additional time, and make incorrect conclusions more likely. For example, incorrect handling of bogus timestamps changes the mean time between groups of events by an order of magnitude. An analysis procedure to identify defects is provided, along with design rules to create better quality logs.
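The "mean time between groups of events" statistic mentioned above depends on first coalescing bursts of related log entries into incidents. A common heuristic (illustrative here, not necessarily the authors' exact procedure) is to merge events closer together than a fixed window; a single bogus timestamp then visibly distorts the inter-group gaps, which is the failure mode the abstract warns about:

```python
def group_events(timestamps, window=300.0):
    """Coalesce a stream of event times (seconds) into groups:
    events within `window` of the previous one are treated as
    the same incident. The window value is illustrative."""
    groups = []
    for t in sorted(timestamps):
        if groups and t - groups[-1][-1] <= window:
            groups[-1].append(t)
        else:
            groups.append([t])
    return groups

def mean_time_between_groups(groups):
    starts = [g[0] for g in groups]
    gaps = [b - a for a, b in zip(starts, starts[1:])]
    return sum(gaps) / len(gaps)

events = [0, 10, 4000, 4005, 9000]
groups = group_events(events)
print(len(groups))                       # 3 incidents
print(mean_time_between_groups(groups))  # 4500.0 seconds

# One bogus timestamp (e.g. an unset clock reading far in the past)
# inflates a single gap and drags the mean up by orders of magnitude
bogus = group_events(events + [-1e7])
print(mean_time_between_groups(bogus))
```

Validating timestamps before grouping, as the paper's analysis procedure does, is what keeps this statistic trustworthy.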
Pub Date: 1995-06-27 | DOI: 10.1109/FTCS.1995.466961
Title: Software rejuvenation: analysis, module and applications
Authors: Yennun Huang, C. Kintala, N. Kolettis, N. D. Fulton
Software rejuvenation is the concept of gracefully terminating an application and immediately restarting it at a clean internal state. In a client-server type of application, where the server is intended to run perpetually to provide a service to its clients, rejuvenating the server process periodically during its most idle time increases the availability of that service. In a long-running computation-intensive application, rejuvenating the application periodically and restarting it at a previous checkpoint increases the likelihood of successfully completing the application execution. We present a model for analyzing software rejuvenation in such continuously-running applications and express downtime and costs due to downtime during rejuvenation in terms of the parameters in that model. Threshold conditions for rejuvenation to be beneficial are also derived. We implemented a reusable module to perform software rejuvenation. That module can be embedded in any existing application on a UNIX platform with minimal effort. Experiences with software rejuvenation in a billing data collection subsystem of a telecommunications operations system, and in other continuously-running systems and scientific applications at AT&T, are described.
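The trade-off the abstract analyzes — planned rejuvenation downtime versus the costlier downtime of an unplanned failure — can be sketched numerically. This is a simplified illustration, not the paper's model: it assumes Weibull-distributed failures with increasing hazard (shape k > 1, a stand-in for software aging; with memoryless exponential failures rejuvenation would never help), and all parameter values are hypothetical:

```python
import math

def unavailability(T, theta=1000.0, k=2.0, t_rej=1.0, t_fail=20.0,
                   steps=2000):
    """Steady-state unavailability when rejuvenating every T time
    units. A cycle ends at failure (downtime t_fail) or at the
    planned rejuvenation (downtime t_rej), whichever comes first."""
    S = lambda t: math.exp(-(t / theta) ** k)  # survival function
    # expected uptime per cycle = integral of S over [0, T] (trapezoid)
    h = T / steps
    up = h * (0.5 * (S(0) + S(T)) + sum(S(i * h) for i in range(1, steps)))
    p_fail = 1 - S(T)
    down = p_fail * t_fail + (1 - p_fail) * t_rej
    return down / (up + down)

# Rejuvenating too often wastes restart time; too rarely risks the
# costlier unplanned failure -- an interior optimum appears:
for T in (100, 400, 2000, 10000):
    print(T, round(unavailability(T), 5))
```

The threshold conditions derived in the paper characterize exactly when such an interior optimum exists, i.e. when periodic rejuvenation is beneficial at all.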
Pub Date: 1995-06-27 | DOI: 10.1109/FTCS.1995.466995
Title: Optimal resiliency against mobile faults
Authors: H. Buhrman, J. Garay, J. Hoepman
We consider a model where malicious agents can corrupt hosts and move around in a network of processors. We consider a family of mobile fault models MF(t/(n−1), ρ). In MF(t/(n−1), ρ) there are a total of n processors, the maximum number of mobile faults is t, and their roaming pace is ρ (for example, ρ=3 means that it takes an agent at least 3 rounds to "hop" to the next host). We study in these models the classical testbed problem for fault-tolerant distributed computing: Byzantine agreement. It has been shown that if ρ=1, then agreement cannot be reached in the presence of even one fault, unless one of the processors remains uncorrupted for a certain amount of time. Subject to this proviso, we present a protocol for MF(1/3, 1), which is optimal. The running time of the protocol is O(n) rounds, also optimal for these models.
Pub Date: 1995-06-27 | DOI: 10.1109/FTCS.1995.466983
Title: Availability and performance evaluation of database systems under periodic checkpoints
Authors: Reinaldo Vallejos Campos, E. D. S. E. Silva
Checkpointing, rollback, and recovery is a common technique to ensure data integrity, increase availability, and improve the performance of transaction-oriented database systems. Parameters such as the checkpointing frequency and the system load have an impact on the overall performance, so it is important to develop accurate models of the system under study. We derive expressions for the system availability and for the expected response time of the transactions from a model that, unlike previous analytical work, takes into account the dependency among the recovery times between two checkpoints. Furthermore, our model can incorporate details concerning the contention for the system resources.
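To make concrete why checkpointing frequency drives the performance metrics above, here is the classical first-order trade-off (Young's approximation, a standard textbook result and deliberately *not* the paper's dependency-aware model): checkpointing every T units costs C of overhead per interval, while a failure loses on average half an interval of work. All numbers below are illustrative:

```python
import math

def optimal_checkpoint_interval(checkpoint_cost, mtbf):
    """Young's approximation for the interval minimizing expected
    lost time: T* = sqrt(2 * C * MTBF)."""
    return math.sqrt(2 * checkpoint_cost * mtbf)

def expected_overhead_rate(T, C, mtbf):
    """Fraction of time lost: checkpoint overhead C per interval T,
    plus ~T/2 of rework at each failure (one per mtbf on average)."""
    return C / T + T / (2 * mtbf)

C, mtbf = 5.0, 1000.0
T_star = optimal_checkpoint_interval(C, mtbf)
print(round(T_star, 1))  # 100.0
print(round(expected_overhead_rate(T_star, C, mtbf), 3))  # 0.1
```

The paper refines this picture by modeling the dependency among successive recovery times and resource contention, which this independent-failure sketch ignores.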
Pub Date: 1995-06-27 | DOI: 10.1109/FTCS.1995.466993
Title: Systematic validation of pipeline interlock for superscalar microarchitectures
Authors: T. Diep, John Paul Shen
The paper presents a new approach to microarchitecture validation that adopts a paradigm analogous to that of automatic test pattern generation (ATPG) for digital logic testing. In this approach, the microarchitecture is rigorously specified in a set of machine description files. Based on these files, all possible pipeline hazards can be systematically identified. Using this hazard list (analogous to a fault list for ATPG), specific sequences of instructions (analogous to test patterns) are automatically generated and constitute the test program. The execution of this test program validates the correct detection and resolution of all interinstruction dependences by the microarchitecture's pipeline interlock mechanism. Software tools have been developed for the automatic construction of the hazard list and the automatic generation of the test sequences. These explicitly generated sequences can achieve higher coverage in fewer cycles than ad hoc approaches, and 100% coverage of the hazard list can be ensured. These tools have been applied to four contemporary superscalar processors, namely the Alpha AXP 21064 and 21164 microprocessors, and the PowerPC 601 and 620 microprocessors.