{"title":"Stopping rules for the operational testing of safety-critical software","authors":"B. Littlewood, David Wright","doi":"10.1109/FTCS.1995.466955","DOIUrl":null,"url":null,"abstract":"It has been proposed to conduct a test of a software safety system for a nuclear reactor by subjecting it to demands that are statistically representative of those it meets in operational use. The intention behind the test is to acquire a high confidence (99%) that the probability of failure on demand is smaller than 10/sup -3/. To this end the test takes the form of executing about 5000 demands and requiring that all of these are successful. In practice if is necessary to consider what happens if the software fails the test and is repaired. We argue that the earlier failure information needs to be taken into account in devising the form of the test that the modified software needs to pass-essentially that after such failure the testing requirement might need to be more stringent (i.e. the number of tests that must be executed failure-free should increase). We examine a Bayesian approach to the problem, for this stopping rule based upon a required bound for the probability of failure on demand, as above, and also for a requirement based upon a prediction of future failure behaviour. We show that the first approach seems to be less conservative than the second, and argue that the second should be preferred for practical application.<<ETX>>","PeriodicalId":309075,"journal":{"name":"Twenty-Fifth International Symposium on Fault-Tolerant Computing. Digest of Papers","volume":"89 4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1995-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"31","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Twenty-Fifth International Symposium on Fault-Tolerant Computing. Digest of Papers","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/FTCS.1995.466955","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 31
Abstract
It has been proposed to conduct a test of a software safety system for a nuclear reactor by subjecting it to demands that are statistically representative of those it meets in operational use. The intention behind the test is to acquire high confidence (99%) that the probability of failure on demand is smaller than 10^-3. To this end the test takes the form of executing about 5000 demands and requiring that all of these be successful. In practice it is necessary to consider what happens if the software fails the test and is repaired. We argue that the earlier failure information needs to be taken into account in devising the form of the test that the modified software needs to pass: essentially, after such a failure the testing requirement might need to be more stringent (i.e. the number of tests that must be executed failure-free should increase). We examine a Bayesian approach to the problem, both for a stopping rule based upon a required bound on the probability of failure on demand, as above, and for one based upon a prediction of future failure behaviour. We show that the first approach seems to be less conservative than the second, and argue that the second should be preferred for practical application.
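As a rough illustration of where a figure of "about 5000" failure-free demands comes from, the sketch below computes the smallest number of failure-free demands needed to reach 99% posterior confidence that the probability of failure on demand is below 10^-3, under a simple conjugate Beta prior. This is an assumption-laden illustration, not the paper's formulation: the choice of a Beta(a, b) prior (uniform by default) and the function name `demands_required` are mine, and the paper's stopping rules (including the predictive variant) are more involved.

```python
# Minimal sketch, assuming a Beta(a, b) prior on the probability of failure on
# demand p. After n failure-free demands the posterior is Beta(a, b + n); we
# search for the smallest n giving P(p < p_bound | data) >= confidence.
# Prior parameters and targets below are illustrative, not taken from the paper.
from scipy.stats import beta


def demands_required(p_bound=1e-3, confidence=0.99, a=1.0, b=1.0):
    """Smallest n such that the posterior probability that p < p_bound
    reaches the required confidence after n failure-free demands."""
    n = 0
    while beta.cdf(p_bound, a, b + n) < confidence:
        n += 1
    return n


if __name__ == "__main__":
    n = demands_required()
    print(f"failure-free demands needed with a uniform prior: {n}")
    # ~4600 demands, of the same order as the 'about 5000' figure in the abstract
```

With the uniform prior the answer is roughly 4600, close to the frequentist calculation n >= ln(0.01)/ln(0.999); a more pessimistic prior (larger a or smaller b) pushes the required number of failure-free demands upward, which is the direction of the paper's argument after an observed failure and repair.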