{"title":"Risk-Based Testing of Self-Adaptive Systems Using Run-Time Predictions","authors":"André Reichstaller, Alexander Knapp","doi":"10.1109/SASO.2018.00019","DOIUrl":null,"url":null,"abstract":"Devising test strategies for specific test goals relies on predictions of the run-time behavior of the software system under test (SuT) based on specifications, models, or the code. For a system following a single strategy as run-time behavior, the test strategy can be fixed at design time. For an adaptive system, which may choose from several strategies due to environment changes, a combination of test strategies has to be found, which still can be achieved at design time provided that all system strategies and the switching policy are predictable. Self-adaptive systems, also adapting their system strategies and strategy switches according to the environmental dynamics, render such design-time predictions futile, but there also the test strategies have to be adapted at run time. We characterize the necessary interplay between system strategy adaptation of the SuT and test strategy adaptation of the tester as a Stochastic Game. We argue that the tester's part, formalized by means of a Markov Decision Process, can be automatically solved by the use of Reinforcement Learning methods where we discuss both model-based and model-free variants. Finally, we propose a particular framework inspired by Direct Future Prediction which, given a simulation of the SuT and its environment, autonomously finds good test strategies w.r.t. imposed quanti?able goals. While these goals, in general, can be initialized arbitrarily, our evaluation concentrates on risk-based goals rewarding the detection of hazardous failures.","PeriodicalId":405522,"journal":{"name":"2018 IEEE 12th International Conference on Self-Adaptive and Self-Organizing Systems (SASO)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE 12th International Conference on Self-Adaptive and Self-Organizing Systems (SASO)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SASO.2018.00019","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 11
Abstract
Devising test strategies for specific test goals relies on predictions of the run-time behavior of the software system under test (SuT), based on specifications, models, or the code. For a system whose run-time behavior follows a single strategy, the test strategy can be fixed at design time. For an adaptive system, which may switch between several strategies in response to environment changes, a combination of test strategies has to be found; this can still be done at design time provided that all system strategies and the switching policy are predictable. Self-adaptive systems, which also adapt their system strategies and strategy switches to the environmental dynamics, render such design-time predictions futile; there, the test strategies themselves have to be adapted at run time. We characterize the necessary interplay between the system strategy adaptation of the SuT and the test strategy adaptation of the tester as a stochastic game. We argue that the tester's part, formalized as a Markov decision process, can be solved automatically with reinforcement learning methods, of which we discuss both model-based and model-free variants. Finally, we propose a framework inspired by Direct Future Prediction which, given a simulation of the SuT and its environment, autonomously finds good test strategies with respect to imposed quantifiable goals. While these goals can in general be initialized arbitrarily, our evaluation concentrates on risk-based goals rewarding the detection of hazardous failures.
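To make the model-free variant concrete, the following is a minimal sketch of a tester learning a test strategy on its MDP via tabular Q-learning, with a risk-based reward paid for driving a simulated SuT into a hazardous failure state. The toy state space, actions, transition probabilities, and reward values are all illustrative assumptions, not the paper's setup; the paper's own framework builds on Direct Future Prediction rather than this simple scheme.

# Hypothetical sketch: a model-free RL tester (tabular Q-learning) that
# learns a test strategy against a simulated SuT. States, actions,
# transitions, and rewards below are illustrative assumptions.
import random
from collections import defaultdict

STATES = ["nominal", "degraded", "hazard"]   # abstract SuT modes
ACTIONS = ["stress", "probe"]                # tester's test inputs

def step(state, action):
    """Assumed stochastic transition model of the simulated SuT."""
    if state == "nominal":
        nxt = "degraded" if action == "stress" and random.random() < 0.4 else "nominal"
    elif state == "degraded":
        nxt = "hazard" if action == "stress" and random.random() < 0.2 else "nominal"
    else:
        nxt = "nominal"  # restart after a detected hazard
    # Risk-based goal: reward the detection of a hazardous failure,
    # with a small cost per test step otherwise.
    reward = 10.0 if nxt == "hazard" else -0.1
    return nxt, reward

Q = defaultdict(float)          # Q-values, indexed by (state, action)
alpha, gamma, eps = 0.1, 0.9, 0.1

for episode in range(2000):
    s = "nominal"
    for _ in range(20):
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2, r = step(s, a)
        # Q-learning update: model-free, learned from simulated interaction only
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# Greedy test strategy extracted from the learned Q-values
policy = {s: max(ACTIONS, key=lambda b: Q[(s, b)]) for s in STATES}
print(policy)

Because the reward is concentrated on reaching the hazard state, the extracted greedy policy steers the simulated SuT toward hazardous failures, which is the intent of the risk-based goals evaluated in the paper.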