{"title":"用于自动机学习的学习和测试算法的基准组合","authors":"B. Aichernig, Martin Tappler, Felix Wallner","doi":"10.1145/3605360","DOIUrl":null,"url":null,"abstract":"Automata learning enables model-based analysis of black-box systems by automatically constructing models from system observations, which are often collected via testing. The required testing budget to learn adequate models heavily depends on the applied learning and testing techniques. Test cases executed for learning (1) collect behavioural information and (2) falsify learned hypothesis automata. Falsification test-cases are commonly selected through conformance testing. Active learning algorithms additionally implement test-case selection strategies to gain information, whereas passive algorithms derive models solely from given data. In an active setting, such algorithms require external test-case selection, like repeated conformance testing to extend the available data. There exist various approaches to learning and conformance testing, where interdependencies among them affect performance. We investigate the performance of combinations of six learning algorithms, including a passive algorithm, and seven testing algorithms, by performing experiments using 153 benchmark models. We discuss insights regarding the performance of different configurations for various types of systems. Our findings may provide guidance for future users of automata learning. For example, counterexample processing during learning strongly impacts efficiency, which is further affected by testing approach and system type. Testing with the random Wp-method performs best overall, while mutation-based testing performs well on smaller models.","PeriodicalId":50432,"journal":{"name":"Formal Aspects of Computing","volume":" ","pages":""},"PeriodicalIF":1.4000,"publicationDate":"2023-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Benchmarking Combinations of Learning and Testing Algorithms for Automata Learning\",\"authors\":\"B. Aichernig, Martin Tappler, Felix Wallner\",\"doi\":\"10.1145/3605360\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Automata learning enables model-based analysis of black-box systems by automatically constructing models from system observations, which are often collected via testing. The required testing budget to learn adequate models heavily depends on the applied learning and testing techniques. Test cases executed for learning (1) collect behavioural information and (2) falsify learned hypothesis automata. Falsification test-cases are commonly selected through conformance testing. Active learning algorithms additionally implement test-case selection strategies to gain information, whereas passive algorithms derive models solely from given data. In an active setting, such algorithms require external test-case selection, like repeated conformance testing to extend the available data. There exist various approaches to learning and conformance testing, where interdependencies among them affect performance. We investigate the performance of combinations of six learning algorithms, including a passive algorithm, and seven testing algorithms, by performing experiments using 153 benchmark models. We discuss insights regarding the performance of different configurations for various types of systems. Our findings may provide guidance for future users of automata learning. 
For example, counterexample processing during learning strongly impacts efficiency, which is further affected by testing approach and system type. Testing with the random Wp-method performs best overall, while mutation-based testing performs well on smaller models.\",\"PeriodicalId\":50432,\"journal\":{\"name\":\"Formal Aspects of Computing\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":1.4000,\"publicationDate\":\"2023-06-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Formal Aspects of Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1145/3605360\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Formal Aspects of Computing","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3605360","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Benchmarking Combinations of Learning and Testing Algorithms for Automata Learning
Automata learning enables model-based analysis of black-box systems by automatically constructing models from system observations, which are often collected via testing. The testing budget required to learn adequate models depends heavily on the learning and testing techniques applied. Test cases executed for learning (1) collect behavioural information and (2) falsify learned hypothesis automata. Falsification test cases are commonly selected through conformance testing. Active learning algorithms additionally implement test-case selection strategies to gain information, whereas passive algorithms derive models solely from given data; in an active setting, passive algorithms therefore require external test-case selection, such as repeated conformance testing, to extend the available data. Various approaches to learning and conformance testing exist, and interdependencies among them affect performance. We investigate the performance of combinations of six learning algorithms (including one passive algorithm) and seven testing algorithms in experiments on 153 benchmark models, and we discuss insights into the performance of different configurations for various types of systems. Our findings may provide guidance for future users of automata learning. For example, counterexample processing during learning strongly impacts efficiency, which is further affected by the testing approach and the system type. Testing with the random Wp-method performs best overall, while mutation-based testing performs well on smaller models.
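To make the learn-and-test loop from the abstract concrete, the following minimal sketch shows how conformance testing selects falsification test cases for a hypothesis automaton. It is illustrative only and not the paper's actual tooling: the black-box system is modelled as a Python predicate, the hypothesis as a DFA triple, and `random_word_eq_oracle` stands in for a conformance-testing equivalence oracle; all names are hypothetical.

```python
import random

# Minimal sketch (not the paper's tooling) of the learn-and-test loop:
# the hypothesis is a DFA triple (initial_state, transitions, accepting),
# and the black-box system under learning is any function word -> bool.

def run_dfa(dfa, word):
    """Execute a word on a DFA and report acceptance."""
    init, trans, accepting = dfa
    state = init
    for symbol in word:
        state = trans[(state, symbol)]
    return state in accepting

def random_word_eq_oracle(hypothesis, system, alphabet,
                          num_tests=1000, max_len=10, rng=None):
    """Conformance testing with random words: search for a counterexample,
    i.e. a falsification test case on which hypothesis and system disagree."""
    rng = rng or random.Random(0)
    for _ in range(num_tests):
        word = tuple(rng.choice(alphabet)
                     for _ in range(rng.randint(1, max_len)))
        if run_dfa(hypothesis, word) != system(word):
            return word  # counterexample: the learner refines with it
    return None          # hypothesis passes the test suite

# Toy black box: accepts words with an even number of 'a's.
system = lambda w: w.count('a') % 2 == 0
# Deliberately wrong one-state hypothesis that accepts every word.
hypothesis = (0, {(0, 'a'): 0, (0, 'b'): 0}, {0})
print(random_word_eq_oracle(hypothesis, system, ['a', 'b']))  # e.g. ('a',)
```

The abstract's headline finding concerns the random Wp-method. Continuing the sketch above, the function below generates one test word in the spirit of that method, under assumed inputs (a hypothesis DFA with a total transition function and a precomputed characterization set `char_set`): a transfer sequence to a random hypothesis state, a random infix, and a distinguishing suffix. The real random Wp-method additionally uses geometrically distributed infix lengths and per-state identifier sets, which this simplification omits.

```python
def random_wp_test(hypothesis, alphabet, char_set, rng, max_infix_len=8):
    """One test word in the spirit of the random Wp-method (simplified):
    access sequence to a random state + random infix + distinguishing
    suffix drawn from the characterization set char_set."""
    init, trans, _ = hypothesis
    # BFS access sequences (a state cover) for all reachable states.
    cover, queue = {init: ()}, [init]
    while queue:
        s = queue.pop(0)
        for a in alphabet:
            t = trans[(s, a)]
            if t not in cover:
                cover[t] = cover[s] + (a,)
                queue.append(t)
    prefix = rng.choice(list(cover.values()))
    infix = tuple(rng.choice(alphabet)
                  for _ in range(rng.randint(0, max_infix_len)))
    return prefix + infix + rng.choice(char_set)

# Usage with the toy hypothesis above and an assumed characterization set:
rng = random.Random(1)
print(random_wp_test(hypothesis, ['a', 'b'], [('a',)], rng))
```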
Journal introduction:
This journal aims to publish contributions at the junction of theory and practice. The objective is to disseminate applicable research. Thus, new theoretical contributions are welcome where they are motivated by potential applications; applications of existing formalisms are of interest if they show something novel about the approach or the application.
In particular, the scope of Formal Aspects of Computing includes:
well-founded notations for the description of systems;
verifiable design methods;
elucidation of fundamental computational concepts;
approaches to fault-tolerant design;
theorem-proving support;
state-exploration tools;
formal underpinning of widely used notations and methods;
formal approaches to requirements analysis.