A. Bertolino, Guglielmo De Angelis, Breno Miranda, P. Tonella
Modern software systems accommodate complex configurations and execution conditions that depend on the environment where the software is run. While in-house testing can exercise only a fraction of such execution contexts, in vivo testing can take advantage of the execution state observed in the field to conduct further testing activities. In this paper, we present the Groucho approach to in vivo testing. Groucho can suspend the execution, run some in vivo tests, roll back the side effects introduced by such tests, and eventually resume normal execution. The approach can be transparently applied to the original application, even when it is only available as compiled code, and it is fully automated. Our empirical studies of the performance overhead introduced by Groucho under various configurations show that the overhead can be kept to a negligible level by activating in vivo testing with low probability. Our empirical studies of the approach's effectiveness confirm previous findings that some faults are unlikely to be exposed in house yet easy to expose in the field. Moreover, we include the first study to quantify the coverage increase gained when in vivo testing is added to complement in-house testing.
{"title":"In vivo test and rollback of Java applications as they are","authors":"A. Bertolino, Guglielmo De Angelis, Breno Miranda, P. Tonella","doi":"10.1002/stvr.1857","DOIUrl":"https://doi.org/10.1002/stvr.1857","url":null,"abstract":"Modern software systems accommodate complex configurations and execution conditions that depend on the environment where the software is run. While in house testing can exercise only a fraction of such execution contexts, in vivo testing can take advantage of the execution state observed in the field to conduct further testing activities. In this paper, we present the Groucho approach to in vivo testing. Groucho can suspend the execution, run some in vivo tests, rollback the side effects introduced by such tests, and eventually resume normal execution. The approach can be transparently applied to the original application, even if only available as compiled code, and it is fully automated. Our empirical studies of the performance overhead introduced by Groucho under various configurations showed that this may be kept to a negligible level by activating in vivo testing with low probability. Our empirical studies about the effectiveness of the approach confirm previous findings on the existence of faults that are unlikely exposed in house and become easy to expose in the field. Moreover, we include the first study to quantify the coverage increase gained when in vivo testing is added to complement in house testing.","PeriodicalId":49506,"journal":{"name":"Software Testing Verification & Reliability","volume":"40 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2023-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77991103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hanbo Cai, Pengcheng Zhang, Hai Dong, Lars Grunske, Shunhui Ji, Tianhao Yuan
Test case generation techniques based on adversarial examples are commonly used to enhance the reliability and robustness of image‐based and text‐based machine learning applications. However, efficient techniques for speech recognition systems are still absent. This paper proposes a family of methods that generate targeted adversarial examples for speech recognition systems. All are based on the firefly algorithm (F) and are enhanced with Gaussian mutations and/or gradient estimation (F‐GM, F‐GE, F‐GMGE) to fit the specific problem of targeted adversarial test case generation. We conduct an experimental evaluation on three different types of speech datasets: Google Command, Common Voice and LibriSpeech. In addition, we recruit volunteers to evaluate the performance of the adversarial examples. The experimental results show that, compared with existing approaches, these approaches effectively improve the success rate of targeted adversarial example generation. The code is publicly available at https://github.com/HanboCai/FGMGE.
{"title":"Adversarial example‐based test case generation for black‐box speech recognition systems","authors":"Hanbo Cai, Pengcheng Zhang, Hai Dong, Lars Grunske, Shunhui Ji, Tianhao Yuan","doi":"10.1002/stvr.1848","DOIUrl":"https://doi.org/10.1002/stvr.1848","url":null,"abstract":"Test case generation techniques based on adversarial examples are commonly used to enhance the reliability and robustness of image‐based and text‐based machine learning applications. However, efficient techniques for speech recognition systems are still absent. This paper proposes a family of methods that generate targeted adversarial examples for speech recognition systems. All are based on the firefly algorithm (F), and are enhanced with gauss mutations and / or gradient estimation (F‐GM, F‐GE, F‐GMGE) to fit the specific problem of targeted adversarial test case generation. We conduct an experimental evaluation on three different types of speech datasets, including Google Command, Common Voice and LibriSpeech. In addition, we recruit volunteers to evaluate the performance of the adversarial examples. The experimental results show that, compared with existing approaches, these approaches can effectively improve the success rate of the targeted adversarial example generation. The code is publicly available at https://github.com/HanboCai/FGMGE.","PeriodicalId":49506,"journal":{"name":"Software Testing Verification & Reliability","volume":"36 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2023-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80310695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The trend towards intelligent control processes has brought Internet of Things (IoT) and cloud computing technologies to factories. In a factory, IoT devices sense data and send them to a cloud for further analysis. Consequently, the quantity of such valuable data flowing in an industrial cyber‐physical system has gradually increased. Tailoring a risk assessment system for the Industrial IoT (IIoT) is essential, particularly for a cloud platform that handles the IIoT data flow. In this study, we leverage analytic hierarchy processes (AHPs) and propose the Hierarchical Risk Assessment Model (HiRAM) for an IIoT cloud platform. The proposed model allows the platform to self‐evaluate its security status. Furthermore, a modular and responsive Risk Assessment System based on HiRAM, called HiRAM‐RAS, is realized and evaluated on a real‐world IIoT cloud platform. We deploy HiRAM‐RAS to a sample application and describe the practical deployment procedures. We then evaluate the practicality of HiRAM‐RAS by injecting different degrees of errors and launching distributed denial‐of‐service (DDoS) attacks. The results demonstrate the changes in integrity and availability scores evaluated by HiRAM.
{"title":"HiRAM: A hierarchical risk assessment model and its implementation for an industrial Internet of Things in the cloud","authors":"Wen-Lin Sun, Ying-Han Tang, Yu-Lun Huang","doi":"10.1002/stvr.1847","DOIUrl":"https://doi.org/10.1002/stvr.1847","url":null,"abstract":"The trend towards intelligent control processes has introduced the Internet of Things (IoT) and cloud computing technologies to factories. IoT devices can sense data and send it to a cloud for further analysis in a factory. Consequently, the quantity of such valuable data flowing in an industrial cyber‐physical system has gradually increased. Tailoring a risk assessment system for Industrial IoT (IIoT) is essential, particularly for a cloud platform that handles the IIoT data flow. In this study, we leverage analytic hierarchy processes (AHPs) and propose Hierarchical Risk Assessment Model (HiRAM) for an IIoT cloud platform. The proposed model allows the platform to self‐evaluate its security status. Furthermore, a modular and responsive Risk Assessment System based on HiRAM, called HiRAM‐RAS, is realized and evaluated in a real‐world IIoT cloud platform. We deploy HiRAM‐RAS to a sample application and introduce the practical deployment procedures. We then estimate the practicality of the HiRAM‐RAS by injecting different degrees of errors and launching Distributed denial‐of‐service (DDoS) attacks. The results demonstrate the changes in integrity and availability scores evaluated by HiRAM.","PeriodicalId":49506,"journal":{"name":"Software Testing Verification & Reliability","volume":"170 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2023-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80671750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combinatorial testing and machine learning for automated test generation","authors":"Yves Le Traon, Tao Xie","doi":"10.1002/stvr.1846","DOIUrl":"https://doi.org/10.1002/stvr.1846","url":null,"abstract":"In","PeriodicalId":49506,"journal":{"name":"Software Testing Verification & Reliability","volume":"39 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2023-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85303537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Test infrastructure and environment","authors":"Yves Le Traon, Tao Xie","doi":"10.1002/stvr.1844","DOIUrl":"https://doi.org/10.1002/stvr.1844","url":null,"abstract":"of","PeriodicalId":49506,"journal":{"name":"Software Testing Verification & Reliability","volume":"16 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2023-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74677594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Chaim, Kesina Baral, J. Offutt, Mario Concilio Neto, Roberto P. A. Araujo
Data flow testing creates test requirements as definition‐use (DU) associations, where a definition is a program location that assigns a value to a variable and a use is a location where that value is accessed. Data flow testing is expensive, largely because of the number of test requirements. Luckily, many DU‐associations are redundant in the sense that if one test requirement (e.g., a node, an edge or another DU‐association) is covered, other DU‐associations are guaranteed to also be covered. This relationship is called subsumption. Thus, testers can save resources by only covering DU‐associations that are not subsumed by other testing requirements. Although this has the potential to significantly decrease the cost of data flow testing, there are roadblocks to its application. Finding data flow subsumptions correctly and efficiently has been an elusive goal; the savings provided by data flow subsumptions and the cost to find them need to be assessed; and the fault detection ability of a reduced set of DU‐associations and the advantages of data flow testing over node and edge coverage need to be verified. This paper presents novel solutions to these problems. We present algorithms that correctly find data flow subsumptions and are asymptotically less costly than previous algorithms. We present empirical data showing that data flow subsumption is effective at reducing the number of DU‐associations to be tested and can be found at scale. Furthermore, we found that using reduced DU‐associations decreased the fault detection ability by less than 2% and that data flow testing adds testing value beyond node and edge coverage.
{"title":"On subsumption relationships in data flow testing","authors":"M. Chaim, Kesina Baral, J. Offutt, Mario Concilio Neto, Roberto P. A. Araujo","doi":"10.1002/stvr.1843","DOIUrl":"https://doi.org/10.1002/stvr.1843","url":null,"abstract":"Data flow testing creates test requirements as definition‐use (DU) associations, where a definition is a program location that assigns a value to a variable and a use is a location where that value is accessed. Data flow testing is expensive, largely because of the number of test requirements. Luckily, many DU‐associations are redundant in the sense that if one test requirement (e.g. node, edge and DU‐association) is covered, other DU‐associations are guaranteed to also be covered. This relationship is called subsumption. Thus, testers can save resources by only covering DU‐associations that are not subsumed by other testing requirements. Although this has the potential to significantly decrease the cost of data flow testing, there are roadblocks to its application. Finding data flow subsumptions correctly and efficiently has been an elusive goal; the savings provided by data flow subsumptions and the cost to find them need to be assessed; and the fault detection ability of a reduced set of DU‐associations and the advantages of data flow testing over node and edge coverage need to be verified. This paper presents novel solutions to these problems. We present algorithms that correctly find data flow subsumptions and are asymptotically less costly than previous algorithms. We present empirical data that show that data flow subsumption is effective at reducing the number of DU‐associations to be tested and can be found at scale. Furthermore, we found that using reduced DU‐associations decreased the fault detection ability by less than 2%, and data flow testing adds testing value beyond node and edge coverage.","PeriodicalId":49506,"journal":{"name":"Software Testing Verification & Reliability","volume":"75 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2023-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86397693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Combinatorial test generation, also called t‐way testing, is the process of generating sets of input parameters for a system under test, by considering interactions between values of multiple parameters. In order to decrease total testing time, there is an interest in techniques that generate smaller test suites. In our previous work, we used graph techniques to produce high‐quality test suites. However, these techniques require a lot of computing power and memory, which is why this paper investigates distributed computing for t‐way testing. We first introduce our distributed graph colouring method, with new algorithms for building the graph and for colouring it. Second, we present our distributed hypergraph vertex covering method and a new heuristic. Third, we show how to build a distributed IPOG algorithm by leveraging either graph colouring or hypergraph vertex covering as vertical growth algorithms. Finally, we test these new methods on a computer cluster and compare them to existing t‐way testing tools.
{"title":"An investigation of distributed computing for combinatorial testing","authors":"Edmond La Chance, Sylvain Hallé","doi":"10.1002/stvr.1842","DOIUrl":"https://doi.org/10.1002/stvr.1842","url":null,"abstract":"Combinatorial test generation, also called t ‐way testing, is the process of generating sets of input parameters for a system under test, by considering interactions between values of multiple parameters. In order to decrease total testing time, there is an interest in techniques that generate smaller test suites. In our previous work, we used graph techniques to produce high‐quality test suites. However, these techniques require a lot of computing power and memory, which is why this paper investigates distributed computing for t ‐way testing. We first introduce our distributed graph colouring method, with new algorithms for building the graph and for colouring it. Second, we present our distributed hypergraph vertex covering method and a new heuristic. Third, we show how to build a distributed IPOG algorithm by leveraging either graph colouring or hypergraph vertex covering as vertical growth algorithms. Finally, we test these new methods on a computer cluster and compare them to existing t ‐way testing tools.","PeriodicalId":49506,"journal":{"name":"Software Testing Verification & Reliability","volume":"78 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2023-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80166650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This issue contains two papers. Both papers focus on model-based testing. The first paper, "RATE: A Model-Based Testing Approach That Combines Model Refinement and Test Execution" by Andrea Bombarda, Silvia Bonfanti, Angelo Gargantini, Yu Lei, and Feng Duan, presents the RATE approach and its application to three case studies. The RATE approach helps testers verify the compliance of the actual implementation with respect to the specification of the system under test (SUT). In particular, the approach starts from an initial model of the SUT and refines the model based on the testing results of the previous refinement. The approach derives tests from Avalla scenarios written manually during validation or generated automatically from the model using the ATGT tool. The approach then executes the tests on the code implementation to obtain coverage information, in order to identify missing system features or behaviours (not captured in the model) and add them to the next refinement. The authors have applied the approach to three different case studies and have shown the approach's effectiveness. (Recommended by Manuel Nunez.) The second paper, "Coloured Petri Nets for Abstract Test Generation in Software Engineering" by Alvaro Sobrinho, Ially Almeida, Leandro Dias da Silva, Lenardo Chaves e Silva, Adriano Araújo, Tassio Fernandes Costa, and Angelo Perkusich, investigates current approaches to abstract test generation for Coloured Petri Nets (CPN) in order to guide testers in selecting a suitable approach when conducting model-based testing with CPN. In particular, the authors conduct a systematic literature review of current approaches to abstract test generation for CPN, focusing on specific implementations and their advantages and disadvantages. They then conduct an empirical study with formal models of medical systems to compare these approaches. The study results show that CPN provides reliable tests quickly, depending on the abstract test generation approach applied. (Recommended by Manuel Nunez.)
{"title":"Model‐based testing","authors":"Yves Le Traon, Tao Xie","doi":"10.1002/stvr.1841","DOIUrl":"https://doi.org/10.1002/stvr.1841","url":null,"abstract":"This issue contains two papers. Both papers focus on model-based testing. The first paper, “RATE: A Model-Based Testing Approach That Combines Model Refinement and Test Execution” by Andrea Bombarda, Silvia Bonfanti, Angelo Gargantini, Yu Lei, and Feng Duan, presents the RATE approach and its application to three case studies. The RATE approach helps testers verify the compliance of the actual implementation with respect to the specification of the system under test (SUT). In particular, the approach starts from an initial model of the SUT and refines the model based on the testing results of the previous refinement. The approach derives tests from Avalla scenarios written manually during validation or automatically generated from the model using the ATGT tool. The approach then executes the tests on the code implementation to obtain coverage information in order to identify missing system features or behaviours (not captured in the model) and add them to the next refinement. The authors have applied the approach to three different case studies and have shown the approach’s effectiveness. (Recommended by Manuel Nunez). The second paper, “Coloured Petri Nets for Abstract Test Generation in Software Engineering” by Alvaro Sobrinho, Ially Almeida, Leandro Dias da Silva, Lenardo Chaves e Silva, Adriano Araújo, Tassio Fernandes Costa, and Angelo Perkusich, presents an investigation of the current approaches of abstract test generation for Coloured Petri Nets (CPN) in order to guide testers to select a suitable approach when conducting model-based testing using CPN. In particular, the authors conduct a systematic literature review to investigate the current approaches of abstract test generation for CPN and then focus on specific implementations and advantages/disadvantages. The authors then conduct an empirical study with formal models of medical systems to study the current approaches of abstract test generation for CPN. The study results show that CPN provides reliable tests quickly, dependent on the applied approach of abstract test generation. (Recommended by Manuel Nunez).","PeriodicalId":49506,"journal":{"name":"Software Testing Verification & Reliability","volume":"16 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2023-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72718895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The characteristics of the test environment are of vital importance to its ability to support the organization's testing objectives. This paper seeks to address the need for a structured and reliable approach that companies and other organizations can use to optimize their test environments in each individual case. The reported study included a series of interviews with 30 individuals, a series of focus groups with a total of 31 individuals and a cross‐company workshop with 30 participants from five large‐scale companies operating in different industry segments. The study resulted in a list of success factors, including not only characteristics and capabilities existing within a test environment (intrinsic success factors) but also properties that are not inherent to the test environment yet are vital for a successfully implemented test environment (extrinsic success factors). This distinction is important, as the root causes differ and as addressing them requires distinct approaches, not only of technology but also of organization, communication and collaboration. We find that successful implementations of test environments for large‐scale software systems depend primarily on how well they support the company's business strategy, test organization and product testability (extrinsic success factors). Based on this, test environments can then be optimized to improve test environment capabilities, usability and stability (intrinsic success factors). The list of intrinsic and extrinsic success factors was well received by all five companies included in the study, supporting that the intrinsic and extrinsic success factors for test environments can be applied to a large segment of the software industry.
{"title":"Test environments for large‐scale software systems—An industrial study of intrinsic and extrinsic success factors","authors":"Torvald Mårtensson, Göran Ancher, Daniel Ståhl","doi":"10.1002/stvr.1839","DOIUrl":"https://doi.org/10.1002/stvr.1839","url":null,"abstract":"The characteristics of the test environment are of vital importance to its ability to support the organizations testing objectives. This paper seeks to address the need for a structured and reliable approach, which can be used by companies and other organizations to optimize their test environments in each individual case. The reported study included a series of interviews with 30 individuals, a series of focus groups with in total 31 individuals and a cross‐company workshop with 30 participants from five large‐scale companies, operating in different industry segments. The study resulted in a list of success factors, including not only characteristics and capabilities existing within a test environment (intrinsic success factors) but also properties not inherent to the test environment, but still vital for a successfully implemented test environment (extrinsic success factors). This distinction is important, as the root causes differ and as addressing them requires distinct approaches—not only of technology but also of organization, communication and collaboration. We find that successful implementations of test environments for large‐scale software systems depend primarily on how they support the company's business strategy, test organization and product testability (extrinsic success factors). Based on this, test environments can then be optimized to improve test environment capabilities, usability and stability (intrinsic success factors). The list of intrinsic and extrinsic success factors was well received by all five companies included in the study, supporting that the intrinsic and extrinsic success factors for test environments can be applied to a large segment of the software industry.","PeriodicalId":49506,"journal":{"name":"Software Testing Verification & Reliability","volume":"12 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2023-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82402442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Álvaro Sobrinho, Ially Almeida, Leandro Dias da Silva, L. C. E. Silva, Adriano Araújo, Tássio Fernandes Costa, A. Perkusich
Model‐based testing (MBT) relies on models of the system's behaviour to generate abstract tests. With MBT, testers can reuse formal models to increase confidence in critical systems (e.g., medical and avionic systems). In this article, we investigate the current abstract test generation approaches for Coloured Petri Nets (CPN) to provide insights for testers who need to select a suitable one when applying MBT with CPN. We conduct a systematic literature review of the existing abstract test generation approaches designed for CPN. Subsequently, focusing on specific implementations and their advantages and disadvantages, we experiment with formal models of medical systems in an empirical analysis to deepen the discussion of the existing abstract test generation approaches for CPN. Our study shows that CPN provides reliable tests quickly, depending on the abstract test generation approach applied.
{"title":"Coloured Petri nets for abstract test generation in software engineering","authors":"Álvaro Sobrinho, Ially Almeida, Leandro Dias da Silva, L. C. E. Silva, Adriano Araújo, Tássio Fernandes Costa, A. Perkusich","doi":"10.1002/stvr.1837","DOIUrl":"https://doi.org/10.1002/stvr.1837","url":null,"abstract":"Model‐based testing (MBT) relies on models of the system's behaviour to generate abstract tests. Testers can reuse formal models using MBT to increase confidence in critical systems (e.g., medical and avionic systems). In this article, we investigate the current abstract test generation approaches for CPN to provide insights for testers who need to select a suitable one when applying the MBT using CPN. We conduct a systematic literature review to investigate the existing abstract test generation approaches designed for CPN. Subsequently, focusing on specific implementations and advantages/disadvantages, we experiment with formal models of medical systems during our empirical analysis to improve the discussion on existing abstract test generation approaches for CPN. Our study shows that CPN provides reliable tests quickly, depending on the abstract test generation approach applied.","PeriodicalId":49506,"journal":{"name":"Software Testing Verification & Reliability","volume":"1 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2022-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87903507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}