An Empirical Analysis of Microservices Systems Using Consumer-Driven Contract Testing
Hamdy Michael Ayas, Hartmut Fischer, P. Leitner, F. D. O. Neto
2022 48th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), August 2022
DOI: 10.1109/SEAA56994.2022.00022
Testing has a prominent role in revealing faults in software based on microservices. One of the most important discussion points in microservice architectures (MSAs) is the granularity of services, often at different levels of abstraction. Similarly, the granularity of tests in MSAs is reflected in different test types. However, it is challenging to conceptualize how the overall testing architecture comes together when combining tests at different levels of abstraction for microservices. There is no empirical evidence on the overall testing architecture of such microservices implementations. Furthermore, there is a need to empirically understand how the current state of practice resonates with existing best practices for testing. In this study, we mine GitHub to find candidate projects for an in-depth, qualitative assessment of their test artifacts. We analyze 16 repositories that use microservices and include various test artifacts, and focus on four projects that use consumer-driven contract testing. Our results demonstrate how these projects cover different levels of testing. This study (i) drafts a testing architecture including activities and artifacts, and (ii) demonstrates how these align with best practices and guidelines. Our proposed architecture helps categorize system and test artifacts in empirical studies of microservices. Finally, we showcase a view of the boundaries between different levels of testing in systems using microservices.
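The paper itself contains no code, but as a rough illustration of the technique it studies, the sketch below shows the core idea of a consumer-driven contract test: the consumer records the request it makes and the response shape it depends on, and the provider later replays that contract against its own implementation. All names (the services, fields, and handler) are hypothetical and chosen for this sketch only; projects in practice would typically rely on a framework such as Pact rather than hand-rolled checks.

```python
import unittest

# The contract is authored by the consumer: it records the request the consumer
# will make and the response shape it relies on (all names here are hypothetical).
CONTRACT = {
    "request": {"method": "GET", "path": "/users/42"},
    "response": {
        "status": 200,
        # Only the fields and types the consumer actually depends on.
        "body_schema": {"id": int, "name": str, "email": str},
    },
}


def matches_schema(body: dict, schema: dict) -> bool:
    """Check that every field the consumer needs is present with the expected type."""
    return all(key in body and isinstance(body[key], typ) for key, typ in schema.items())


class ProviderVerificationTest(unittest.TestCase):
    """Provider-side verification: replay the consumer's contract against the provider."""

    def fake_provider_handler(self, path: str):
        # Stand-in for the provider's real endpoint, assumed for this sketch.
        return 200, {"id": 42, "name": "Ada", "email": "ada@example.com"}

    def test_provider_honours_consumer_contract(self):
        status, body = self.fake_provider_handler(CONTRACT["request"]["path"])
        self.assertEqual(status, CONTRACT["response"]["status"])
        self.assertTrue(matches_schema(body, CONTRACT["response"]["body_schema"]))


if __name__ == "__main__":
    unittest.main()
```

The point of the pattern, and the reason it sits between unit and end-to-end testing in the levels discussed above, is that the consumer's expectations are verified against the provider without deploying both services together.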