Record and Replay of Online Traffic for Microservices with Automatic Mocking Point Identification

Jiangchao Liu, Jierui Liu, Peng Di, A. Liu, Zexin Zhong
{"title":"Record and Replay of Online Traffic for Microservices with Automatic Mocking Point Identification","authors":"Jiangchao Liu, Jierui Liu, Peng Di, A. Liu, Zexin Zhong","doi":"10.1145/3510457.3513029","DOIUrl":null,"url":null,"abstract":"Using recorded online traffic for the regression testing of web applications has become a common practice in industry. However, this “record and replay” on microservices is challenging because simply recorded online traffic (i.e., values for variables or input/output for function calls) often cannot be successfully replayed because microservices often have various dependencies on the complicated online environment. These dependencies include the states of underlying systems, internal states (e.g., caches), and external states (e.g., interaction with other microservices/middleware). Considering the large size and the complexity of industrial microservices, an automatic, scalable, and precise identification of such dependencies is needed as manual identification is time-consuming. In this paper, we propose an industrial grade solution to identifying all dependencies, and generating mocking points automatically using static program analysis techniques. Our solution has been deployed in a large Internet company (i.e., Ant Group) to handle hundreds of microservices, which consists of hundreds of millions lines of code, with high success rate in replay (99% on average). Moreover, our framework can boost the efficiency of the testing system by refining dependencies that must not affect the behavior of a microservice. 
Our experimental results show that our approach can filter out 73.1% system state dependency and 71.4% internal state dependency, which have no effect on the behavior of the microservice.","PeriodicalId":119790,"journal":{"name":"2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP)","volume":"407 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3510457.3513029","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Using recorded online traffic for the regression testing of web applications has become common practice in industry. However, this "record and replay" is challenging for microservices: naively recorded online traffic (i.e., values of variables or the input/output of function calls) often cannot be replayed successfully, because microservices have various dependencies on a complicated online environment. These dependencies include the states of underlying systems, internal states (e.g., caches), and external states (e.g., interactions with other microservices and middleware). Given the large size and complexity of industrial microservices, manual identification of such dependencies is time-consuming, so an automatic, scalable, and precise identification is needed. In this paper, we propose an industrial-grade solution that identifies all such dependencies and generates mocking points automatically using static program analysis techniques. Our solution has been deployed at a large Internet company (Ant Group) to handle hundreds of microservices comprising hundreds of millions of lines of code, with a high replay success rate (99% on average). Moreover, our framework can boost the efficiency of the testing system by filtering out dependencies that cannot affect the behavior of a microservice. Our experimental results show that our approach filters out 73.1% of system state dependencies and 71.4% of internal state dependencies that have no effect on the microservice's behavior.
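To make the idea of a "mocking point" concrete, the following is a minimal illustrative sketch (not the paper's implementation, which uses static analysis over Java microservices): an environment-dependent call is wrapped so that in record mode its real I/O is captured, and in replay mode the recorded value is served instead of touching the live dependency. The function `fetch_user_balance` and all names here are hypothetical stand-ins.

```python
# Illustrative sketch of a mocking point: in record mode the real result
# of an environment-dependent call is captured; in replay mode the
# recorded value is returned so the test never hits the live dependency.
import functools

RECORDINGS = {}   # (function name, args) -> recorded return value
REPLAY = False    # toggled between the record run and the replay run

def mocking_point(fn):
    """Record the I/O of an environment-dependent call, or replay it."""
    @functools.wraps(fn)
    def wrapper(*args):
        key = (fn.__name__, args)
        if REPLAY:
            return RECORDINGS[key]        # serve the recorded response
        result = fn(*args)                # hit the real dependency
        RECORDINGS[key] = result          # capture for later replay
        return result
    return wrapper

@mocking_point
def fetch_user_balance(user_id):
    # Stands in for an RPC to another microservice (an external state
    # dependency); it returns a fixed value only for this demo.
    return {"user_id": user_id, "balance": 100}

# Record phase: online traffic exercises the real dependency.
live = fetch_user_balance("u1")

# Replay phase: the same call is served from the recording.
REPLAY = True
replayed = fetch_user_balance("u1")
assert replayed == live
```

The point of automatic identification in the paper is deciding *which* calls need such a wrapper: external interactions must be mocked, while dependencies proven (by static analysis) not to affect the microservice's behavior can be skipped, shrinking the recording.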