Characterizing network performance of single-node large-scale container deployments
Conrado Boeira, M. Neves, T. Ferreto, I. Haque
2021 IEEE 10th International Conference on Cloud Networking (CloudNet), November 8, 2021
DOI: 10.1109/CloudNet53349.2021.9657138
Cloud services have shifted from complex monolithic designs to hundreds of loosely coupled microservices in recent years. These microservices communicate via pre-defined APIs (e.g., RPC) and are usually implemented on top of containers. To make the microservice model profitable, cloud providers often co-locate many containers on a single (virtual) machine, thus achieving high server utilization. Although overlooked by previous work, the challenge of providing high-quality network connectivity to multiple containers running on the same host is crucial for overall cloud service performance in this scenario. For that reason, this paper focuses on identifying the overheads and bottlenecks caused by an increasing number of concurrent containers running on a single node, particularly from a networking perspective. Through an extensive set of experiments, we show that networking performance is mostly restricted by CPU capacity (even for I/O-intensive workloads), that containers can suffer considerable interference originating from packet processing, and that proper core-scheduling policies can significantly improve connection throughput. Ultimately, our findings help pave the way toward more efficient large-scale microservice deployments.
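The abstract's point about core-scheduling policies can be illustrated with a minimal sketch. This is an assumption of ours, not the paper's actual experimental setup: on Linux, one ingredient of such a policy is pinning a process (or a container's threads) to a dedicated set of cores, keeping application work away from cores that handle packet processing. The specific core split below is purely illustrative.

```python
import os

# Pin the current process to CPU core 0 only, leaving the remaining
# cores free for other work (e.g., kernel packet-processing softirqs).
# This mirrors, in miniature, the kind of core-scheduling policy the
# abstract refers to; real deployments would choose the core sets
# based on NIC interrupt affinity and workload characteristics.
os.sched_setaffinity(0, {0})  # 0 = the calling process itself

# Read back the affinity mask to confirm the pinning took effect.
print(sorted(os.sched_getaffinity(0)))  # -> [0]
```

For whole containers, the analogous knob would be a cpuset (e.g., Docker's `--cpuset-cpus` flag), which constrains every process in the container the same way; `os.sched_setaffinity` is simply the per-process Linux primitive underneath.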