Hardware Accelerations for Container Engine to Assist Container Migration on Client Devices
Shreyansh Chhajer, Akhilesh S. Thyagaturu, Anil Yatavelli, P. Lalwaney, M. Reisslein, Kannan G. Raja
2020 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN), July 2020. DOI: 10.1109/LANMAN49260.2020.9153273
Citations: 1
Abstract
The increasing computing capabilities of client devices and the growing demand for ultra-low-latency services make it prudent to migrate some microservice container computations from the cloud and multi-access edge computing (MEC) to the client devices. Migrating a container image requires compression and decompression, which are computationally demanding. We quantitatively examine the hardware acceleration of container image compression and decompression on a client device. Specifically, we compare Intel® QuickAssist Technology (QAT) hardware acceleration with software compression/decompression. We find that QAT speeds up compression by a factor of over 7 compared to the single-core GZIP software, and speeds up decompression by a factor of over 1.6 compared to the multi-core PIGZ software. QAT also reduces CPU core utilization by over 15% for large container images. These QAT benefits come at the cost of Input/Output (IO) memory access rates of up to 900 Mbyte/s, whereas software compression/decompression requires no IO memory access. The presented evaluation results provide reference benchmarks for the latencies achievable for container image instantiation and migration, with and without hardware-accelerated compression and decompression of container images.
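For readers who want to reproduce the software reference points that the QAT measurements are compared against, the sketch below (not from the paper; the tarball name, image export step, and thread count are illustrative assumptions) times single-core gzip and multi-core pigz compression of an exported container image tarball.

```python
#!/usr/bin/env python3
"""Minimal sketch (assumption, not from the paper): time the software
compression baselines (single-core gzip, multi-core pigz) on an exported
container image tarball."""
import subprocess
import time
from pathlib import Path

# Hypothetical input: a container image exported beforehand, e.g. with
#   docker save <image> -o container_image.tar
IMAGE_TAR = Path("container_image.tar")

def timed(cmd):
    """Run a command and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

# Single-core software baseline (GZIP): -k keeps the input, -f overwrites any existing output.
t_gzip = timed(["gzip", "-k", "-f", str(IMAGE_TAR)])

# Multi-core software baseline (PIGZ) with 4 worker threads (thread count is illustrative).
t_pigz = timed(["pigz", "-k", "-f", "-p", "4", str(IMAGE_TAR)])

size_mb = IMAGE_TAR.stat().st_size / 1e6
print(f"image size:        {size_mb:.1f} MB")
print(f"gzip (1 core):     {t_gzip:.2f} s")
print(f"pigz (4 threads):  {t_pigz:.2f} s")
```

A QAT-accelerated run would instead route the same data through Intel's QATzip software stack; that path is not sketched here because its invocation depends on the installed QAT hardware, driver, and configuration.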