Correlation Expert Tuning System for Performance Acceleration

Yanfeng Chai, Jiake Ge, Qiang Zhang, Yunpeng Chai, Xin Wang, Qingpeng Zhang
{"title":"Correlation Expert Tuning System for Performance Acceleration","authors":"Yanfeng Chai , Jiake Ge , Qiang Zhang , Yunpeng Chai , Xin Wang , Qingpeng Zhang","doi":"10.1016/j.bdr.2022.100345","DOIUrl":null,"url":null,"abstract":"<div><p>One configuration can not fit all workloads and diverse resources limitations in modern databases. Auto-tuning methods based on reinforcement learning (RL) normally depend on the exhaustive offline training process with a huge amount of performance measurements, which includes large inefficient knobs combinations under a trial-and-error method. The most time-consuming part of the process is not the RL network training but the performance measurements for acquiring the reward values of target goals like higher throughput or lower latency. In other words, the whole process nearly could be considered as a zero-knowledge method without any experience or rules to constrain it. So we propose a correlation expert tuning system (CXTuning) for acceleration, which contains a correlation knowledge model to remove unnecessary training costs and a multi-instance mechanism (MIM) to support fine-grained tuning for diverse workloads. The models define the importance and correlations among these configuration knobs for the user's specified target. But knobs-based optimization should not be the final destination for auto-tuning. Furthermore, we import an abstracted architectural optimization method into CXTuning as a part of the progressive expert knowledge tuning (PEKT) algorithm. Experiments show that CXTuning can effectively reduce the training time and achieve extra performance promotion compared with the state-of-the-art auto-tuning method.</p></div>","PeriodicalId":3,"journal":{"name":"ACS Applied Electronic Materials","volume":null,"pages":null},"PeriodicalIF":4.3000,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2214579622000399/pdfft?md5=959f53ff5a4e8dcd1c236afdbde633e4&pid=1-s2.0-S2214579622000399-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Electronic Materials","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2214579622000399","RegionNum":3,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Abstract
No single configuration fits all workloads and the diverse resource limitations of modern databases. Auto-tuning methods based on reinforcement learning (RL) typically depend on an exhaustive offline training process with a huge number of performance measurements, including many inefficient knob combinations explored by trial and error. The most time-consuming part of this process is not training the RL network but the performance measurements needed to acquire reward values for target goals such as higher throughput or lower latency. In other words, the whole process is nearly a zero-knowledge method, with no experience or rules to constrain it. We therefore propose a correlation expert tuning system (CXTuning) for acceleration, which contains a correlation knowledge model that removes unnecessary training costs and a multi-instance mechanism (MIM) that supports fine-grained tuning for diverse workloads. The model defines the importance of the configuration knobs, and the correlations among them, with respect to the user's specified target. However, knob-based optimization should not be the final destination of auto-tuning. We therefore also import an abstracted architectural optimization method into CXTuning as part of the progressive expert knowledge tuning (PEKT) algorithm. Experiments show that CXTuning effectively reduces training time and achieves additional performance gains compared with the state-of-the-art auto-tuning method.
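The abstract gives no implementation details, but its core idea (using knob importance and inter-knob correlations to shrink the search space before any costly benchmark run) can be illustrated with a short sketch. The Python below is a minimal, hypothetical illustration: the MySQL/InnoDB-style knob names, the importance and correlation scores, and both thresholds are assumptions made for the example, not values or interfaces from the CXTuning paper.

```python
# Hypothetical sketch of correlation-based knob pruning, in the spirit of
# CXTuning's correlation knowledge model. All knob names, scores, and
# thresholds are illustrative assumptions, not values from the paper.

# Estimated importance of each knob for the user's target (e.g., throughput).
importance = {
    "innodb_buffer_pool_size": 0.92,
    "innodb_log_file_size": 0.71,
    "innodb_io_capacity": 0.55,
    "max_connections": 0.20,
    "table_open_cache": 0.12,
}

# Estimated pairwise correlation between knobs' effects on the target.
correlation = {
    ("innodb_log_file_size", "innodb_io_capacity"): 0.85,
    ("max_connections", "table_open_cache"): 0.40,
}

IMPORTANCE_MIN = 0.30   # ignore knobs with little effect on the target
CORRELATION_MAX = 0.80  # treat strongly correlated knobs as redundant


def prune_knobs(importance, correlation):
    """Keep only knobs that are important and mutually non-redundant."""
    kept = {knob for knob, score in importance.items() if score >= IMPORTANCE_MIN}
    for (a, b), corr in correlation.items():
        if corr >= CORRELATION_MAX and a in kept and b in kept:
            # Of two strongly correlated knobs, keep the more important one.
            kept.discard(a if importance[a] < importance[b] else b)
    return sorted(kept, key=importance.get, reverse=True)


if __name__ == "__main__":
    # Only the pruned knobs are handed to the RL tuner, so every expensive
    # benchmark run explores a much smaller configuration space.
    print(prune_knobs(importance, correlation))
    # -> ['innodb_buffer_pool_size', 'innodb_log_file_size']
```

Each knob removed this way eliminates an entire dimension of trial-and-error measurements, which is exactly the cost the abstract identifies as dominant: the benchmark runs needed to obtain reward values, not the RL training itself.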