{"title":"Request Distribution for Heterogeneous Database Server Clusters with Processing Time Estimation","authors":"Minato Omori, H. Nishi","doi":"10.1109/INDIN.2018.8471931","DOIUrl":null,"url":null,"abstract":"Recently, data traffic on the Internet has increased due to the rapid growth of various Internet-based services. The convergence of user requests means that servers are overloaded. To solve this problem, service providers generally install multiple servers and distribute requests using a load balancer. The existing load balancing algorithms do not estimate the size of the load of unknown requests. However, the requested contents are heterogeneous and complex, so the size of the load is dependent on the servers and the contents of the requests. In this study, we propose a load balancing algorithm that distributes the requests based on estimates of the processing time, which avoids mismatches between the characteristics of servers and the request contents. The processing time for requests is estimated based on the requested contents by online machine learning, and a strategy to cover the latency of machine learning is proposed and partially conducted. To test the algorithm, we built a model of multiple database servers and performed an experiment using real log data for database requests. The simulation results showed that the proposed algorithm reduced the average processing time for requests by 94.5% compared with round robin and by 28.3% compared with least connections.","PeriodicalId":6467,"journal":{"name":"2018 IEEE 16th International Conference on Industrial Informatics (INDIN)","volume":"3 1","pages":"278-283"},"PeriodicalIF":0.0000,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE 16th International Conference on Industrial Informatics (INDIN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INDIN.2018.8471931","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Recently, data traffic on the Internet has increased due to the rapid growth of various Internet-based services. When user requests are concentrated, servers become overloaded. To address this problem, service providers generally install multiple servers and distribute requests using a load balancer. Existing load balancing algorithms do not estimate the load imposed by previously unseen requests, even though requested contents are heterogeneous and complex, so the load depends on both the server and the content of each request. In this study, we propose a load balancing algorithm that distributes requests based on estimates of their processing time, which avoids mismatches between the characteristics of the servers and the contents of the requests. The processing time of each request is estimated from its content by online machine learning, and a strategy to hide the latency of the machine learning step is proposed and partially evaluated. To test the algorithm, we built a model of multiple database servers and performed an experiment using real log data of database requests. The simulation results showed that the proposed algorithm reduced the average processing time of requests by 94.5% compared with round robin and by 28.3% compared with least connections.
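The sketch below is a minimal illustration of the general idea described in the abstract, not the authors' implementation: each request is routed to the server whose estimated completion time (current backlog plus a predicted processing time) is lowest, and the per-server predictors are updated online as actual processing times are observed. All names (Balancer, route, observe) and the choice of scikit-learn's SGDRegressor as the online learner are illustrative assumptions.

```python
# Hypothetical sketch of processing-time-aware request distribution.
# Assumptions: per-server online regression models (SGDRegressor with
# partial_fit) predict processing time from request features; the router
# picks the server with the smallest backlog + predicted time.

import numpy as np
from sklearn.linear_model import SGDRegressor


class Balancer:
    def __init__(self, num_servers):
        # One model per server: the same request may cost different
        # amounts of time on heterogeneous hardware.
        self.models = [SGDRegressor() for _ in range(num_servers)]
        self.backlog = [0.0] * num_servers   # estimated outstanding work per server
        self.warm = [False] * num_servers    # whether each model has seen data yet

    def route(self, features):
        """Pick the server with the smallest backlog + predicted processing time."""
        x = np.asarray(features, dtype=float).reshape(1, -1)
        best, best_cost, best_est = 0, float("inf"), 0.0
        for i, model in enumerate(self.models):
            est = float(model.predict(x)[0]) if self.warm[i] else 0.0
            cost = self.backlog[i] + max(est, 0.0)
            if cost < best_cost:
                best, best_cost, best_est = i, cost, est
        self.backlog[best] += max(best_est, 0.0)
        return best

    def observe(self, server, features, actual_time):
        """Update the chosen server's model with the measured processing time."""
        x = np.asarray(features, dtype=float).reshape(1, -1)
        self.models[server].partial_fit(x, [actual_time])
        self.warm[server] = True
        self.backlog[server] = max(self.backlog[server] - actual_time, 0.0)
```

In a usage loop, the caller would featurize each incoming request (e.g., query type, table, estimated result size), call route() to choose a server, and call observe() once the request finishes so the estimates improve over time; how the paper actually featurizes requests and hides the learning latency is described in the full text, not in this sketch.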