E. G. Radhika, G. Sudha Sadasivam, J. Fenila Naomi
Title: An Efficient Predictive Technique to Autoscale the Resources for Web Applications in Private Cloud
DOI: 10.1109/AEEICB.2018.8480899 (https://doi.org/10.1109/AEEICB.2018.8480899)
Published in: 2018 Fourth International Conference on Advances in Electrical, Electronics, Information, Communication and Bio-Informatics (AEEICB), February 2018
Citations: 10
Abstract
Cloud computing refers to the delivery of computing resources over a network based on user demand. Web applications may experience varying workloads at different times, so automatic provisioning needs to work efficiently and reliably at any point in time. Autoscaling is a cloud computing feature that scales resources according to demand, providing better fault tolerance, availability, and cost management. Although autoscaling is beneficial, it is not easy to implement: effective autoscaling requires techniques to foresee the future workload as well as the resources needed to handle it. A reactive autoscaling strategy adds or removes resources based on preset thresholds. A predictive strategy addresses issues such as rapid spikes in demand, outages, and variable traffic patterns by taking the necessary scaling actions beforehand. In the proposed work, Auto Regressive Integrated Moving Average (ARIMA) and Recurrent Neural Network–Long Short-Term Memory (RNN-LSTM) techniques are used to predict the future workload based on CPU and RAM usage rates collected from a three-tier web application deployed in a private cloud. Comparing the performance metrics of the two techniques, the RNN-LSTM deep learning technique gives the minimum error rate and can be applied to large datasets to predict the future workload of web applications in a private cloud.
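The distinction the abstract draws between reactive and predictive autoscaling can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's implementation: the thresholds and function names are invented, and a naive linear-trend extrapolation stands in for the ARIMA and RNN-LSTM forecasters the authors actually evaluate.

```python
# Hypothetical sketch: reactive vs. predictive autoscaling decisions.
# A naive linear-trend forecast stands in for ARIMA / RNN-LSTM.

SCALE_UP_THRESHOLD = 70.0    # % CPU: add capacity above this
SCALE_DOWN_THRESHOLD = 30.0  # % CPU: remove capacity below this

def reactive_decision(cpu_now):
    """Reactive strategy: act only on the current usage reading."""
    if cpu_now > SCALE_UP_THRESHOLD:
        return "scale-up"
    if cpu_now < SCALE_DOWN_THRESHOLD:
        return "scale-down"
    return "no-op"

def forecast_next(history):
    """Naive linear-trend forecast of the next CPU reading
    (a placeholder for the paper's ARIMA / RNN-LSTM models)."""
    return history[-1] + (history[-1] - history[-2])

def predictive_decision(history):
    """Predictive strategy: apply the same thresholds to the
    forecast, so scaling happens before the spike arrives."""
    return reactive_decision(forecast_next(history))

cpu_history = [40.0, 55.0, 68.0]            # steadily rising load
print(reactive_decision(cpu_history[-1]))   # "no-op": 68% is still under 70%
print(predictive_decision(cpu_history))     # "scale-up": forecast of 81% crosses it
```

On a rising trend, the reactive rule stays idle until the threshold is actually breached, while the predictive rule scales up one step early, which is exactly the advantage the abstract claims for forecasting-based provisioning.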