An Efficient Predictive Technique to Autoscale the Resources for Web Applications in Private Cloud

E. G. Radhika, G. Sudha Sadasivam, J. Fenila Naomi
DOI: 10.1109/AEEICB.2018.8480899
Published in: 2018 Fourth International Conference on Advances in Electrical, Electronics, Information, Communication and Bio-Informatics (AEEICB), February 2018
Citations: 10

Abstract

Cloud computing refers to the delivery of computing resources over the network based on user demand. Some web applications experience different workloads at different times, so automatic provisioning needs to work efficiently and viably at any point in time. Autoscaling is a feature of cloud computing that can scale resources according to demand, providing better fault tolerance, availability and cost management. Although autoscaling is beneficial, it is not easy to implement: effective autoscaling requires techniques to foresee the future workload as well as the resources needed to handle it. A reactive autoscaling strategy adds or removes resources based on a set threshold. A predictive strategy addresses issues such as rapid spikes in demand, outages and variable traffic patterns from web applications by taking the necessary scaling actions beforehand. In the proposed work, Autoregressive Integrated Moving Average (ARIMA) and Recurrent Neural Network–Long Short-Term Memory (RNN-LSTM) techniques are used to predict the future workload based on CPU and RAM usage rates collected from a three-tier web application deployed in a private cloud. On comparing the performance metrics of the two techniques, the RNN-LSTM deep learning technique gives the lowest error rate and can be applied to large datasets for predicting the future workload of web applications in a private cloud.
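The contrast the abstract draws between reactive (threshold-based) and predictive scaling can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: the thresholds, the instance counts, and the least-squares AR(p) model (the autoregressive core of ARIMA, used here in place of full ARIMA or RNN-LSTM) are all assumptions.

```python
import numpy as np

def reactive_scale(cpu_usage, instances, upper=0.8, lower=0.3):
    """Reactive strategy: scale only after a fixed CPU threshold is crossed.
    Thresholds are illustrative, not from the paper."""
    if cpu_usage > upper:
        return instances + 1          # demand spike: add a replica
    if cpu_usage < lower and instances > 1:
        return instances - 1          # idle: release a replica
    return instances

def fit_ar(series, p=2):
    """Least-squares fit of an AR(p) model, the autoregressive core of ARIMA."""
    n = len(series)
    # Row t holds the p most recent lagged observations plus an intercept.
    X = np.column_stack(
        [series[p - 1 - i : n - 1 - i] for i in range(p)] + [np.ones(n - p)]
    )
    coef, *_ = np.linalg.lstsq(X, series[p:], rcond=None)
    return coef                        # [phi_1, ..., phi_p, intercept]

def forecast_next(series, coef):
    """One-step-ahead forecast from the fitted AR coefficients."""
    p = len(coef) - 1
    lags = series[::-1][:p]            # most recent observation first
    return float(coef[:p] @ lags + coef[-1])

# Predictive strategy: forecast the next CPU sample and scale ahead of the
# spike instead of after it. The trend below is synthetic example data.
cpu_history = np.array([0.50, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80])
coef = fit_ar(cpu_history, p=2)
predicted = forecast_next(cpu_history, coef)   # extrapolates the trend to 0.85
planned = reactive_scale(predicted, instances=2)
print(f"predicted CPU: {predicted:.2f}, planned instances: {planned}")
```

Fed the *forecast* rather than the current reading, the same threshold rule scales out one step early, which is the essence of the predictive strategy; the paper's contribution is replacing this toy AR model with ARIMA and RNN-LSTM and comparing their error rates.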