Traffic Prediction-Based VNF Auto-Scaling and Deployment Mechanism for Flexible and Elastic Service Provision

IF 5.8 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · IEEE Transactions on Services Computing · Pub Date: 2024-08-08 · DOI: 10.1109/TSC.2024.3440050
Bo Yi;Jiacheng Wang;Qiang He;Xingwei Wang;Min Huang;Sajal K. Das;Keqin Li
{"title":"Traffic Prediction-Based VNF Auto-Scaling and Deployment Mechanism for Flexible and Elastic Service Provision","authors":"Bo Yi;Jiacheng Wang;Qiang He;Xingwei Wang;Min Huang;Sajal k. Das;Keqin Li","doi":"10.1109/TSC.2024.3440050","DOIUrl":null,"url":null,"abstract":"Network Function Virtualization (NFV) provides a flexible way to provision new services by decoupling network functions from hardware and implementing them as Virtual Network Functions (VNFs). However, the rapid development of technologies greatly promotes the explosion of diverse services, which directly results in the exponential increase of heterogeneous traffic. In addition, such a tremendous amount of heterogeneous traffic will generate bursts in a more dynamic and unexpected manner, so it becomes extremely hard to satisfy the customer demands. Aiming at addressing these challenges, this work proposes a positive and elastic VNF deployment mechanism for service provisioning, which introduces three novelties: \n<italic>1) a Gated Recurrent Unit (GRU) based traffic prediction model is established to predict the unexpected and dynamically changing traffic behaviors in advance with the accuracy over 98%; 2) a closed-loop system is formed, in which the prediction model can learn and evolve continuously to respond to more complex scenarios; 3) different states of VNF are introduced and dynamically switched to deal with the current demands with reduced cost by avoiding frequent VNF initialization and destroy.</i>\n The experimental results indicate that the proposed mechanism outperforms the state-of-the-art methods, which include achieving over 98% prediction accuracy, improving the service acceptance rate by more than 18%, and reducing the overall cost by more than 20%.","PeriodicalId":13255,"journal":{"name":"IEEE Transactions on Services Computing","volume":"17 5","pages":"2959-2973"},"PeriodicalIF":5.8000,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Services Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10631271/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Network Function Virtualization (NFV) provides a flexible way to provision new services by decoupling network functions from hardware and implementing them as Virtual Network Functions (VNFs). However, the rapid development of technologies has driven an explosion of diverse services, which in turn causes an exponential increase in heterogeneous traffic. Moreover, this traffic produces bursts in increasingly dynamic and unexpected ways, making it extremely hard to satisfy customer demands. To address these challenges, this work proposes a proactive and elastic VNF deployment mechanism for service provisioning with three novelties: 1) a Gated Recurrent Unit (GRU) based traffic prediction model is established to predict unexpected and dynamically changing traffic behavior in advance, with accuracy above 98%; 2) a closed-loop system is formed, in which the prediction model learns and evolves continuously to respond to more complex scenarios; 3) different VNF states are introduced and switched dynamically to meet current demand at reduced cost by avoiding frequent VNF initialization and destruction. The experimental results indicate that the proposed mechanism outperforms state-of-the-art methods, achieving over 98% prediction accuracy, improving the service acceptance rate by more than 18%, and reducing the overall cost by more than 20%.
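For intuition, the following minimal sketch illustrates how a GRU-based next-step traffic predictor could feed a simple threshold-driven VNF state switch that keeps instances on standby instead of destroying them during short traffic dips. This is not the authors' implementation; the library choice (PyTorch), the univariate sliding-window framing, and the state names and thresholds are all assumptions made for illustration.

```python
# Illustrative sketch only; not the paper's implementation.
# Assumes a univariate traffic series framed as sliding windows (PyTorch).
from enum import Enum

import torch
import torch.nn as nn


class TrafficGRU(nn.Module):
    """Predict the next traffic sample from a window of past samples."""

    def __init__(self, hidden_size: int = 64, num_layers: int = 2):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden_size,
                          num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window_length, 1)
        out, _ = self.gru(x)
        return self.head(out[:, -1, :])  # last time step -> next value


class VNFState(Enum):
    ACTIVE = "active"        # serving traffic
    STANDBY = "standby"      # instantiated but idle; cheap to resume
    DESTROYED = "destroyed"  # released; re-creation pays full init cost


def next_state(predicted_load: float,
               activate_thr: float = 0.6,
               release_thr: float = 0.1) -> VNFState:
    """Map predicted load (normalized to VNF capacity) to a target state.

    Keeping the instance on STANDBY under moderate load avoids the frequent
    initialization/destruction the mechanism aims to eliminate.
    """
    if predicted_load >= activate_thr:
        return VNFState.ACTIVE
    if predicted_load >= release_thr:
        return VNFState.STANDBY
    return VNFState.DESTROYED


if __name__ == "__main__":
    model = TrafficGRU()
    window = torch.rand(1, 32, 1)        # one window of 32 past samples
    predicted = model(window).item()     # untrained model; illustration only
    print(next_state(max(0.0, min(1.0, predicted))))
```

In the closed-loop setting described in the abstract, such a predictor would also be retrained continuously on newly observed traffic, so prediction error feeds back into both the model and subsequent scaling decisions.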
Source Journal

IEEE Transactions on Services Computing
Categories: COMPUTER SCIENCE, INFORMATION SYSTEMS; COMPUTER SCIENCE, SOFTWARE ENGINEERING
CiteScore: 11.50
Self-citation rate: 6.20%
Articles published: 278
Review time: >12 weeks
Journal introduction: IEEE Transactions on Services Computing encompasses the computing and software aspects of the science and technology of services innovation research and development. It emphasizes algorithmic, mathematical, statistical, and computational methods central to services computing. Topics covered include Service Oriented Architecture, Web Services, Business Process Integration, Solution Performance Management, and Services Operations and Management. The transactions address mathematical foundations, security, privacy, agreement, contract, discovery, negotiation, collaboration, and quality of service for web services, and also cover composite web service creation, business and scientific applications, standards, utility models, business process modeling, integration, and collaboration in the realm of Services Computing.
Latest articles in this journal

Editorial: A Message From the New Editor-in-Chief
Complementary Reasoning With Graph-Based Projection for Cloud API Recommendation
TLSCG: Transfer Learning-Based Smart Contract Generation to Empower Unknown Vulnerability Detection in Blockchain Services
Data Orchestration Service Placement and Resource Allocation Scheme for Cloud-Edge System
Maximizing Edge Throughput in Collaborative Multi-Task Inference With Shareable Model Structures