
Journal of Network and Computer Applications: Latest Publications

SAT-Net: A staggered attention network using graph neural networks for encrypted traffic classification
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-11-15 DOI: 10.1016/j.jnca.2024.104069
Zhiyuan Li, Hongyi Zhao, Jingyu Zhao, Yuqi Jiang, Fanliang Bu
With the increasing complexity of network protocol traffic in the modern network environment, the task of traffic classification faces significant challenges. Existing methods lack research on the characteristics of traffic byte data and suffer from insufficient model generalization, leading to decreased classification accuracy. In response, we propose a method for encrypted traffic classification based on a Staggered Attention Network using Graph Neural Networks (SAT-Net), which takes into consideration both computer network topology and user interaction processes. First, we design a Packet Byte Graph (PBG) to efficiently capture the byte features of flows and their relationships, thereby transforming the encrypted traffic classification problem into a graph classification problem. Second, we construct a GNN-based PBG learner, in which a feature remapping layer and a staggered attention layer are used for feature propagation and fusion, respectively, enhancing the robustness of the model. Experiments on multiple encrypted traffic datasets of different types demonstrate that SAT-Net outperforms various advanced methods in identifying VPN traffic, Tor traffic, and malicious traffic, showing strong generalization capability.
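As an illustration of the graph-construction idea, here is a minimal packet-byte-graph builder, assuming nodes are byte values and weighted directed edges count adjacent-byte co-occurrences; the paper's exact PBG design may differ:

```python
from collections import defaultdict

def build_packet_byte_graph(payload: bytes):
    """Toy packet-byte graph: nodes are distinct byte values; a directed
    edge a -> b is weighted by how often byte b follows byte a in the
    payload. An illustrative guess at a PBG-style structure, not the
    paper's exact design."""
    adj = defaultdict(lambda: defaultdict(int))
    for a, b in zip(payload, payload[1:]):
        adj[a][b] += 1  # accumulate adjacent-byte co-occurrence counts
    return {u: dict(vs) for u, vs in adj.items()}

# A few bytes resembling a TLS record header, purely for demonstration.
g = build_packet_byte_graph(b"\x16\x03\x01\x16\x03")
```

Such a graph (one per flow or packet) would then be handed to a GNN-based graph classifier.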
Citations: 0
FCG-MFD: Benchmark function call graph-based dataset for malware family detection
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-11-07 DOI: 10.1016/j.jnca.2024.104050
Hassan Jalil Hadi, Yue Cao, Sifan Li, Naveed Ahmad, Mohammed Ali Alshara
Cyber crimes related to malware families are on the rise. This growth persists despite the prevalence of various antivirus software and approaches for malware detection and classification. Security experts have applied Machine Learning (ML) techniques to identify these cyber crimes. However, such approaches demand up-to-date malware datasets for continuous improvement amid the evolving sophistication of malware strains. Thus, we present FCG-MFD, a benchmark dataset with extensive Function Call Graphs (FCGs) for malware family detection. This dataset enables security systems to remain resilient against emerging malware families. It comprises two sub-datasets (FCG and Metadata; 100,000 samples) curated from VirusSamples, Virusshare, VirusSign, theZoo, Vx-underground, and MalwareBazaar using FCGs and metadata to optimize the efficacy of ML algorithms. We also propose a new malware analysis technique using FCGs and graph embedding networks, offering a solution to the complexity of feature engineering in ML-based malware analysis. Our approach extracts semantic features via Natural Language Processing (NLP), treating functions as sentences and instructions as words. We leverage a node2vec-based graph embedding network to generate malware embedding vectors, which combine structural and semantic features to enable automated and efficient malware analysis. We use the two datasets (FCG and Metadata) to assess FCG-MFD performance; F1-scores of 99.14% and 99.28% are competitive with state-of-the-art (SOTA) methods.
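The walks that feed a node2vec-style embedding can be sketched as biased random walks over a function call graph; the toy graph, the p/q bias values, and the walk settings below are illustrative assumptions, not the paper's configuration:

```python
import random

def node2vec_walks(graph, walk_len=5, walks_per_node=2, p=1.0, q=0.5, seed=7):
    """Generate node2vec-style biased random walks over an adjacency-list
    dict. Sketch only: the paper's exact walk and embedding settings are
    not specified here."""
    rng = random.Random(seed)
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_len:
                cur = walk[-1]
                nbrs = graph.get(cur, [])
                if not nbrs:
                    break  # dead end: stop this walk early
                if len(walk) == 1:
                    walk.append(rng.choice(nbrs))
                    continue
                prev = walk[-2]
                # node2vec bias: return to prev with weight 1/p, stay in
                # prev's neighborhood with weight 1, explore outward with 1/q.
                weights = [
                    1.0 / p if n == prev
                    else 1.0 if n in graph.get(prev, [])
                    else 1.0 / q
                    for n in nbrs
                ]
                walk.append(rng.choices(nbrs, weights=weights)[0])
            walks.append(walk)
    return walks

# Hypothetical miniature function call graph.
fcg = {"main": ["parse", "send"], "parse": ["main"], "send": ["parse", "main"]}
walks = node2vec_walks(fcg)
```

The resulting walks would then be fed to a skip-gram model (as in node2vec) to produce the embedding vectors the abstract describes.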
Citations: 0
Particle swarm optimization tuned multi-headed long short-term memory networks approach for fuel prices forecasting
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-11-07 DOI: 10.1016/j.jnca.2024.104048
Andjela Jovanovic, Luka Jovanovic, Miodrag Zivkovic, Nebojsa Bacanin, Vladimir Simic, Dragan Pamucar, Milos Antonijevic
Increasing global energy demands and decreasing stocks of fossil fuels have led to a resurgence of research into energy forecasting. Artificial intelligence, specifically time series forecasting, holds great potential to improve predictions of cost and demand, with many lucrative applications across several fields. Many factors influence prices on a global scale, from socio-economic conditions to distribution, availability, and international policy, and all of them need to be considered to make an accurate forecast. An analysis of the current literature reveals room for improvement in this domain. Therefore, this work proposes and explores the potential of multi-headed long short-term memory models for gasoline price forecasting, a problem not previously tackled with multi-headed models. Additionally, since the computational requirements of such models are relatively high, the work focuses on lightweight approaches with a relatively low number of neurons per layer, trained over a small number of epochs. Because algorithm performance can depend heavily on appropriate hyper-parameter selection, a modified variant of the particle swarm optimization algorithm is also set forth to help optimize the model's architecture and training parameters. A comparative analysis between several contemporary optimizers is conducted using energy data collected from multiple public sources, and the outcomes are put through meticulous statistical validation to ascertain the significance of the findings. The best-constructed models attained a mean square error of just 0.044025 with an R-squared of 0.911797, suggesting potential for real-world use.
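The PSO tuning loop can be sketched in a few lines; the objective below is a toy stand-in for validation loss (no LSTM is trained), and the swarm parameters are illustrative defaults, not the paper's modified variant:

```python
import random

def pso_minimize(f, bounds, n_particles=12, iters=40, seed=3):
    """Minimal particle swarm optimizer standing in for the tuner the
    paper uses on LSTM hyperparameters (e.g. neurons per layer, epochs)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)  # clamp to bounds
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy "validation loss" surface with an optimum at 32 neurons, 20 epochs.
loss = lambda x: (x[0] - 32) ** 2 + (x[1] - 20) ** 2
best, best_val = pso_minimize(loss, [(8, 128), (5, 50)])
```

In the real pipeline, `f` would train and validate one LSTM configuration per call, which is why lightweight models matter.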
Citations: 0
Deep learning frameworks for cognitive radio networks: Review and open research challenges
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-11-06 DOI: 10.1016/j.jnca.2024.104051
Senthil Kumar Jagatheesaperumal, Ijaz Ahmad, Marko Höyhtyä, Suleman Khan, Andrei Gurtov
Deep learning has been proven to be a powerful tool for addressing the most significant issues in cognitive radio networks, such as spectrum sensing, spectrum sharing, resource allocation, and security attacks. Utilizing deep learning techniques in cognitive radio networks can significantly enhance a network's capability to adapt to changing environments and improve the overall system's efficiency and reliability. As the demand for higher data rates and connectivity increases, B5G/6G wireless networks are expected to enable significant new services and applications. Therefore, the significance of deep learning in addressing cognitive radio network challenges cannot be overstated. This review article provides valuable insights into potential solutions that can serve as a foundation for the development of future B5G/6G services. By leveraging the power of deep learning, cognitive radio networks can pave the way for the next generation of wireless networks, capable of meeting ever-increasing demands for higher data rates, improved reliability, and security.
Citations: 0
Joint VM and container consolidation with auto-encoder based contribution extraction of decision criteria in Edge-Cloud environment
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-11-05 DOI: 10.1016/j.jnca.2024.104049
Farkhondeh Kiaee, Ehsan Arianyan
In recent years, the emergence of huge Edge-Cloud environments has brought great challenges, such as ever-increasing energy demand, the widespread adoption of Internet of Things (IoT) devices, and the goals of efficiency and reliability. Containers have become increasingly popular for encapsulating various services, and container migration among Edge-Cloud nodes may enable new use cases in various IoT domains. In this study, an efficient joint VM and container consolidation solution is proposed for the Edge-Cloud environment. The proposed method uses Auto-Encoder (AE) and TOPSIS modules for two consolidation subproblems: the Joint VM and Container Multi-criteria Migration Decision (AE-TOPSIS-JVCMMD) and the Edge-Cloud Power SLA Aware (AE-TOPSIS-ECPSA) VM placement. The AE module extracts the contributions of the different criteria, and the scores of all alternatives are then computed. By combining the non-linear contribution-learning ability of the AE algorithm with the intelligent ranking of the TOPSIS algorithm, the proposed method avoids the bias of conventional multi-criteria approaches toward alternatives that score well on two or more dependent criteria. Simulations conducted using the CloudSim simulator confirm the effectiveness of the proposed policies, demonstrating reductions of 41.5%, 30.13%, 12.9%, 10.3%, 58.2%, and 56.1% in energy consumption, SLA violations, response time, running cost, number of VM migrations, and number of container migrations, respectively, compared with the state of the art.
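The TOPSIS ranking stage can be illustrated with a plain implementation; the criteria, weights, and host values below are hypothetical, and the auto-encoder-derived criterion contributions that the paper adds on top are omitted:

```python
def topsis(matrix, weights, benefit):
    """Plain TOPSIS: vector-normalise each criterion, weight it, then score
    alternatives by closeness to the ideal and distance from the anti-ideal."""
    m, n = len(matrix), len(matrix[0])
    norms = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    r = [[matrix[i][j] / norms[j] * weights[j] for j in range(n)] for i in range(m)]
    # Ideal takes the best value per criterion (max for benefit, min for cost).
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*r))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*r))]
    scores = []
    for row in r:
        d_pos = sum((x - b) ** 2 for x, b in zip(row, ideal)) ** 0.5
        d_neg = sum((x - w) ** 2 for x, w in zip(row, anti)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))  # closeness coefficient in [0, 1]
    return scores

# Three candidate hosts scored on (free CPU share: benefit, power draw W: cost).
scores = topsis([[0.8, 120], [0.4, 90], [0.6, 150]],
                weights=[0.5, 0.5], benefit=[True, False])
```

A consolidation policy would migrate workloads toward the highest-scoring host; the paper replaces the fixed `weights` with contributions learned by the auto-encoder.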
Citations: 0
RAaaS: Resource Allocation as a Service in multiple cloud providers
IF 8.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2023-11-14 DOI: 10.1016/j.jnca.2023.103790
Cristiano Costa Argemon Vieira, Luiz Fernando Bittencourt, Thiago Augusto Lopes Genez, Maycon Leone M. Peixoto, Edmundo Roberto Mauro Madeira

Cloud users have specific computing needs for their applications, while cloud providers offer a variety of computing products and services on the Internet. These two cloud players make deals through service level agreements (SLAs), in which, for instance, prices and quality of service (QoS) levels are defined. From the cloud user's point of view, building a robust set of SLAs becomes a challenging problem when multiple cloud providers are present in the market. Allocating cloud resources to run complex applications with guaranteed reliable, secure, and acceptable response times is not an easy task, and this paper aims to tackle this problem. This work describes a resource allocation service that optimizes a user's request for cloud resources (virtual machines, VMs) across multiple Infrastructure-as-a-Service (IaaS) cloud providers. The Resource-Allocation-as-a-Service (RAaaS) proposed in this paper works as a standalone service between cloud users and cloud providers, and it relies on three different requirements: reliability, processing, and mutual trust. The proposed resource allocation service is carried out using the three most common VM billing models: on-demand, reserved, and spot, where the spot cost model is employed to furnish low-cost resources for the application allocation and improve its reliability. The contributions of this paper are threefold: (i) a three-dimension SLA encompassing reliability, processing, and trust; (ii) an integer linear program (ILP) to schedule cloud-based VMs to applications considering the three-dimension SLA model; and (iii) a heuristic algorithm to mitigate possible QoS violations. Experimental results show that the proposed RAaaS procedure is capable of optimizing resource allocation considering multiple criteria in the SLA while mitigating the extra costs introduced by mutual trust between customers using redundant spot instance allocation.
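The three billing models can be made concrete with a toy cost comparison for a single VM; the rates and the spot interruption overhead below are illustrative assumptions, not figures from the paper:

```python
def cheapest_billing(hours, rates):
    """Compare on-demand, reserved, and spot costs for one VM over a period.
    All rates are hypothetical; the spot_interruption_overhead models work
    that must be re-run after interruptions (why redundancy helps reliability)."""
    on_demand = hours * rates["on_demand"]
    # Reserved: flat upfront fee plus a discounted hourly rate.
    reserved = rates["reserved_upfront"] + hours * rates["reserved_hourly"]
    # Spot: cheapest hourly rate, but pay for re-running interrupted work.
    spot = hours * rates["spot"] * (1 + rates["spot_interruption_overhead"])
    costs = {"on_demand": on_demand, "reserved": reserved, "spot": spot}
    return min(costs, key=costs.get), costs

model, costs = cheapest_billing(
    hours=720,  # one month
    rates={"on_demand": 0.10, "reserved_upfront": 30.0, "reserved_hourly": 0.05,
           "spot": 0.03, "spot_interruption_overhead": 0.15},
)
```

An ILP like the paper's would trade these per-model costs off against the reliability, processing, and trust terms of the three-dimension SLA.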

Citations: 0
DTL-IDS: An optimized Intrusion Detection Framework using Deep Transfer Learning and Genetic Algorithm
IF 8.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2023-11-13 DOI: 10.1016/j.jnca.2023.103784
Shahid Latif, Wadii Boulila, Anis Koubaa, Zhuo Zou, Jawad Ahmad

In the dynamic field of the Industrial Internet of Things (IIoT), networks are increasingly vulnerable to a diverse range of cyberattacks. This vulnerability necessitates the development of advanced intrusion detection systems (IDSs). Addressing this need, our research contributes to the existing cybersecurity literature by introducing an optimized Intrusion Detection System based on Deep Transfer Learning (DTL), specifically tailored for heterogeneous IIoT networks. Our framework employs a tri-layer architectural approach that synergistically integrates Convolutional Neural Networks (CNNs), Genetic Algorithms (GAs), and bootstrap aggregation ensemble techniques. The methodology is executed in three critical stages. First, we convert a state-of-the-art cybersecurity dataset, Edge_IIoTset, into image data, thereby facilitating CNN-based analytics. Second, a GA is utilized to fine-tune the hyperparameters of each base learning model, enhancing the model's adaptability and performance. Finally, the outputs of the top-performing models are amalgamated using ensemble techniques, bolstering the robustness of the IDS. Through rigorous evaluation protocols, our framework demonstrated exceptional performance, reliably achieving a 100% attack detection accuracy rate. This result establishes our framework as highly effective against 14 distinct types of cyberattacks. The findings bear significant implications for the ongoing development of secure, efficient, and adaptive IDS solutions in the complex landscape of IIoT networks.
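The GA-based hyperparameter tuning stage can be sketched with a minimal genetic algorithm; the fitness function below is a toy surrogate for validation loss (the paper tunes CNN base learners, whose training is omitted here), and all parameter choices are assumptions:

```python
import random

def ga_tune(fitness, bounds, pop=16, gens=30, seed=11):
    """Minimal GA for hyperparameter search: elitist selection, uniform
    crossover, Gaussian mutation. Lower fitness is better."""
    rng = random.Random(seed)
    popn = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(popn, key=fitness)
        elite = scored[: pop // 4]          # keep the best quarter unchanged
        children = list(elite)
        while len(children) < pop:
            a, b = rng.sample(elite, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]  # uniform crossover
            if rng.random() < 0.3:                            # Gaussian mutation
                d = rng.randrange(len(bounds))
                lo, hi = bounds[d]
                child[d] = min(max(child[d] + rng.gauss(0, (hi - lo) * 0.1), lo), hi)
            children.append(child)
        popn = children
    return min(popn, key=fitness)

# Toy surrogate: best "validation loss" at learning rate 0.01, dropout 0.5.
surrogate = lambda h: (h[0] - 0.01) ** 2 + (h[1] - 0.5) ** 2
best = ga_tune(surrogate, [(0.0001, 0.1), (0.0, 0.9)])
```

In the full framework, each fitness evaluation would train one CNN base learner, and the tuned models would then be bagged into the final ensemble.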

引用次数: 0
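The tri-layer recipe above, base learners tuned by a genetic algorithm and then combined by bootstrap-aggregation voting, can be illustrated with a deliberately tiny sketch. This is not the paper's CNN/Edge_IIoTset pipeline: the base learner here is a one-feature threshold classifier on synthetic data, and the GA operators (truncation selection, averaging crossover, Gaussian mutation) are minimal stand-ins.

```python
import random

random.seed(0)

# Synthetic binary "traffic" data: label is 1 when the feature exceeds 0.6.
data = [(x, x >= 0.6) for x in (random.random() for _ in range(200))]

def fitness(threshold, samples):
    """Accuracy of a one-feature threshold classifier on `samples`."""
    return sum((x >= threshold) == y for x, y in samples) / len(samples)

def ga_tune(samples, pop_size=20, generations=30, sigma=0.1):
    """Toy genetic algorithm that evolves the decision threshold."""
    pop = [random.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, samples), reverse=True)
        elite = pop[: pop_size // 2]                      # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = (a + b) / 2 + random.gauss(0, sigma)  # crossover + mutation
            children.append(min(max(child, 0.0), 1.0))
        pop = elite + children
    return max(pop, key=lambda t: fitness(t, samples))

def majority_vote(thresholds, x):
    """Bagging-style ensemble: each base learner votes, majority wins."""
    return sum(x >= t for t in thresholds) > len(thresholds) / 2

# Tune each base learner on a bootstrap resample, then ensemble them.
ensemble = [ga_tune([random.choice(data) for _ in data]) for _ in range(3)]
accuracy = sum(majority_vote(ensemble, x) == y for x, y in data) / len(data)
print(f"ensemble accuracy: {accuracy:.2f}")
```

The same pattern scales up by swapping the threshold classifier for a CNN and letting the GA chromosome encode hyperparameters such as learning rate or kernel count.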
An enhanced node segmentation and distance estimation scheme with a reduced search space boundary and improved PSO for obstacle-aware wireless sensor network localization
IF 8.7, CAS Q2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2023-11-10. DOI: 10.1016/j.jnca.2023.103783
Songyut Phoemphon , Nutthanon Leelathakul , Chakchai So-In

This paper proposes an enhanced method for localizing sensor nodes in wireless sensor networks with obstacles. Such environments lower localization accuracy because locations are estimated from detour distances that circumvent the obstacles; we therefore improve the segmentation technique to address this issue, dividing the whole area into multiple smaller ones, each containing fewer or no obstacles. Nevertheless, when radio transmissions between sensor nodes are obstructed (as simulated by the radio irregularity model), the signal-strength variation tends to be high, reducing localization accuracy; thus, we provide a method for accurately approximating the distance between each pair of an anchor node (whose location is known) and an unknown node by incorporating the related error into the approximation process. Additionally, when the nodes with unknown locations are outside the polygon formed by the anchor nodes, the search area for localization is relatively large, resulting in lower accuracy and a longer search time; we then propose a method for reducing the size of approximation areas by forming boundaries based on the two intersection points between the ranges of the two anchor nodes used to localize an unknown node. However, these reduced search areas could still be large; we further increase the accuracy of the PSO location estimation method by adaptively adjusting the number of particles.
Specifically, our proposed method is 27.46%, 49.28%, 50.33%, and 74.62% more accurate on average than IDE-NSL, PSO–C, min-max PSO, and niching PSO, respectively.
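To make the PSO step concrete, here is a minimal particle-swarm trilateration sketch: particles are candidate 2-D positions, and the fitness is the sum of squared differences between measured and candidate anchor distances. The anchor layout, noise-free ranges, and fixed inertia/acceleration coefficients are illustrative assumptions; the paper's method additionally weights the fitness by per-link ranging error, bounds the search area, and adapts the particle count.

```python
import math
import random

random.seed(1)

ANCHORS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]    # known anchor positions
TRUE_POS = (4.0, 6.0)                               # unknown node (ground truth)
RANGES = [math.dist(a, TRUE_POS) for a in ANCHORS]  # measured distances (noise-free)

def fitness(p):
    """Sum of squared range residuals; zero at the true position."""
    return sum((math.dist(a, p) - d) ** 2 for a, d in zip(ANCHORS, RANGES))

def pso_localize(n=30, iters=100, w=0.7, c1=1.5, c2=1.5, bound=10.0):
    pos = [(random.uniform(0, bound), random.uniform(0, bound)) for _ in range(n)]
    vel = [(0.0, 0.0)] * n
    pbest = list(pos)                 # each particle's best position so far
    gbest = min(pos, key=fitness)     # swarm-wide best position
    for _ in range(iters):
        for i in range(n):
            (x, y), (vx, vy) = pos[i], vel[i]
            r1, r2 = random.random(), random.random()
            vx = w * vx + c1 * r1 * (pbest[i][0] - x) + c2 * r2 * (gbest[0] - x)
            vy = w * vy + c1 * r1 * (pbest[i][1] - y) + c2 * r2 * (gbest[1] - y)
            pos[i], vel[i] = (x + vx, y + vy), (vx, vy)
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i]
                if fitness(pos[i]) < fitness(gbest):
                    gbest = pos[i]
    return gbest

estimate = pso_localize()
print(f"estimated position: ({estimate[0]:.2f}, {estimate[1]:.2f})")
```

With three non-collinear anchors the residual surface has a single minimum at the true position, so even this plain PSO converges quickly; the paper's contributions target the harder cases where ranges are detour distances around obstacles.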

Journal of Network and Computer Applications, Volume 221, Article 103783 (2023).
A dynamic state sharding blockchain architecture for scalable and secure crowdsourcing systems
IF 8.7, CAS Q2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2023-11-10. DOI: 10.1016/j.jnca.2023.103785
Zihang Zhen , Xiaoding Wang , Hui Lin , Sahil Garg , Prabhat Kumar , M. Shamim Hossain

Currently, crowdsourcing systems suffer from serious problems such as a single point of failure at the server, leakage of user privacy, and unfair arbitration. By storing the interactions between workers, requesters, and crowdsourcing platforms as transactions on the blockchain, these problems can be effectively addressed. However, growth in the blockchain's total computing power does little to improve the efficiency of transaction confirmation, which limits the performance of crowdsourcing systems. On the other hand, the increasing amount of data in the blockchain further raises the difficulty of nodes participating in consensus, affecting the security of crowdsourcing systems. To address the above problems, in this paper we design a blockchain architecture based on dynamic state sharding, called DSSBD. Firstly, we solve the problems caused by cross-shard transactions and reconfiguration in blockchain state sharding through graph segmentation and relay transactions. Then, we model the optimal block generation problem as a Markov decision process. By utilizing deep reinforcement learning, we can dynamically adjust the number of shards, the block spacing, and the block size. This approach helps improve both the throughput of the blockchain and the proportion of non-malicious nodes. Security analysis has proven that the proposed DSSBD can effectively resist attacks such as transaction atomic attacks, double-spending attacks, Sybil attacks, and replay attacks. The experimental results show that the crowdsourcing system with the proposed DSSBD achieves better throughput, latency, balancing, cross-shard transaction proportion, and node reconfiguration proportion, among other metrics, while ensuring security.
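The goal of the graph-segmentation step, placing accounts into shards so that few transactions cross shard boundaries, can be shown with a minimal local-search sketch. The swap heuristic, toy transaction graph, and two-shard setup below are illustrative assumptions, not the paper's algorithm (which combines graph segmentation with relay transactions and a deep-reinforcement-learning controller):

```python
from itertools import product

def cross_edges(edges, assign):
    """Number of transactions whose two accounts sit in different shards."""
    return sum(assign[u] != assign[v] for u, v in edges)

def refine_partition(edges, assign):
    """Local search: repeatedly apply the best cross-shard account swap
    (swaps preserve shard sizes) until no swap reduces cross-shard edges."""
    assign = dict(assign)
    while True:
        best = cross_edges(edges, assign)
        best_swap = None
        for u, v in product(assign, assign):
            if assign[u] >= assign[v]:
                continue  # try each cross-shard pair once
            assign[u], assign[v] = assign[v], assign[u]
            cost = cross_edges(edges, assign)
            assign[u], assign[v] = assign[v], assign[u]  # undo trial swap
            if cost < best:
                best, best_swap = cost, (u, v)
        if best_swap is None:
            return assign
        u, v = best_swap
        assign[u], assign[v] = assign[v], assign[u]

# Two 3-account cliques joined by one bridge transaction (2, 3),
# starting from a deliberately interleaved two-shard assignment.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
start = {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}
final = refine_partition(edges, start)
print(cross_edges(edges, start), "->", cross_edges(edges, final))  # 5 -> 1
```

The residual cross-shard transaction (the bridge edge) is exactly the kind of transaction the paper's relay mechanism is designed to handle.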

Journal of Network and Computer Applications, Volume 222, Article 103785 (2023).
quicSDN: Transitioning from TCP to QUIC for southbound communication in software-defined networks
IF 8.7, CAS Q2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2023-11-10. DOI: 10.1016/j.jnca.2023.103780
Puneet Kumar, Behnam Dezfouli

In Software-Defined Networks (SDNs), the control plane and data plane communicate for various purposes such as applying configurations and collecting statistical data. While various methods have been proposed to reduce the overhead and enhance the scalability of SDNs, the impact of the transport layer protocol used for southbound communication has not been investigated. Existing SDNs rely on Transmission Control Protocol (TCP) to enforce reliability. In this paper, we show that the use of TCP imposes a considerable overhead on southbound communication, identify the causes of this overhead, and demonstrate how replacing TCP with Quick UDP Internet Connection (QUIC) protocol can enhance the performance of this communication. We introduce the quicSDN architecture to enable southbound communication in SDNs via the QUIC protocol. We present a reference architecture based on the standard, most widely-used protocols by the SDN community and show how the controller and switch are revamped to facilitate this transition. We compare, both analytically and empirically, the performance of quicSDN versus the traditional SDN architecture and confirm the superior performance of quicSDN. Our empirical evaluations in different settings demonstrate that quicSDN lowers communication overhead and message delivery delay by up to 82% and 45%, respectively, compared to SDNs using TCP for their southbound communication.
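A first-order way to see why the transport choice matters for southbound latency is to count the round trips each stack needs before the first controller-to-switch message can be sent. The sketch below models connection setup only, under no packet loss (plain TCP as in the paper's baseline, plus TLS variants for comparison); it does not capture the head-of-line-blocking and retransmission effects behind the paper's measured 82%/45% reductions, and the 20 ms RTT is an arbitrary example value.

```python
# Round trips a client waits before application data can flow
# (no packet loss; QUIC 0-RTT assumes a previously established session).
HANDSHAKE_RTTS = {
    "tcp": 1,            # SYN / SYN-ACK
    "tcp+tls1.2": 3,     # TCP handshake + 2-RTT TLS 1.2 handshake
    "tcp+tls1.3": 2,     # TCP handshake + 1-RTT TLS 1.3 handshake
    "quic": 1,           # transport and crypto handshakes combined
    "quic-0rtt": 0,      # resumption: data rides in the first flight
}

def setup_delay_ms(rtt_ms, stack):
    """Connection-setup delay for a controller-to-switch link."""
    return HANDSHAKE_RTTS[stack] * rtt_ms

for stack in ("tcp+tls1.2", "tcp+tls1.3", "quic", "quic-0rtt"):
    print(f"{stack:>11}: {setup_delay_ms(20, stack)} ms")
```

Setup delay matters most when switches reconnect frequently (controller failover, flapping links), which is exactly when southbound overhead is at its worst.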

Journal of Network and Computer Applications, Volume 222, Article 103780 (2023).