
Journal of Network and Computer Applications — Latest Publications

Optimizing service level agreement in cloud computing with smart virtual machine scheduling using clustered differential evolution and deep learning
IF 8.0 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-12-01 · Epub Date: 2025-10-11 · DOI: 10.1016/j.jnca.2025.104361
Tassawar Ali , Hikmat Ullah Khan , Babar Nazir , Fawaz Khaled Alarfaj , Mohammed Alreshoodi
Cloud computing is expanding rapidly due to the increasing demand for scalable and efficient services. This growth necessitates more extensive physical infrastructure to accommodate the growing workload. However, managing these workloads effectively presents challenges, particularly in optimizing virtual machine (VM) scheduling. Traditional reactive scheduling methods respond to workload changes only after they occur. These approaches struggle in dynamic cloud environments, leading to performance inefficiencies, frequent VM migrations, and service-level agreement (SLA) violations. This study introduces IntelliSchNet, a novel VM scheduling approach designed to address these challenges. IntelliSchNet uses a deep learning model whose feature weights are optimized with agglomerative clustering-based differential evolution to accurately predict future workloads. Based on these predictions, an intelligent scheduling plan allocates VMs to suitable hosts. The strategy prioritizes non-overloaded hosts to maximize resource utilization and reduce VM migrations, thereby minimizing SLA violations. The core methodology integrates a clustered adaptation of the differential evolution algorithm to fine-tune deep neural network parameters. Real-world data from Google's datacenters is used for training, consisting of traces collected from a production cluster with over 11,000 machines and more than 650,000 jobs, ensuring reliable and practical workload predictions. The effectiveness of IntelliSchNet is evaluated using nine different performance metrics on actual cloud workload datasets. The major findings highlight a significant improvement in VM scheduling efficiency. IntelliSchNet reduces SLA violations by up to 44%, ensuring more stable and reliable cloud services. This reduction enhances service dependability and increases customer satisfaction.
In conclusion, IntelliSchNet outperforms traditional scheduling methods by optimizing workload placement and resource allocation. Its proactive approach enhances cloud system stability, efficiency, and scalability. These improvements contribute to a more sustainable and high-performing cloud computing environment.
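The clustering-guided differential evolution at the heart of IntelliSchNet can be illustrated with a minimal sketch. The toy objective, population sizes, and fitness-ranked grouping below are our own stand-ins (the paper tunes deep-network feature weights against real workload traces):

```python
import random

# Minimal sketch of clustering-guided differential evolution (DE), assuming a
# toy quadratic objective in place of the paper's workload-prediction error,
# and a fitness-ranked grouping in place of true agglomerative clustering.

TARGET = [0.5, -1.2, 2.0]  # hypothetical "ideal" weight vector

def objective(w):
    # Stand-in for the prediction error of a model parameterized by w.
    return sum((wi - ti) ** 2 for wi, ti in zip(w, TARGET))

def cluster(pop, k=3):
    # Crude stand-in for agglomerative clustering: rank by fitness and split
    # into k contiguous groups, so DE donors come from similar individuals.
    ranked = sorted(range(len(pop)), key=lambda i: objective(pop[i]))
    size = len(pop) // k
    groups = [ranked[g * size:(g + 1) * size] for g in range(k - 1)]
    groups.append(ranked[(k - 1) * size:])
    return groups

def evolve(dim=3, pop_size=12, iters=200, F=0.6, CR=0.9, seed=1):
    random.seed(seed)
    pop = [[random.uniform(-3, 3) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        for group in cluster(pop):
            # Fall back to the whole population when a group is too small.
            donors = group if len(group) >= 4 else list(range(len(pop)))
            for i in group:
                a, b, c = random.sample([j for j in donors if j != i], 3)
                mutant = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                          for d in range(dim)]
                trial = [m if random.random() < CR else x
                         for m, x in zip(mutant, pop[i])]
                if objective(trial) < objective(pop[i]):  # greedy selection
                    pop[i] = trial
    return min(pop, key=objective)

best = evolve()
```

Restricting mutation donors to a cluster keeps search steps scaled to the local neighborhood, which is the intuition behind clustered DE variants.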
Journal of Network and Computer Applications, Volume 244, Article 104361.
Citations: 0
Trace-distance based end-to-end entanglement fidelity with information preservation in quantum networks
IF 8.0 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-12-01 · Epub Date: 2025-10-17 · DOI: 10.1016/j.jnca.2025.104366
Pankaj Kumar, Binayak Kar, Shan-Hsiang Shen
Quantum networks have the potential to revolutionize communication and computation by outperforming their classical counterparts. Many quantum applications depend on the reliable distribution of high-fidelity entangled pairs between distant nodes. However, due to decoherence and channel noise, entanglement fidelity degrades exponentially with distance, posing a significant challenge to maintaining robust quantum communication. To address this, we propose two strategies to enhance end-to-end (E2E) fidelity and information preservation in quantum networks. First, we employ closeness centrality to identify optimal intermediary nodes that minimize average path length. Second, we introduce the Trace-Distance based Path Purification (TDPP) algorithm, which fuses topological and quantum state information to support fidelity-aware routing decisions. TDPP leverages closeness centrality and trace-distance to identify paths that optimize both network efficiency and entanglement fidelity. Simulation results demonstrate that our approach significantly improves network throughput and E2E entanglement fidelity, outperforming existing routing methods while enhancing information preservation.
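The trace distance TDPP relies on has the closed form T(ρ, σ) = ½ Σᵢ|λᵢ|, where the λᵢ are the eigenvalues of ρ − σ. A minimal sketch for real-valued 2×2 density matrices follows; the example states are our own, and the routing logic itself is not reproduced:

```python
import math

# Sketch of the trace distance T(rho, sigma) = (1/2) * sum_i |lambda_i|,
# where lambda_i are the eigenvalues of (rho - sigma), specialized to
# real-valued 2x2 density matrices. Example states are illustrative.

def trace_distance_2x2(rho, sigma):
    d = [[rho[i][j] - sigma[i][j] for j in range(2)] for i in range(2)]
    tr = d[0][0] + d[1][1]
    det = d[0][0] * d[1][1] - d[0][1] * d[1][0]
    # Hermitian difference matrix => real eigenvalues via the 2x2 closed form.
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    ev1, ev2 = (tr + disc) / 2, (tr - disc) / 2
    return 0.5 * (abs(ev1) + abs(ev2))

pure = [[1.0, 0.0], [0.0, 0.0]]    # |0><0|
mixed = [[0.5, 0.0], [0.0, 0.5]]   # maximally mixed state I/2
t = trace_distance_2x2(pure, mixed)
```

The value t measures how distinguishable the two states are (0 for identical states, 1 for perfectly distinguishable ones), which is what makes it usable as a per-link quality weight in path selection.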
Journal of Network and Computer Applications, Volume 244, Article 104366.
Citations: 0
Cost-effective container elastic scaling and scheduling under multi-resource constraints
IF 8.0 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-12-01 · Epub Date: 2025-10-17 · DOI: 10.1016/j.jnca.2025.104359
Hongjian Li , Yu Tian , Yuzheng Cui , Xiaolin Duan
Recent advancements in containerization and Kubernetes have solidified their status as mainstream paradigms for service delivery. However, existing Kubernetes scaling mechanisms often suffer from limitations such as suboptimal utilization of multi-dimensional resources, reliance on historical workload patterns, and an inability to adapt quickly to real-time workload fluctuations. To overcome these limitations, this study introduces two cost-effective resource scheduling strategies. First, a hybrid control-theoretic vertical scaling algorithm is proposed, operating under multi-resource constraints. This algorithm leverages Prometheus monitoring data encompassing diverse resource metrics and facilitates dynamic resource optimization through a hierarchical decision-making model that combines feedforward prediction with feedback correction. Second, a synergistic vertical–horizontal elastic scaling framework, MR-CEHA, is developed. This framework classifies resource states using multi-level thresholds and integrates a cost-sensitive optimization model to balance instance-level resource allocation with cluster-level scaling operations. Experimental evaluations demonstrate substantial improvements: under surge load conditions, the SLA violation rate decreased by 16.5%; during load reduction scenarios, energy consumption dropped by 39.4%; and in mixed workload environments, energy usage declined by 16.6% while the SLA violation rate fell by 37.8%.
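Multi-level threshold classification of resource states can be sketched as below; the threshold values, state names, and scaling actions are illustrative assumptions, not the MR-CEHA framework's actual parameters:

```python
# Hypothetical multi-level threshold classification for a container's resource
# state, combined into a vertical/horizontal scaling decision. Thresholds and
# action names are our assumptions, not the paper's tuned values.

def classify(util, low=0.3, high=0.7, critical=0.9):
    # Map a utilization ratio (0..1) onto one of four discrete states.
    if util >= critical:
        return "critical"
    if util >= high:
        return "overloaded"
    if util <= low:
        return "underloaded"
    return "normal"

def decide(cpu_util, mem_util):
    states = {classify(cpu_util), classify(mem_util)}
    if "critical" in states:
        return "scale-out"    # horizontal: add replicas under surge load
    if "overloaded" in states:
        return "scale-up"     # vertical: raise the container's limits
    if states == {"underloaded"}:
        return "scale-down"   # reclaim resources to cut energy and cost
    return "hold"
```

Combining a vertical step (cheap, fast, bounded by the node) with a horizontal step (slower, but unbounded) is what lets such a policy trade SLA violations against energy consumption.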
Journal of Network and Computer Applications, Volume 244, Article 104359.
Citations: 0
Lightweight verifiable privacy preserving federated learning
IF 8.0 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-12-01 · Epub Date: 2025-09-24 · DOI: 10.1016/j.jnca.2025.104335
Li Zhang , Bing Tang , Jianbo Xu
Federated learning (FL) has garnered considerable attention owing to its ability to accomplish model training by sharing local models without accessing training datasets. Nevertheless, it has been demonstrated that the shared models still possess sensitive information related to the training data. Moreover, there is a possibility that malicious aggregation servers can return manipulated global models. While the verification problem in FL has been explored in existing schemes, most of these schemes employ bilinear pairing operations and homomorphic hash computations dependent on the model’s dimension, leading to substantial computational costs. Additionally, some schemes necessitate multiple parties to collectively manage one or more sets of confidential keys for privacy preservation and validation, which renders them vulnerable to collusion attacks between certain clients and servers. Consequently, we propose a privacy-preserving federated learning mechanism under a dual-server architecture. This mechanism adopts a coding matrix computation-based approach to ensure the privacy security of local models at the client side and achieves the aggregation of local models through collaborative efforts between two servers situated at the server side. To verify the correctness of the aggregated model, a Model Verification Code (MVC) mechanism is designed. By effectively combining the MVC mechanism with the coded matrix computation, there is no requirement for all clients to possess identical sets of confidential keys during the privacy preservation and verification process. Meanwhile, this ensures the fulfillment of security requirements under the malicious threat posed by the server. The computational overhead of this mechanism remains low since it avoids the application of complex cryptographic primitives.
We perform extensive experiments on real datasets, and the experimental results further demonstrate the proposed scheme exhibits lightweight characteristics while ensuring the validity and usability of the model.
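As a rough illustration of the dual-server idea, the sketch below splits each client update additively across two servers and checks the aggregate against a toy hash-based tag; the paper's actual coding-matrix construction and MVC design differ:

```python
import hashlib
import random

# Hypothetical two-server additive splitting of client model updates with a
# toy verification tag in the spirit of a Model Verification Code (MVC).
# This is our illustration, not the paper's coding-matrix scheme.

def split(update, seed):
    # One share goes to server 1, the random mask itself to server 2;
    # neither server alone learns the plaintext update.
    rng = random.Random(seed)
    mask = [rng.uniform(-1.0, 1.0) for _ in update]
    share1 = [u - m for u, m in zip(update, mask)]
    return share1, mask

def aggregate(shares):
    return [sum(col) for col in zip(*shares)]

def tag(vec):
    # Toy MVC: hash of the vector rounded to 6 decimals.
    return hashlib.sha256(",".join(f"{v:.6f}" for v in vec).encode()).hexdigest()

clients = [[0.1, 0.2], [0.3, -0.1], [-0.2, 0.4]]
server1, server2 = [], []
for cid, upd in enumerate(clients):
    s1, s2 = split(upd, seed=cid)
    server1.append(s1)
    server2.append(s2)

# Each server aggregates its own shares; their sum equals the true aggregate.
agg = [a + b for a, b in zip(aggregate(server1), aggregate(server2))]
expected = [sum(c[d] for c in clients) for d in range(2)]
assert tag(agg) == tag(expected)  # verification passes on honest servers
```

Because the masks cancel only in the sum, a server that tampers with the aggregate changes the tag, which is the property a real MVC mechanism formalizes.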
Journal of Network and Computer Applications, Volume 244, Article 104335.
Citations: 0
FedCoRE: Effective Federated Learning for constrained RESTful environments in the Artificial Intelligence of Things
IF 8.0 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-12-01 · Epub Date: 2025-10-08 · DOI: 10.1016/j.jnca.2025.104357
Badis Djamaa, Habib Yekhlef, Mohamed Amine Kouda, Abbas Bradai
Federated Learning (FL) empowers Internet-of-Things (IoT) devices to train intelligent models without sharing sensitive data, facilitating the transition to an Artificial Intelligence of Things (AIoT) ecosystem. However, FL demands significant storage, computation, and communication resources, which often exceed the capabilities of resource-constrained IoT devices. In this work, we introduce FedCoRE, an effective and practical FL architecture tailored for IoT environments. FedCoRE leverages standards for constrained RESTful environments, such as the Constrained Application Protocol (CoAP), to optimize communication and applies model quantization to address computation and storage limitations. FedCoRE has been implemented on resource-constrained IoT devices with 256 KB of RAM and evaluated on a human activity recognition task using a deep neural network. Extensive evaluations conducted in a real-world IoT environment, comprising 10 Thunderboard Sense 2 nodes, demonstrate the feasibility and effectiveness of our proposal. Notably, compared to FL, FedCoRE achieves up to a 60% reduction in communication cost, while maintaining model accuracy and requiring only approximately 75 KB of RAM and 438 KB of ROM.
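The model quantization FedCoRE applies can be illustrated with a generic post-training 8-bit affine quantizer; the code below is a textbook sketch under our own parameter names, not FedCoRE's exact implementation:

```python
# Generic post-training 8-bit affine quantization sketch: each float32 weight
# is mapped to one byte, roughly a 4x reduction in model transfer size. The
# weight values and bit width here are illustrative, not FedCoRE's.

def quantize(weights, bits=8):
    lo, hi = min(weights), max(weights)
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = [round((w - lo) / scale) for w in weights]  # integers in [0, levels]
    return q, scale, lo

def dequantize(q, scale, lo):
    return [lo + qi * scale for qi in q]

w = [0.013, -0.27, 0.981, 0.5, -0.04]
q, scale, lo = quantize(w)
w_restored = dequantize(q, scale, lo)
# Round-trip error is bounded by half a quantization step.
err = max(abs(a - b) for a, b in zip(w, w_restored))
```

For CoAP-based exchanges on 256 KB-RAM devices, shrinking each weight from four bytes to one directly cuts both the transfer payload and the on-device model footprint.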
Journal of Network and Computer Applications, Volume 244, Article 104357.
Citations: 0
Enhanced spreading factor allocation and backscatter communication via membership based tuna swarm optimization for LoRa protocol
IF 8.0 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-12-01 · Epub Date: 2025-10-13 · DOI: 10.1016/j.jnca.2025.104360
Swathika R., Dilip Kumar S.M.
LoRa (Long-Range), a spread-spectrum Internet of Things (IoT) communication method, has recently enabled ultra-long-distance transmission. Data collisions occur frequently in networks with many nodes, and achievable data rates often suffer in ultra-long-distance transmissions. This work examines several kinds of data collisions in LoRa wireless networks, most of which are influenced by the assignment of the Spreading Factor (SF). The study also explores the integration of Membership based Tuna Swarm Optimization (MTSO) with LoRa modulation into Backscatter Communications (BackCom). An analytical structure is established to examine the error rate efficiency of the network simulation under consideration. With restricted network resources, MTSO is employed to implement an SF redistribution mechanism, thereby increasing the terminal capacity of the LoRa gateway. Without increasing network or gateway capacity, the proposed technique reduces the frequency of data collisions. This paper addresses the reallocation of SF as the number of terminals increases, presenting an SF selection mechanism and an iterative SF inspection method to ensure independent data rates for each communication link. Specifically, assuming canceled Radio-Frequency Interference (RFI), this paper derives new precise and estimated closed-form equations for the Bit Error Rate (BER), Symbol Error Rate (SER), and Frame Error Rate (FER). The findings show that as the Signal-To-Noise Ratio (SNR) increases, the system’s BER, FER, and SER efficiency also improve when the SF variables are tuned.
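Spreading-factor selection trades airtime against link robustness: each step up in SF tolerates roughly 2.5 dB more noise but roughly doubles time-on-air, which is why SF allocation drives collision rates. A minimal sketch using the commonly cited LoRa demodulation-floor SNRs for SF7–SF12 (the margin and fallback policy are our assumptions, not the paper's MTSO mechanism):

```python
# Illustrative SF selection by SNR margin. The per-SF demodulation floors (dB)
# are the commonly cited values for LoRa SF7-SF12; the 1 dB margin and the
# fallback to SF12 are our assumptions for this sketch.

SNR_FLOOR_DB = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}

def select_sf(snr_db, margin_db=1.0):
    # Prefer the smallest SF (shortest airtime, fewest collisions) whose
    # demodulation floor the link clears with some margin.
    for sf in sorted(SNR_FLOOR_DB):
        if snr_db >= SNR_FLOOR_DB[sf] + margin_db:
            return sf
    return 12  # weakest links fall back to the most robust factor
```

Because higher SFs occupy the channel longer, pushing every link to the smallest viable SF is also a collision-reduction strategy, which is the resource the paper's redistribution mechanism optimizes.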
Journal of Network and Computer Applications, Volume 244, Article 104360.
Citations: 0
ArtPerception: ASCII art-based jailbreak on LLMs with recognition pre-test
IF 8.0 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-12-01 · Epub Date: 2025-10-10 · DOI: 10.1016/j.jnca.2025.104356
Guan-Yan Yang , Tzu-Yu Cheng , Ya-Wen Teng , Farn Wang , Kuo-Hui Yeh
The integration of Large Language Models (LLMs) into computer applications has introduced transformative capabilities but also significant security challenges. Existing safety alignments, which primarily focus on semantic interpretation, leave LLMs vulnerable to attacks that use non-standard data representations. This paper introduces ArtPerception, a novel black-box jailbreak framework that strategically leverages ASCII art to bypass the security measures of state-of-the-art (SOTA) LLMs. Unlike prior methods that rely on iterative, brute-force attacks, ArtPerception introduces a systematic, two-phase methodology. Phase 1 conducts a one-time, model-specific pre-test to empirically determine the optimal parameters for ASCII art recognition. Phase 2 leverages these insights to launch a highly efficient, one-shot malicious jailbreak attack. We propose a Modified Levenshtein Distance (MLD) metric for a more nuanced evaluation of an LLM’s recognition capability. Through comprehensive experiments on four SOTA open-source LLMs, we demonstrate superior jailbreak performance. We further validate our framework’s real-world relevance by showing its successful transferability to leading commercial models, including GPT-4o, Claude Sonnet 3.7, and DeepSeek-V3, and by conducting a rigorous effectiveness analysis against potential defenses such as LLaMA Guard and Azure’s content filters. Our findings underscore that true LLM security requires defending against a multi-modal space of interpretations, even within text-only inputs, and highlight the effectiveness of strategic, reconnaissance-based attacks.
Content Warning: This paper includes potentially harmful and offensive model outputs.
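The abstract does not specify how the proposed Modified Levenshtein Distance (MLD) differs from the standard metric. As a point of reference, here is a minimal sketch of the classic Levenshtein distance it builds on, plus a normalized similarity score; the `recognition_score` helper and its normalization are illustrative assumptions, not the paper's metric.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance: minimum insertions, deletions, and
    substitutions needed to turn string a into string b."""
    # prev[j] holds the distance between a[:i-1] and b[:j]; we keep
    # only two rows of the DP table at a time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]


def recognition_score(expected: str, answer: str) -> float:
    """Map edit distance onto a 0..1 similarity (1.0 = exact match).
    This normalization is our assumption for illustration only."""
    if not expected and not answer:
        return 1.0
    return 1.0 - levenshtein(expected, answer) / max(len(expected), len(answer))
```

A score like this lets the Phase-1 pre-test compare an LLM's transcription of an ASCII-art prompt against the intended word, with partial credit for near misses.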
Journal of Network and Computer Applications, vol. 244, Article 104356 (2025).
Citations: 0
Reinforcement learning based multi-agent system for smart microgrid
IF 8 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-12-01 · Epub Date: 2025-09-30 · DOI: 10.1016/j.jnca.2025.104339
Niharika Singh, Kishu Gupta, Ashutosh Kumar Singh, Perumal Nallagownden, Irraivan Elamvazuthi
Smart microgrid (SMG) communication networks face significant challenges in maintaining high Quality of Service (QoS) due to dynamic load variations, fluctuating network conditions, and potential component faults, which can increase latency, reduce throughput, and compromise fault recovery. The growing integration of distributed renewable energy resources demands adaptive, intelligent routing mechanisms capable of operating efficiently under such diverse and fault-prone conditions. This paper presents a Q-Reinforcement-Learning-based Multi-Agent Bellman Routing (QRL-MABR) algorithm, which enhances the traditional MABR approach by embedding a Q-learning module within each network agent. Agents dynamically learn optimal routing policies, balance exploration and exploitation in action selection via adaptive temperature scaling, and jointly optimize latency, throughput, jitter, convergence speed, and fault resilience.
Simulations on IEEE 9, 14, 34, 39, and 57 bus SMG testbeds demonstrate that QRL-MABR significantly outperforms conventional routing protocols (MABR, RIP, OLSR, OSPFv2) and advanced RL-based algorithms (SN-MAPPO, DDQL, MDDPG, SARSA-λ, TD3), achieving 16%–28% delay reduction, 14%–16% throughput gains, 17%–21% jitter improvement, and superior fault recovery. Thus, QRL-MABR provides a robust, scalable, and intelligent framework for next-generation smart microgrids.
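The combination the abstract describes, a Bellman backup plus temperature-scaled (Boltzmann) action selection, can be sketched in a few lines. This is a generic tabular Q-learning illustration under our own assumptions, not the paper's QRL-MABR implementation.

```python
import math
import random


def boltzmann_action(q_values, temperature):
    """Softmax (Boltzmann) action selection: high temperature spreads
    probability across actions (exploration); low temperature is
    nearly greedy (exploitation)."""
    m = max(q_values)  # subtract the max before exp() for numerical stability
    weights = [math.exp((q - m) / temperature) for q in q_values]
    r = random.random() * sum(weights)
    for action, w in enumerate(weights):
        r -= w
        if r <= 0:
            return action
    return len(q_values) - 1  # guard against floating-point round-off


def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Bellman backup:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[next_state])
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
```

Lowering the temperature as training progresses shifts each agent from exploration toward exploitation, which is presumably what the abstract's "adaptive temperature scaling" refers to.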
Journal of Network and Computer Applications, vol. 244, Article 104339 (2025).
Citations: 0
Mamba-NTP: Mamba-based network traffic prediction with sparse measurements
IF 8 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-12-01 · Epub Date: 2025-10-22 · DOI: 10.1016/j.jnca.2025.104364
Chengzhe Xu, Yingya Guo, Huan Luo, Yue Yu, Zebo Huang
Accurate network traffic prediction is critical for efficient network planning and routing, especially in large-scale and dynamic environments. Traditional approaches rely on full-scale measurements, which are often impractical due to cost, scalability, and privacy concerns. Sparse measurements offer a more feasible alternative but lead to incomplete traffic data, posing significant challenges for accurate prediction. To address this, we propose Mamba-NTP, a novel end-to-end Mamba-based Network Traffic Prediction framework designed for sparse measurement settings. Leveraging the recent Mamba state-space model, Mamba-NTP captures long-range spatiotemporal dependencies with linear time complexity, enabling efficient and scalable prediction. Furthermore, Mamba-NTP employs a multi-task learning paradigm — comprising Traffic Completion, Graph Learning, and Traffic Prediction tasks — to extract shared traffic representations and enhance prediction robustness. In addition, the graph learning task in Mamba-NTP leverages graph learning techniques to infer intrinsic traffic correlations and model the inner traffic dependencies among network nodes. Extensive experiments on real-world datasets demonstrate that Mamba-NTP consistently outperforms state-of-the-art methods in both accuracy and efficiency under various levels of measurement sparsity.
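The linear time complexity that Mamba-NTP inherits from state-space models comes from a simple recurrence scanned once over the sequence. Below is a toy scalar sketch; Mamba itself uses vectorized, input-dependent parameters, so the fixed scalars here are a simplifying assumption for illustration.

```python
def ssm_scan(x, a, b, c):
    """Minimal linear state-space recurrence:
        h_t = a * h_{t-1} + b * x_t    (state update)
        y_t = c * h_t                  (readout)
    A single pass over the sequence gives O(T) time, in contrast to the
    O(T^2) cost of full self-attention over the same sequence."""
    h, ys = 0.0, []
    for xt in x:
        h = a * h + b * xt  # the hidden state carries long-range context
        ys.append(c * h)
    return ys
```

For example, an impulse input decays geometrically through the state: `ssm_scan([1, 0, 0], a=0.5, b=1.0, c=2.0)` yields `[2.0, 1.0, 0.5]`, showing how earlier inputs keep influencing later outputs at constant per-step cost.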
Journal of Network and Computer Applications, vol. 244, Article 104364 (2025).
Citations: 0
A protocol-independent in-network security service for cloud applications
IF 8 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-12-01 · Epub Date: 2025-10-22 · DOI: 10.1016/j.jnca.2025.104368
Bin Song, Bin Sun, Qiang Fu, Hao Li
Cloud services are increasingly generating a large amount of Internet traffic. Much of this traffic, such as rich media and gaming, is not highly sensitive but still warrants some protection. Traditional end-to-end encryption such as the Transport Layer Security (TLS) protocol is costly and has its own issues, such as increased latency, while simple anonymity solutions cannot resist traffic-analysis attacks. In this paper, we propose FlowShredder, a protocol-independent, in-network service to secure such traffic in the cloud. FlowShredder breaks the association between the packets, the data flow, and the hosts by obfuscating the packet header (and some payload if needed). Without the context of the flow and the hosts, these packets are of little value to an adversary. The operation is carried out at cloud gateways, without encrypting the payload. Its simple logic can therefore be executed within a single pipeline of the Tofino programmable switch, ensuring wire-speed performance without scalability issues. Being protocol-independent and operating in-network at wire speed makes FlowShredder a practical, generic security service for protecting cloud traffic. In addition, FlowShredder can work with end-to-end encryption such as 0-RTT TLS (e.g., Quick UDP Internet Connections, QUIC) for enhanced protection, ideal for web browsing or real-time communications. We implement FlowShredder in Programming Protocol-Independent Packet Processors (P4) switches. Experiments in synthetic and real scenarios show that FlowShredder effectively resists traffic-analysis attacks based on supervised learning, and achieves wire-speed performance over a 100 Gbps network while incurring only minor overhead.
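The paper's P4 pipeline is not reproduced in the abstract, but the core idea of reversible, keyed header obfuscation can be sketched in a few lines of Python. The HMAC-based keystream and the function names below are our illustrative assumptions, not FlowShredder's actual construction.

```python
import hashlib
import hmac


def obfuscate_header(header: bytes, key: bytes, flow_nonce: bytes) -> bytes:
    """Toy keyed header obfuscation: XOR the header bytes with a
    per-flow keystream derived via HMAC-SHA256.

    Because XOR is its own inverse, a cooperating egress gateway that
    knows the same key and nonce can recover the original header, while
    an on-path observer without the key sees unlinkable header bytes.
    """
    stream = b""
    counter = 0
    # Expand the keystream in 32-byte blocks until it covers the header.
    while len(stream) < len(header):
        stream += hmac.new(key, flow_nonce + counter.to_bytes(4, "big"),
                           hashlib.sha256).digest()
        counter += 1
    return bytes(h ^ s for h, s in zip(header, stream))
```

Applying the function twice with the same key and nonce is a round trip, mirroring the ingress/egress gateway pair; the payload is never touched, matching the paper's design goal of avoiding payload encryption.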
Journal of Network and Computer Applications, vol. 244, Article 104368 (2025).
Citations: 0