
Journal of Network and Computer Applications: Latest Articles

Optimizing 5G network slicing with DRL: Balancing eMBB, URLLC, and mMTC with OMA, NOMA, and RSMA
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-11-28 DOI: 10.1016/j.jnca.2024.104068
Silvestre Malta, Pedro Pinto, Manuel Fernández-Veiga
The advent of 5th Generation (5G) networks has introduced network slicing as a paradigm shift, enabling the provision of services with distinct Quality of Service (QoS) requirements. The 5th Generation New Radio (5G NR) standard supports the Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low Latency Communications (URLLC), and Massive Machine Type Communications (mMTC) use cases, which demand dynamic adaptation of network slicing to meet diverse traffic needs. This dynamic adaptation presents both a critical challenge and a significant opportunity to improve 5G network efficiency. This paper proposes a Deep Reinforcement Learning (DRL) agent that performs dynamic resource allocation in 5G wireless network slicing according to the traffic requirements of the 5G use cases in two scenarios: eMBB with URLLC and eMBB with mMTC. The DRL agent evaluates the performance of different decoding schemes, namely Orthogonal Multiple Access (OMA), Non-Orthogonal Multiple Access (NOMA), and Rate Splitting Multiple Access (RSMA), and applies the best scheme under different network conditions. The agent was tested on maximizing the sum rate in the eMBB-with-URLLC scenario and the number of successfully decoded devices in the eMBB-with-mMTC scenario, each with different combinations of device counts, power gains, and numbers of allocated frequencies. The results show that the DRL agent dynamically chooses the best decoding scheme and achieves between 84% and 100% efficiency in maximizing the sum rate and the number of decoded devices in both scenarios.
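To make the scheme-selection idea concrete, the following is a minimal, self-contained sketch (not the authors' implementation) of a bandit-style agent that learns, per traffic state, which decoding scheme among OMA, NOMA, and RSMA yields the highest reward. The state space, the toy_sum_rate reward model, and all constants are illustrative assumptions standing in for the paper's 5G simulator.

```python
import random

# Toy illustration: an epsilon-greedy agent picking a decoding scheme
# (OMA / NOMA / RSMA) per traffic state. The reward model below is a
# made-up stand-in for the sum rate; it is NOT the paper's simulator.
ACTIONS = ["OMA", "NOMA", "RSMA"]
STATES = ["few_devices", "many_devices"]

def toy_sum_rate(state, scheme):
    # Hypothetical rewards: NOMA/RSMA assumed to help most when devices are many.
    base = {"few_devices": 10.0, "many_devices": 6.0}[state]
    bonus = {"OMA": 0.0, "NOMA": 2.0, "RSMA": 3.0}[scheme]
    return base + (bonus if state == "many_devices" else 0.5 * bonus) + random.gauss(0, 0.2)

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.1, 0.1

for episode in range(2000):
    s = random.choice(STATES)
    a = random.choice(ACTIONS) if random.random() < epsilon else max(ACTIONS, key=lambda x: q[(s, x)])
    r = toy_sum_rate(s, a)
    q[(s, a)] += alpha * (r - q[(s, a)])   # bandit-style update (no next state)

for s in STATES:
    best = max(ACTIONS, key=lambda a: q[(s, a)])
    print(f"{s}: best scheme learned = {best}")
```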
Citations: 0
Gwydion: Efficient auto-scaling for complex containerized applications in Kubernetes through Reinforcement Learning
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-11-26 DOI: 10.1016/j.jnca.2024.104067
José Santos, Efstratios Reppas, Tim Wauters, Bruno Volckaert, Filip De Turck
Containers have reshaped application deployment and life-cycle management in recent cloud platforms. The paradigm shift from large monolithic applications to complex graphs of loosely-coupled microservices aims to increase deployment flexibility and operational efficiency. However, efficient allocation and scaling of microservice applications is challenging due to their intricate inter-dependencies. Existing works do not consider microservice dependencies, which could lead to the application’s performance degradation when service demand increases. As dependencies increase, communication between microservices becomes more complex and frequent, leading to slower response times and higher resource consumption, especially during high demand. In addition, performance issues in one microservice can also trigger a ripple effect across dependent services, exacerbating the performance degradation across the entire application. This paper studies the impact of microservice inter-dependencies in auto-scaling by proposing Gwydion, a novel framework that enables different auto-scaling goals through Reinforcement Learning (RL) algorithms. Gwydion has been developed based on the OpenAI Gym library and customized for the popular Kubernetes (K8s) platform to bridge the gap between RL and auto-scaling research by training RL algorithms on real cloud environments for two opposing reward strategies: cost-aware and latency-aware. Gwydion focuses on improving resource usage and reducing the application’s response time by considering microservice inter-dependencies when scaling horizontally. Experiments with microservice benchmark applications, such as Redis Cluster (RC) and Online Boutique (OB), show that RL agents can reduce deployment costs and the application’s response time compared to default scaling mechanisms, achieving up to 50% lower latency while avoiding performance degradation. For RC, cost-aware algorithms can reduce the number of deployed pods (2 to 4), resulting in slightly higher latency (300μs to 6 ms) but lower resource consumption. For OB, all RL algorithms exhibit a notable response time improvement by considering all microservices in the observation space, enabling the sequential triggering of actions across different deployments. This leads to nearly 30% cost savings while maintaining consistently lower latency throughout the experiment. Gwydion aims to advance auto-scaling research in a rapidly evolving dynamic cloud environment.
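As a rough illustration of the kind of environment such an agent interacts with, the sketch below defines a Gym-style autoscaling loop with the two opposing reward strategies mentioned in the abstract, cost-aware and latency-aware. The ToyAutoscaleEnv class, its latency model, and all constants are assumptions for illustration and do not reflect Gwydion's actual Kubernetes integration.

```python
import random

class ToyAutoscaleEnv:
    """Minimal Gym-style environment sketch: the agent adds/removes one pod
    per step; latency is a made-up function of load per pod."""

    def __init__(self, reward_mode="latency"):
        self.reward_mode = reward_mode  # "latency" or "cost"
        self.reset()

    def reset(self):
        self.pods = 2
        self.load = 50.0               # hypothetical requests per second
        return (self.pods, self.load)

    def step(self, action):            # action: -1 remove pod, 0 keep, +1 add pod
        self.pods = max(1, min(10, self.pods + action))
        self.load = max(10.0, self.load + random.uniform(-10, 10))
        latency_ms = 5.0 + (self.load / self.pods) ** 1.5   # assumed latency model
        if self.reward_mode == "latency":
            reward = -latency_ms
        else:                           # cost-aware: penalize pods, lightly penalize latency
            reward = -(2.0 * self.pods) - 0.05 * latency_ms
        return (self.pods, self.load), reward, False, {"latency_ms": latency_ms}

env = ToyAutoscaleEnv(reward_mode="cost")
state = env.reset()
for _ in range(5):
    state, reward, done, info = env.step(random.choice([-1, 0, 1]))
    print(state, round(reward, 2), round(info["latency_ms"], 1))
```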
Citations: 0
Handover Authenticated Key Exchange for Multi-access Edge Computing
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-11-22 DOI: 10.1016/j.jnca.2024.104071
Yuxin Xia, Jie Zhang, Ka Lok Man, Yuji Dong
Authenticated Key Exchange (AKE) plays a significant role in ensuring communication security. However, in some Multi-access Edge Computing (MEC) scenarios where a moving end-node successively connects to a sequence of edge-nodes, repeatedly running AKE protocols between the end-node and each edge-node is costly in terms of time and computing resources. Moreover, the cloud must be involved to assist the authentication between them, which goes against MEC's purpose of bringing cloud services closer to the end-user. To address these problems, this paper proposes a new type of AKE, named Handover Authenticated Key Exchange (HAKE). In HAKE, an earlier AKE procedure hands over authentication material and some parameters to the temporally next AKE procedure, thereby saving resources and reducing the participation of the remote cloud. Following the HAKE framework, we propose a concrete HAKE protocol based on Elliptic Curve Diffie–Hellman (ECDH) key exchange and ratcheted key exchange. We then verify its security via Burrows-Abadi-Needham (BAN) logic and the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool. Finally, we evaluate and test its performance. The results show that the HAKE protocol achieves its security goals and reduces communication and computation costs compared to similar protocols.
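The following is a conceptual sketch of the two building blocks named in the abstract, an ECDH (X25519) key agreement and a hash-based ratchet that "hands over" key material to the next session, using the widely available cryptography package. It omits authentication and message formats entirely and is not the HAKE protocol itself; the labels and info strings are illustrative assumptions.

```python
# Conceptual sketch only: an X25519 key agreement followed by a hash-based
# "handover" ratchet. This illustrates the general idea of reusing earlier
# key material for the next edge-node; it is not the HAKE protocol itself.
# Requires the 'cryptography' package.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def hkdf(key_material: bytes, info: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=info).derive(key_material)

# Full AKE with the first edge-node (ECDH part only; authentication omitted).
end_node = X25519PrivateKey.generate()
edge_a = X25519PrivateKey.generate()
shared_a = end_node.exchange(edge_a.public_key())
session_key_a = hkdf(shared_a, b"session with edge A")

# Handover: derive a chain key from the previous session instead of rerunning
# a full AKE; edge A would forward this material to edge B over a secure link.
chain_key = hkdf(session_key_a, b"handover chain")
session_key_b = hkdf(chain_key, b"session with edge B")

print(session_key_a.hex()[:16], session_key_b.hex()[:16])
```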
Citations: 0
Community Detection method based on Random walk and Multi objective Evolutionary algorithm in complex networks
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-11-22 DOI: 10.1016/j.jnca.2024.104070
Fahimeh Dabaghi-Zarandi, Mohammad Mehdi Afkhami, Mohammad Hossein Ashoori
In recent years, owing to the intricate interactions between entities in complex networks, ranging from biological to social and economic networks, community detection has helped us to better understand these networks. Research in community detection aims at extracting several almost separate sub-networks, called communities, from the complex structure of a network in order to gain a better understanding of network topology and functionality. In this paper, we propose a novel community detection method built on an architecture with four components: Pre-Processing, Primary Communities Composing, Population Generating, and Genetic Mutation. In the first component, we identify and store similarity measures and estimate the number of communities. The second component composes primary community structures from several random walks started at significant center nodes. The identified primary community structure is then converted into a suitable chromosome structure for use in the subsequent evolutionary components. In the third component, we generate an initial population along with its objective-function values. We then select several significant chromosomes from this population and merge their communities to generate subsequent populations. Finally, in the fourth component, we extract the best chromosomes and apply mutation to them to reach the best community structure according to the evaluation functions. We evaluate our proposal on network scenarios of different sizes, including both real and artificial networks. Compared with other approaches, the community structures detected by our method do not depend on network size and exhibit acceptable evaluation measures across all network types. Therefore, our proposal can detect results close to the real community structure.
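A minimal sketch of the random-walk seeding step (the "Primary Communities Composing" component) might look as follows, using networkx. Choosing the highest-degree nodes as centers, the walk length, and the number of walks are all assumptions made for illustration rather than the paper's actual procedure.

```python
import random
import networkx as nx

def random_walk(graph, start, length=10):
    """Perform a simple unbiased random walk and return the visited nodes."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return walk

# Toy stand-in for the "Primary Communities Composing" step: pick high-degree
# nodes as centers and group the nodes visited by walks started from them.
G = nx.karate_club_graph()
centers = sorted(G.nodes, key=G.degree, reverse=True)[:2]   # assumed center choice
primary = {c: set() for c in centers}
for c in centers:
    for _ in range(20):
        primary[c].update(random_walk(G, c, length=8))

for c, members in primary.items():
    print(f"center {c}: {len(members)} candidate members")
```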
Citations: 0
Blockchain-inspired intelligent framework for logistic theft control
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-11-17 DOI: 10.1016/j.jnca.2024.104055
Abed Alanazi, Abdullah Alqahtani, Shtwai Alsubai, Munish Bhatia
The smart logistics industry utilizes advanced software and hardware technologies to make goods transmission more efficient. The integration of smart components, however, introduces vulnerabilities within the logistics sector, making it more susceptible to physical attacks aimed at theft and control. The main goal of this work is to propose an effective logistics monitoring system that automates theft prevention. Specifically, the suggested model analyzes logistics transmission patterns through secure surveillance enabled by IoT-based blockchain technology. Additionally, a bi-directional convolutional neural network is employed to evaluate theft vulnerabilities in real time, aiding optimal decision-making. The proposed method provides accurate real-time analysis of risky behaviors, and experimental simulations indicate that it significantly improves logistics monitoring. The system's performance is assessed using several metrics, including latency (7.44 s), data processing cost of O((n-1) log n), model training and testing results (precision 94.60%, recall 95.67%, F-measure 96.64%), error reduction (48%), and reliability (94.48%).
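To illustrate the ledger side of such a system, here is a minimal hash-chained log of logistics events in which tampering with any recorded event breaks verification. It is a toy sketch only; a real blockchain deployment would add signatures, consensus, and distribution across nodes.

```python
import hashlib, json, time

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class ToyLogisticsLedger:
    """Minimal append-only hash chain for logistics events (illustration only;
    a real blockchain adds consensus, signatures, and distribution)."""

    def __init__(self):
        self.chain = [{"index": 0, "event": "genesis", "prev": "0" * 64, "ts": time.time()}]

    def add_event(self, event: str):
        prev = self.chain[-1]
        block = {"index": prev["index"] + 1, "event": event,
                 "prev": block_hash(prev), "ts": time.time()}
        self.chain.append(block)

    def verify(self) -> bool:
        return all(self.chain[i]["prev"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = ToyLogisticsLedger()
ledger.add_event("container 42 sealed at warehouse")
ledger.add_event("container 42 loaded onto truck 7")
print("chain valid:", ledger.verify())
ledger.chain[1]["event"] = "container 42 sealed (tampered)"
print("after tampering:", ledger.verify())
```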
Citations: 0
FRRL: A reinforcement learning approach for link failure recovery in a hybrid SDN
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-11-16 DOI: 10.1016/j.jnca.2024.104054
Yulong Ma, Yingya Guo, Ruiyu Yang, Huan Luo
Network failures, especially link failures, happen frequently in Internet Service Provider (ISP) networks. When link failures occur, routing policies need to be re-computed, and failure recovery usually takes a few minutes, which greatly degrades network performance. Therefore, a failure recovery scheme that enables fast and timely routing policy computation is needed. In this paper, we propose FRRL, a Reinforcement Learning (RL) approach that intelligently perceives network failures and computes routing policies in a timely manner to improve network performance when link failures happen. Specifically, to perceive link failures, we design a Topology Difference Vector (TDV) encoder module in FRRL that encodes the topology structure under link failures. To efficiently compute the routing policy when link failures happen, we integrate the TDV into agent training to learn the mapping between the encoded failure topology and routing policies. To evaluate our method, we conduct experiments on three network topologies; the results demonstrate that it outperforms other methods when link failures happen.
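As an illustration of what a Topology Difference Vector could capture, the sketch below encodes a topology as a fixed-order edge indicator vector and takes the difference between the healthy and failed versions; the non-zero entry marks the failed link. This particular encoding is an assumption for illustration, not necessarily the encoder used in FRRL.

```python
import networkx as nx
import numpy as np

def topology_vector(graph, edge_order):
    """Encode a topology as a 0/1 vector over a fixed edge ordering."""
    present = set(frozenset(e) for e in graph.edges())
    return np.array([1.0 if frozenset(e) in present else 0.0 for e in edge_order])

# Healthy topology vs. the same topology with one failed link.
healthy = nx.cycle_graph(5)
edge_order = list(healthy.edges())        # fixed ordering shared by both encodings
failed = healthy.copy()
failed.remove_edge(1, 2)                  # simulate a link failure

tdv = topology_vector(healthy, edge_order) - topology_vector(failed, edge_order)
print("TDV:", tdv)   # the non-zero entry marks the failed link
```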
Citations: 0
SAT-Net: A staggered attention network using graph neural networks for encrypted traffic classification
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-11-15 DOI: 10.1016/j.jnca.2024.104069
Zhiyuan Li, Hongyi Zhao, Jingyu Zhao, Yuqi Jiang, Fanliang Bu
With the increasing complexity of network protocol traffic in modern network environments, traffic classification faces significant challenges. Existing methods give little attention to the characteristics of traffic byte data and suffer from insufficient model generalization, leading to decreased classification accuracy. In response, we propose a method for encrypted traffic classification based on a Staggered Attention Network using Graph Neural Networks (SAT-Net), which takes into consideration both computer network topology and user interaction processes. First, we design a Packet Byte Graph (PBG) to efficiently capture the byte features of flows and their relationships, thereby transforming the encrypted traffic classification problem into a graph classification problem. Second, we construct a GNN-based PBG learner in which a feature remapping layer and a staggered attention layer are used for feature propagation and fusion, respectively, enhancing the robustness of the model. Experiments on several encrypted traffic datasets demonstrate that SAT-Net outperforms various advanced methods in identifying VPN, Tor, and malicious traffic, showing strong generalization capability.
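A minimal sketch of one plausible Packet Byte Graph construction is shown below: nodes are byte values and weighted directed edges link consecutive bytes of a payload. The exact node and edge definitions used by SAT-Net may differ; this is only an illustrative assumption.

```python
import networkx as nx

def packet_byte_graph(payload: bytes) -> nx.DiGraph:
    """Assumed PBG construction: nodes are byte values, a directed edge links
    each byte to the next one in the payload, weighted by co-occurrence count."""
    g = nx.DiGraph()
    for a, b in zip(payload, payload[1:]):
        if g.has_edge(a, b):
            g[a][b]["weight"] += 1
        else:
            g.add_edge(a, b, weight=1)
    return g

# Example: a few bytes resembling the start of a TLS ClientHello record.
payload = bytes([0x16, 0x03, 0x01, 0x02, 0x00, 0x01, 0x00, 0x01, 0xFC, 0x03, 0x03])
pbg = packet_byte_graph(payload)
print(pbg.number_of_nodes(), "nodes,", pbg.number_of_edges(), "edges")
```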
Citations: 0
RLL-SWE: A Robust Linked List Steganography Without Embedding for intelligence networks in smart environments
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-11-14 DOI: 10.1016/j.jnca.2024.104053
Pengbiao Zhao, Yuanjian Zhou, Salman Ijaz, Fazlullah Khan, Jingxue Chen, Bandar Alshawi, Zhen Qin, Md Arafatur Rahman
With the rapid development of technology, smart environments utilizing the Internet of Things, artificial intelligence, and big data are improving quality of life and work efficiency through connected devices. However, these advances present significant security challenges. The data generated by these smart devices contains a great deal of private and sensitive information, and during data transmission criminals and terrorists may intercept it and use it for secret communications and illegal activities. Steganography hides information in media files and helps prevent information leakage and interception by criminal and terrorist networks in an intelligent environment; it is an important technology for protecting data integrity and security. Traditional steganography techniques often cause detectable distortions, whereas Steganography Without Embedding (SWE) avoids direct modification of the cover media, thereby minimizing detection risks. This paper introduces an innovative and robust technique called Robust Linked List (RLL)-SWE, which improves resistance to attacks compared with traditional methods. Using multiple rounds of median downsampling and gradient calculations, the method extracts stable features and restructures them into a multi-head unidirectional linked list, ensuring accurate message retrieval and high resistance to adversarial attacks. Comprehensive analysis and simulation experiments confirm the technique's exceptional effectiveness and steganographic capacity.
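The sketch below illustrates, under assumed definitions, how stable features might be derived from a cover image by repeated median downsampling followed by gradient signs; the actual RLL-SWE feature extraction and linked-list mapping are more elaborate. Every function and constant here is an illustrative stand-in.

```python
import numpy as np

def median_downsample(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Downsample by taking the median of each factor x factor block."""
    h, w = (img.shape[0] // factor) * factor, (img.shape[1] // factor) * factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return np.median(blocks, axis=(1, 3))

def stable_feature(img: np.ndarray) -> np.ndarray:
    """Assumed feature: repeated median downsampling followed by gradient signs."""
    small = median_downsample(median_downsample(img))
    gy, gx = np.gradient(small)
    return (np.sign(gx + gy).flatten() >= 0).astype(np.uint8)   # bit-like feature vector

cover = np.random.default_rng(0).integers(0, 256, size=(32, 32)).astype(float)
bits = stable_feature(cover)
print(len(bits), "feature bits, first 16:", bits[:16])
```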
Citations: 0
FCG-MFD: Benchmark function call graph-based dataset for malware family detection
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-11-07 DOI: 10.1016/j.jnca.2024.104050
Hassan Jalil Hadi, Yue Cao, Sifan Li, Naveed Ahmad, Mohammed Ali Alshara
Cyber crimes related to malware families are on the rise. This growth persists despite the prevalence of various antivirus software and approaches for malware detection and classification. Security experts have applied Machine Learning (ML) techniques to identify these cyber-crimes, but such approaches demand up-to-date malware datasets for continuous improvement amid the evolving sophistication of malware strains. We therefore present FCG-MFD, a benchmark dataset with extensive Function Call Graphs (FCG) for malware family detection, which helps security systems remain resistant to emerging malware families. The dataset comprises two sub-datasets (FCG and Metadata) with 100,000 samples from VirusSamples, Virusshare, VirusSign, theZoo, Vx-underground, and MalwareBazaar, curated using FCGs and metadata to optimize the efficacy of ML algorithms. We suggest a new malware analysis technique using FCGs and graph embedding networks, offering a solution to the complexity of feature engineering in ML-based malware analysis. Our approach extracts semantic features with Natural Language Processing (NLP) methods, treating functions and instructions analogously to sentences and words, respectively. We leverage a node2vec-based graph embedding network to generate malware embedding vectors, which enable automated and efficient malware analysis by combining structural and semantic features. We use the two datasets (FCG and Metadata) to assess FCG-MFD performance; F1-scores of 99.14% and 99.28% are competitive with state-of-the-art (SOTA) methods.
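To illustrate the graph-embedding ingredient, the sketch below generates node2vec-style biased random walks over a toy function call graph; in a full pipeline these walks would be fed to a skip-gram model (e.g., Word2Vec) to produce the embedding vectors. The toy graph, walk parameters, and bias rule shown here are assumptions for illustration.

```python
import random
import networkx as nx

def node2vec_walk(graph, start, length=10, p=1.0, q=0.5):
    """Second-order biased walk in the spirit of node2vec; in the full pipeline
    these walks would be fed to a skip-gram model to produce embedding vectors."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        neighbors = list(graph.neighbors(cur))
        if not neighbors:
            break
        if len(walk) == 1:
            walk.append(random.choice(neighbors))
            continue
        prev = walk[-2]
        weights = []
        for nxt in neighbors:
            if nxt == prev:
                weights.append(1.0 / p)          # return to previous node
            elif graph.has_edge(nxt, prev):
                weights.append(1.0)              # stay close (BFS-like)
            else:
                weights.append(1.0 / q)          # move outward (DFS-like)
        walk.append(random.choices(neighbors, weights=weights, k=1)[0])
    return walk

# Toy function call graph: nodes are function names, edges are calls.
fcg = nx.DiGraph([("main", "parse"), ("main", "connect"), ("connect", "encrypt"),
                  ("parse", "decode"), ("decode", "encrypt")]).to_undirected()
print(node2vec_walk(fcg, "main", length=8))
```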
Citations: 0
Particle swarm optimization tuned multi-headed long short-term memory networks approach for fuel prices forecasting
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-11-07 DOI: 10.1016/j.jnca.2024.104048
Andjela Jovanovic, Luka Jovanovic, Miodrag Zivkovic, Nebojsa Bacanin, Vladimir Simic, Dragan Pamucar, Milos Antonijevic
Increasing global energy demands and decreasing stocks of fossil fuels have led to a resurgence of research into energy forecasting. Artificial intelligence, specifically time series forecasting, holds great potential for improving predictions of cost and demand, with many lucrative applications across several fields. Many factors influence prices on a global scale, from socio-economic factors to distribution, availability, and international policy, and all of them need to be considered to make an accurate forecast. An analysis of the current literature reveals room for improvement within this domain. Therefore, this work proposes and explores the potential of multi-headed long short-term memory models for gasoline price forecasting, a problem that has not previously been tackled with multi-headed models. Additionally, since the computational requirements of such models are relatively high, the work focuses on lightweight approaches with a relatively small number of neurons per layer, trained over a small number of epochs. However, as algorithm performance can depend heavily on appropriate hyper-parameter selection, a modified variant of the particle swarm optimization algorithm is also put forward to help optimize the model's architecture and training parameters. A comparative analysis against several contemporary optimizers is conducted using energy data collected from multiple public sources. The outcomes are put through meticulous statistical validation to ascertain the significance of the findings. The best-constructed models attained a mean square error of just 0.044025 with an R-squared of 0.911797, suggesting potential for real-world use.
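A compact sketch of the particle swarm optimization loop used for hyper-parameter tuning is given below. The surrogate_validation_error function is a placeholder: in the actual workflow it would train and evaluate the multi-headed LSTM with the candidate hyper-parameters, and the bounds and PSO coefficients here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def surrogate_validation_error(units, lr):
    """Placeholder objective. In the actual workflow this would build and train
    a (multi-headed) LSTM with these hyper-parameters and return validation MSE."""
    return (units - 48) ** 2 / 1000.0 + (np.log10(lr) + 2.5) ** 2 + rng.normal(0, 0.01)

# Bounds for the two tuned hyper-parameters: hidden units and learning rate.
low, high = np.array([8, 1e-4]), np.array([128, 1e-1])
n_particles, n_iters = 12, 30
pos = rng.uniform(low, high, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([surrogate_validation_error(*x) for x in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    vals = np.array([surrogate_validation_error(*x) for x in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(f"best units ~ {gbest[0]:.0f}, best learning rate ~ {gbest[1]:.4f}")
```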
Citations: 0