Blockchain technology underpins secure, decentralized digital ecosystems and supports applications ranging from finance and supply chains to the emerging Metaverse. However, latency remains a key challenge, particularly for real-time applications. Hyperledger Fabric (HLF), a leading enterprise blockchain, suffers from transaction delays due to its endorsement policies, which enhance security but introduce computational and communication overhead. This paper addresses the latency challenge in HLF by proposing a reinforcement learning (RL)-based dynamic endorsement mechanism. The model learns from past transaction patterns and system states to predict the optimal number of endorsers needed for each transaction. By dynamically adjusting the “AND” endorsement policy according to whether the observed latency meets a defined threshold, the approach balances security with performance, which is critical for low-latency applications such as the Metaverse. Experimental evaluations across diverse HLF configurations, using both mathematical and empirical methods, show that the proposed RL model reduces transaction latency by up to 37.54% compared with static policies and outperforms other RL models (SARSA, Dueling DQN, Double Q-learning) by 6.81% to 16.04%. The results confirm the model’s adaptability and superior performance, particularly in single-client environments. In terms of throughput, the proposed RL model consistently surpasses the static configuration across all workloads, demonstrating strong adaptability to varying transaction loads, with the most notable improvement, 27.61%, under single-client conditions, underscoring the model’s capability to optimise light workloads. This research contributes to the development of scalable, responsive, and secure blockchain infrastructures, offering an intelligent solution for real-time latency optimisation in digital applications such as the Metaverse.
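To illustrate the core idea, the sketch below shows a minimal tabular Q-learning agent that selects how many endorsers to require per transaction, rewarding choices that keep endorsement latency under a threshold while preferring more endorsers for security. This is not the paper's implementation: the environment is a toy latency simulator, and all constants (threshold, endorser range, learning rates) and the state definition are illustrative assumptions.

```python
import random

# Illustrative sketch (not the paper's code): a Q-learning agent choosing
# the number of endorsers for an "AND" endorsement policy. More endorsers
# means more security but higher latency; the reward favours the largest
# endorser count that still meets the latency threshold.

LATENCY_THRESHOLD = 0.5   # seconds; assumed service-level target
MAX_ENDORSERS = 5
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def simulated_latency(n_endorsers, rng):
    # Toy model: endorsement latency grows with endorser count, plus noise.
    return 0.1 * n_endorsers + rng.uniform(0.0, 0.1)

def reward(n_endorsers, latency):
    # Security bonus proportional to endorser count when the latency
    # target is met; a flat penalty when it is missed.
    return float(n_endorsers) if latency <= LATENCY_THRESHOLD else -5.0

def train(episodes=5000, seed=0):
    rng = random.Random(seed)
    # State: whether the previous transaction met the threshold (1) or not (0).
    q = {(s, a): 0.0 for s in (0, 1) for a in range(1, MAX_ENDORSERS + 1)}
    state = 1
    for _ in range(episodes):
        if rng.random() < EPSILON:            # explore
            action = rng.randint(1, MAX_ENDORSERS)
        else:                                 # exploit current Q-values
            action = max(range(1, MAX_ENDORSERS + 1),
                         key=lambda a: q[(state, a)])
        latency = simulated_latency(action, rng)
        r = reward(action, latency)
        next_state = 1 if latency <= LATENCY_THRESHOLD else 0
        best_next = max(q[(next_state, a)]
                        for a in range(1, MAX_ENDORSERS + 1))
        # Standard Q-learning update.
        q[(state, action)] += ALPHA * (r + GAMMA * best_next
                                       - q[(state, action)])
        state = next_state
    return q

q = train()
best = max(range(1, MAX_ENDORSERS + 1), key=lambda a: q[(1, a)])
```

Under this toy model, requiring all five endorsers always breaches the threshold, so the learned policy settles on a smaller endorser count, mirroring how the proposed mechanism relaxes the static "AND" policy when latency demands it.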