
Future Generation Computer Systems-The International Journal of Escience: Latest Publications

Bodyless block propagation: TPS fully scalable blockchain with pre-validation
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-09-07 | DOI: 10.1016/j.future.2024.107516

Despite numerous prior attempts to boost the transactions per second (TPS) of blockchain systems, most of them come at the price of degraded decentralization and security. In this paper, we propose a bodyless block propagation (BBP) scheme in which the block body is neither validated nor transmitted during the block propagation process, increasing TPS without compromising security. Instead, the nodes in the blockchain network anticipate the transactions and their ordering in the next upcoming block so that these transactions can be pre-executed and pre-validated before the block is created. It is critical, however, that all nodes reach a consensus on the transaction content of the next block.

This paper puts forth a transaction selection, ordering, and synchronization algorithm to drive the nodes to reach such a consensus. However, the Coinbase Address of the miner of the next block cannot be anticipated, so transactions that depend on the Coinbase Address cannot be pre-executed and pre-validated. This paper further puts forth an algorithm to handle such unresolvable transactions so that the overall scheme remains consistent and TPS-efficient. With our scheme, most transactions do not need to be validated or transmitted during block propagation, removing the dependence of propagation time on the number of transactions in the block and making the system fully TPS scalable. Experimental results show that our protocol reduces propagation time by a factor of 4 compared with the current Ethereum blockchain, and that its TPS performance is limited by node hardware performance rather than by block propagation.
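To make the pre-validation idea concrete, the sketch below shows one way a node could anticipate the next block and accept a bodyless header. The ordering rule (fee, then sender, then nonce), the transaction fields, and the digest scheme are illustrative assumptions, not the paper's actual selection and synchronization algorithm.

```python
import hashlib
from typing import Dict, List, Tuple

def anticipate_block(mempool: List[dict], capacity: int) -> List[dict]:
    # Deterministic selection/ordering: every node that applies the same rule
    # to a synchronized mempool anticipates the same block body.
    ordered = sorted(mempool, key=lambda tx: (-tx["fee"], tx["sender"], tx["nonce"]))
    return ordered[:capacity]

def body_digest(txs: List[dict]) -> str:
    payload = "|".join(f'{t["sender"]}:{t["nonce"]}:{t["fee"]}' for t in txs)
    return hashlib.sha256(payload.encode()).hexdigest()

class PrevalidatingNode:
    def __init__(self) -> None:
        # digest of an anticipated body -> (transactions, pre-executed state)
        self.cache: Dict[str, Tuple[List[dict], dict]] = {}

    def pre_execute(self, mempool: List[dict], capacity: int) -> None:
        txs = anticipate_block(mempool, capacity)
        state = {"tx_count": len(txs)}          # stand-in for real execution results
        self.cache[body_digest(txs)] = (txs, state)

    def on_header(self, header: dict) -> bool:
        # Bodyless propagation: accept the block without its body if the header's
        # body digest matches a pre-validated candidate; otherwise fall back to
        # requesting and validating the full body.
        return header["body_digest"] in self.cache
```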

Citations: 0
Intelligent transportation system for automated medical services during pandemic
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-09-05 | DOI: 10.1016/j.future.2024.107515

Infectious viruses spread through human-to-human contact and can cause worldwide pandemics. We witnessed worldwide disasters during the COVID-19 pandemic because of such viruses, and these incidents often unfold in multiple phases and waves. During this pandemic, so many deaths occurred worldwide that they could not even be counted accurately. The foremost issue is that health workers who treat patients suffering from COVID-19 may themselves become infected; many health workers have already lost their lives to COVID-19 and continue to do so. The situation can worsen further when the pandemic coincides with other natural disasters such as cyclones, earthquakes, and tsunamis. In these situations, an intelligent automated model is needed to provide contactless medical services such as ambulance facilities and primary health tests. In this paper, we explore how to provide these services safely with the help of an intelligent automated transportation model built on a vehicular delay-tolerant network. To address this scenario, we propose an intelligent transportation system for automated medical services that prevents healthcare workers from becoming infected during testing and health-data collection by collaborating with a delay-tolerant network of vehicles in intelligent transport systems. The proposed model automatically categorizes and filters infected patients, providing medical facilities based on their illnesses. Our mathematical evaluation and simulation results affirm the effectiveness and feasibility of the proposed model, highlighting its strength compared to existing state-of-the-art protocols.
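As an illustration of the categorize-and-filter step, the following is a minimal rule-based sketch; the vital-sign fields, thresholds, and service categories are hypothetical stand-ins for the paper's actual model, not clinical guidance.

```python
def categorize_patient(vitals: dict) -> str:
    # Map contactlessly collected health data to a medical service category.
    # All thresholds below are illustrative assumptions.
    if vitals["spo2_pct"] < 90 or vitals["temp_c"] >= 39.5:
        return "ambulance_icu"          # urgent transport by automated ambulance
    if vitals["temp_c"] >= 38.0 or vitals["symptom_days"] > 3:
        return "isolation_ward"         # scheduled pickup for in-patient care
    return "home_monitoring"            # periodic remote primary health tests

# Example: categorize_patient({"spo2_pct": 88, "temp_c": 37.6, "symptom_days": 2})
# -> "ambulance_icu"
```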

Citations: 0
A Digital Twin-based multi-objective optimized task offloading and scheduling scheme for vehicular edge networks
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-09-04 | DOI: 10.1016/j.future.2024.107517

Traditional research on vehicular edge computing often assumes that the requested and processed task types are the same or that the edge servers have identical computing resources, ignoring the heterogeneity of task types in mobile vehicles and of the services provided by edge servers. Meanwhile, when Deep Reinforcement Learning (DRL) is used to process vehicular edge tasks, the complexity of the vehicular edge environment and the large amount of real-time data required by DRL are often ignored. Furthermore, traditional offloading and scheduling models are usually based on idealized models with deterministic task quantities and a single objective (such as latency or energy consumption). This paper proposes a Digital Twin (DT)-based multi-objective optimized task offloading and scheduling scheme for vehicular edge networks to address these issues. To cope with the complexity of vehicular edge environments and the need for large amounts of real-time data for DRL, this paper designs a DT-assisted vehicular edge environment. To tackle task heterogeneity in mobile vehicles and service differentiation across edge servers, a computation model based on Deep Neural Network (DNN) partitioning with an early-exit mechanism is proposed, which leverages the resources of both mobile vehicles and edge servers to reduce the time and energy consumed by DNN tasks during computation. For the uncertain quantity of DNN tasks, a scheduling model based on a pointer network and Asynchronous Advantage Actor-Critic (A3C) is proposed, which exploits the pointer network's ability to handle variable-length sequence problems and trains the pointer network with the A3C algorithm for improved performance. Moreover, this paper introduces the joint optimization of multiple metrics, including energy consumption and latency. Experimental comparative analysis demonstrates that the proposed scheme outperforms other schemes and reduces both time and energy consumption.
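The sketch below illustrates the kind of decision a DNN-partitioning model makes: enumerate cut points, charge the vehicle for the layers it runs locally plus the upload of the intermediate tensor, charge the edge server for the rest, and pick the cut with the lowest weighted time-energy cost. The per-layer costs, the weights, and the omission of the early-exit branches are simplifying assumptions, not the paper's formulation.

```python
from typing import List

def choose_partition(local_ms: List[float], edge_ms: List[float],
                     upload_ms: List[float], local_energy_mj: List[float],
                     w_time: float = 0.5, w_energy: float = 0.5) -> int:
    """Return the cut point p: layers [0, p) run on the vehicle, the intermediate
    tensor is uploaded, and layers [p, n) run on the edge server.
    upload_ms[p] is the transfer time at cut point p (p == n means fully local)."""
    n = len(local_ms)
    best_cost, best_p = float("inf"), 0
    for p in range(n + 1):
        latency = sum(local_ms[:p]) + (upload_ms[p] if p < n else 0.0) + sum(edge_ms[p:])
        energy = sum(local_energy_mj[:p])        # only vehicle-side energy is charged
        cost = w_time * latency + w_energy * energy
        if cost < best_cost:
            best_cost, best_p = cost, p
    return best_p
```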

Citations: 0
Olsync: Object-level tiering and coordination in tiered storage systems based on software-defined network
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-09-04 | DOI: 10.1016/j.future.2024.107521

With the adoption of new storage technologies like NVMs, tiered storage has gained popularity in large-scale, hyper-converged clusters. The storage back-end of hyper-converged systems supports data storage on devices such as SSDs and HDDs, yet lacks fine-grained tiered storage solutions. For example, Ceph selects storage nodes based primarily on limited criteria, such as node storage capacity, disregarding the diverse performance characteristics of various storage media. In this study, we introduce Olsync, an object-level tiering and coordination system designed to enhance storage resource utilization and data access performance. Specifically, Olsync employs PIPO (Packet-In-Packet-Out), an innovative network communication framework based on Software-defined Networking (SDN), to collaboratively optimize both the network control plane and underlying data plane. Additionally, Olsync can offer efficient object-level tiering and coordination services using the global views obtained by PIPO (e.g., data access patterns and interfering object requests) to make tiered storage and performance optimization decisions. We incorporated the Olsync prototype into Ceph and performed a thorough comparison with contemporary state-of-the-art systems. The evaluation results demonstrate that Olsync significantly enhances system response time (up to 68%), I/O throughput (up to 24%), and 99th percentile latency (up to 16%) in various environments.
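The following is a minimal sketch of an object-level tiering policy driven by a global access view, placing hot objects on SSD and cold ones on HDD. The window, threshold, and tier names are illustrative assumptions, and PIPO's SDN-based collection of these statistics is not modeled.

```python
import time
from typing import Dict, List

class ObjectTierer:
    def __init__(self, hot_threshold: int = 50, window_s: float = 300.0) -> None:
        self.hot_threshold = hot_threshold      # accesses within the window to count as hot
        self.window_s = window_s
        self._accesses: Dict[str, List[float]] = {}   # object_id -> access timestamps

    def record_access(self, object_id: str) -> None:
        # In Olsync this view would be assembled centrally from network traffic;
        # here accesses are simply recorded locally.
        self._accesses.setdefault(object_id, []).append(time.time())

    def tier_of(self, object_id: str) -> str:
        now = time.time()
        recent = [t for t in self._accesses.get(object_id, []) if now - t <= self.window_s]
        self._accesses[object_id] = recent
        return "ssd" if len(recent) >= self.hot_threshold else "hdd"
```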

Citations: 0
Joint energy efficiency and network optimization for integrated blockchain-SDN-based internet of things networks
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-09-04 | DOI: 10.1016/j.future.2024.107519

Internet of Things (IoT) networks are poised to play a critical role in providing ultra-low-latency, high-bandwidth communications in various real-world IoT scenarios. Assuring end-to-end secure, energy-aware, reliable, real-time IoT communication is hard due to the heterogeneity and transient behavior of IoT networks. In addition, there is a lack of integrated approaches that efficiently schedule IoT tasks and holistically offload computation, and the computational limits of IoT systems hinder effective resource utilization. This paper makes three contributions to research on overcoming these problems in the context of distributed IoT systems that use the Software-Defined Networking (SDN) programmable control plane in symbiosis with blockchain, benefiting from the decentralized and efficient environment of distributed IoT transactions over Wide Area Networks (WANs). First, it introduces a blockchain-SDN architectural component to reinforce flexibility and trustworthiness and to improve the Quality of Service (QoS) of IoT networks. Second, it describes the design of an IoT-focused smart contract that implements the control logic to manage IoT data, detect and report suspected IoT nodes, and mitigate malicious traffic. Third, we introduce a novel consensus algorithm based on Proof-of-Authority (PoA) to reach agreement between blockchain-enabled IoT nodes, improve the reliability of IoT edge devices, and establish absolute trust among all smart IoT systems. Experimental results show that integrating SDN with blockchain outperforms traditional Proof-of-Work (PoW) and Practical Byzantine Fault Tolerance (PBFT) algorithms, delivering up to 68% lower latency, 87% higher transaction throughput, and 45% better energy savings.
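For context, the sketch below shows the core of a generic round-robin Proof-of-Authority rule: at each height only the in-turn, pre-approved validator may seal a block. The paper's consensus additionally incorporates IoT-specific trust management that is not reproduced here; the validator-selection rule and block fields are assumptions.

```python
from typing import List

class ProofOfAuthority:
    def __init__(self, validators: List[str]) -> None:
        # Authorities are known, pre-approved identities rather than anonymous miners.
        self.validators = list(validators)

    def proposer_for(self, height: int) -> str:
        # Round-robin: the in-turn authority for this block height.
        return self.validators[height % len(self.validators)]

    def validate(self, height: int, signer: str, prev_hash_ok: bool) -> bool:
        # Accept a block only if it is signed by the in-turn authority and
        # correctly extends the chain.
        return signer == self.proposer_for(height) and prev_hash_ok

# poa = ProofOfAuthority(["gateway-1", "gateway-2", "gateway-3"])
# poa.proposer_for(7)  # -> "gateway-2"
```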

Citations: 0
Towards benchmarking erasure coding schemes in object storage system: A systematic review
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-09-04 | DOI: 10.1016/j.future.2024.107522

Erasure Coding (EC) in cloud storage minimizes data replication by reconstructing data from parity fragments. This method enhances data redundancy and efficiency while reducing storage costs and improving fault tolerance, and it is more advantageous than replication in object storage systems. EC guarantees data integrity by ensuring lossless transmission of all coded pieces. As data volumes continue to increase rapidly, the time efficiency of the EC method becomes crucial for ensuring optimal system performance. Various factors, including the algorithm employed, data size, number of storage nodes, hardware resources, and network conditions, can influence the speed of EC operations. Although some literature covers various aspects, there is still a research gap in understanding the I/O activity, time efficiency, and fault tolerance of EC in object storage systems. Hence, our research aims to address these challenges in cloud-based object storage systems. We analyze and benchmark the data storage I/O performance of OpenStack Swift, focusing on the time efficiency of the Reed–Solomon (RS) algorithm across two datasets. Additionally, our contributions include benchmarking EC performance in both local and remote testbeds, utilizing the SimEDC simulator for comprehensive efficiency and fault-tolerance assessments. Moreover, we create a comprehensive dataset (MCSD-100) for benchmarking and conduct a systematic literature review. Finally, we identify and discuss future opportunities for enhancing EC in cloud-based object storage systems.
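As a quick worked example of the trade-off being benchmarked, the helper below computes the storage overhead and fault tolerance of an RS(k, m) layout versus 3-way replication; the RS(10, 4) parameters are illustrative and not tied to the paper's Swift configuration.

```python
def ec_profile(k: int, m: int, object_mb: float) -> dict:
    # An object is split into k data fragments and m parity fragments; any k of
    # the k + m fragments suffice to reconstruct it, so up to m losses are tolerated.
    return {
        "fragments": k + m,
        "tolerated_failures": m,
        "ec_overhead_x": (k + m) / k,          # RS(10, 4) -> 1.4x stored data
        "replication_overhead_x": 3.0,         # 3-way replication baseline
        "stored_mb": object_mb * (k + m) / k,
    }

print(ec_profile(k=10, m=4, object_mb=64))
# {'fragments': 14, 'tolerated_failures': 4, 'ec_overhead_x': 1.4,
#  'replication_overhead_x': 3.0, 'stored_mb': 89.6}
```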

Citations: 0
Trajectory privacy preservation model based on LSTM-DCGAN
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-09-01 | DOI: 10.1016/j.future.2024.107496

Rapid scientific and technological development has brought many innovations to electronic devices, greatly improving our daily lives. Nowadays, many apps require permission to access user location information, raising concerns about user privacy and making the protection of user trajectory information an important task. This paper proposes a novel model called LSTM-DCGAN that integrates LSTM (Long Short-Term Memory network) with DCGAN (Deep Convolutional Generative Adversarial Network). LSTM-DCGAN exploits the ability of LSTM to remember attributes in the trajectory data, and uses the generator and the discriminator of DCGAN to generate and discriminate trajectories. The proposed model is trained on real user trajectory data, and the experimental results are validated from the perspectives of both effectiveness and practicality. Results show that the proposed LSTM-DCGAN model outperforms similar methods in generating synthesized trajectories that resemble real trajectories in their temporal and spatial characteristics. In addition, various influencing factors are evaluated to investigate ways of further improving and optimizing the model. Overall, the proposed LSTM-DCGAN model achieves a balance between the effectiveness of privacy protection and the practicality of user trajectory data, and can thus be applied to safeguarding user trajectory information.
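A minimal PyTorch sketch of one plausible wiring of the two components is shown below: an LSTM-based generator maps a noise sequence to (lat, lon) points, and an LSTM-based discriminator scores trajectories. The layer sizes, the purely recurrent discriminator, and the absence of DCGAN's convolutional blocks are simplifying assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TrajectoryGenerator(nn.Module):
    """Maps a noise sequence to a synthetic trajectory of (lat, lon) points."""
    def __init__(self, noise_dim: int = 16, hidden_dim: int = 64) -> None:
        super().__init__()
        self.lstm = nn.LSTM(noise_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)            # (lat, lon) per step

    def forward(self, z: torch.Tensor) -> torch.Tensor:  # z: (batch, seq_len, noise_dim)
        out, _ = self.lstm(z)
        return self.head(out)                            # (batch, seq_len, 2)

class TrajectoryDiscriminator(nn.Module):
    """Scores how likely a trajectory is to be real."""
    def __init__(self, hidden_dim: int = 64) -> None:
        super().__init__()
        self.lstm = nn.LSTM(2, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, traj: torch.Tensor) -> torch.Tensor:  # traj: (batch, seq_len, 2)
        _, (h, _) = self.lstm(traj)
        return torch.sigmoid(self.head(h[-1]))              # probability of "real"

# fake = TrajectoryGenerator()(torch.randn(8, 32, 16))  # 8 trajectories of 32 points
```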

Citations: 0
Two-stage multi-objective optimization based on knowledge-driven approach: A case study on production and transportation integration
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-08-31 | DOI: 10.1016/j.future.2024.107494

Multi-objective evolutionary algorithms (MOEAs) have been widely applied to solve various optimization problems. Existing search models based on dominance and decomposition are extensively used in MOEAs to balance convergence and diversity during the search process. In this paper, we propose for the first time a two-stage MOEA based on a knowledge-driven approach (TMOK). The first stage aims to find a rough Pareto front through an improved nondominated sorting algorithm, whereas the second stage incorporates a dynamic learning mechanism into a decomposition-based search model to allocate computational resources reasonably. To further speed up the convergence of TMOK, we present a Markov chain-based TMOK (MTMOK), which can potentially capture variable dependencies. In particular, MTMOK employs the marginal probability distributions of single variables and an N-state Markov chain over two adjacent variables to extract valuable knowledge about the problem being solved. Moreover, a simple yet effective local search is embedded into MTMOK to improve solutions through variable neighborhood search procedures. To illustrate the potential of the proposed algorithms, we apply them to a distributed production and transportation-integrated problem encountered in many industries. Numerical results and comparisons on 54 test instances of different sizes verify the effectiveness of TMOK and MTMOK. We have made the 54 instances and the source code of our algorithms publicly available to support future research and real-life applications.
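To make the first stage concrete, the sketch below gives a plain O(n^2) nondominated sort into Pareto fronts (minimization assumed). It is a textbook baseline standing in for the paper's improved sorting algorithm; the knowledge-driven second stage and the Markov-chain model are not reproduced here.

```python
from typing import List, Sequence, Tuple

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    # a dominates b if it is no worse in every objective and strictly better in one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_sort(objectives: List[Tuple[float, ...]]) -> List[List[int]]:
    """Partition solution indices into successive Pareto fronts."""
    fronts, remaining = [], set(range(len(objectives)))
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(objectives[j], objectives[i])
                            for j in remaining if j != i)}
        fronts.append(sorted(front))
        remaining -= front
    return fronts

# nondominated_sort([(1, 5), (2, 2), (3, 1), (4, 4)]) -> [[0, 1, 2], [3]]
```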

Citations: 0
Software stewardship and advancement of a high-performance computing scientific application: QMCPACK
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-08-31 | DOI: 10.1016/j.future.2024.107502

We provide an overview of the software engineering efforts and their impact in QMCPACK, a production-level ab initio Quantum Monte Carlo open-source code targeting high-performance computing (HPC) systems. Aspects covered include: (i) strategic expansion of continuous integration (CI) targeting CPUs, using GitHub Actions' own runners, as well as NVIDIA and AMD GPUs used in pre-exascale systems; (ii) incremental reduction of memory leaks using sanitizers; (iii) incorporation of Docker containers for CI and reproducibility; and (iv) refactoring efforts to improve maintainability, testing coverage, and memory lifetime management. We quantify the value of these improvements by providing metrics that illustrate the shift towards a predictive, rather than reactive, maintenance approach. Our goal in documenting the impact of these efforts on QMCPACK is to contribute to the body of knowledge on the importance of research software engineering (RSE) for the stewardship and advancement of community HPC codes, enabling scientific discovery at scale.

Citations: 0
Special Collection on Advances in Quantum Computing: Methods, Algorithms, and Systems
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-08-30 | DOI: 10.1016/j.future.2024.107503
{"title":"Special Collection on Advances in Quantum Computing: Methods, Algorithms, and Systems","authors":"","doi":"10.1016/j.future.2024.107503","DOIUrl":"10.1016/j.future.2024.107503","url":null,"abstract":"","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":null,"pages":null},"PeriodicalIF":6.2,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142229599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0