
Latest Publications in IEEE Transactions on Cloud Computing

MFSSE: Multi-Keyword Fuzzy Ranked Symmetric Searchable Encryption With Pattern Hidden in Mobile Cloud Computing
IF 5.3, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-07-19. DOI: 10.1109/TCC.2024.3430237. Vol. 12, Issue 4, pp. 1042-1057.
Dajiang Chen;Zeyu Liao;Zhidong Xie;Ruidong Chen;Zhen Qin;Mingsheng Cao;Hong-Ning Dai;Kuan Zhang
In this paper, we propose a novel Multi-keyword Fuzzy Symmetric Searchable Encryption (SSE) scheme with patterns hidden, namely MFSSE. In MFSSE, the search trapdoor is modified differently each time a multi-keyword search is performed, even when the keywords are the same, to prevent leakage of the search pattern. Moreover, MFSSE perturbs the search trapdoor with random false-negative and false-positive errors to resist access-pattern leakage. Furthermore, MFSSE relies on efficient cryptographic primitives (e.g., Locality-Sensitive Hashing) and lightweight operations (such as integer addition and matrix multiplication) to minimize the computation, communication, and storage overheads on mobile devices while meeting security and functional requirements. Specifically, its query process requires only a single round of communication, whose cost grows linearly with the number of documents in the database and is independent of both the total number of keywords and the number of queried keywords; its computational complexity for matching a document is $O(1)$; and it needs only a small, fixed amount of local storage (i.e., the secret key), making it suitable for mobile scenarios. The experimental results demonstrate that MFSSE prevents the leakage of access patterns and search patterns while keeping communication and computation overheads low.
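The abstract names two building blocks, Locality-Sensitive Hashing over keywords and deliberately injected false-positive/false-negative errors in the trapdoor. The sketch below illustrates only those two ideas: a SimHash-style signature over keyword bigrams stands in for the LSH step, and random bit flips make repeated trapdoors for the same keywords differ. The bigram encoding, signature length, flip probability, and scoring are illustrative assumptions, not MFSSE's actual construction (which encrypts the index and performs matching on the server).

```python
import random

DIM = 26 * 26          # bigram space for lowercase keywords (assumption)
NUM_HASHES = 64        # signature length (assumption)

def bigram_vector(keyword):
    """Map a keyword to a 0/1 bigram vector; small typos change few entries."""
    vec = [0] * DIM
    kw = keyword.lower()
    for a, b in zip(kw, kw[1:]):
        if 'a' <= a <= 'z' and 'a' <= b <= 'z':
            vec[(ord(a) - 97) * 26 + (ord(b) - 97)] = 1
    return vec

def lsh_signature(keywords, seed=0):
    """Random-hyperplane LSH (SimHash) over the summed bigram vectors."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(DIM)] for _ in range(NUM_HASHES)]
    summed = [0] * DIM
    for kw in keywords:
        for i, v in enumerate(bigram_vector(kw)):
            summed[i] += v
    return [1 if sum(p[i] * summed[i] for i in range(DIM)) >= 0 else 0
            for p in planes]

def trapdoor(keywords, flip_prob=0.05, seed=0):
    """Perturbed query signature: random bit flips make two queries for the
    same keywords look different (hiding the search pattern) at the cost of
    a small false-positive/false-negative rate."""
    sig = lsh_signature(keywords, seed)
    return [b ^ 1 if random.random() < flip_prob else b for b in sig]

def match_score(index_sig, query_sig):
    """Fraction of agreeing signature bits; documents are ranked by this."""
    return sum(a == b for a, b in zip(index_sig, query_sig)) / NUM_HASHES

# Toy usage: index two documents, then issue the same query twice.
doc_index = {
    "doc1": lsh_signature(["cloud", "encryption"]),
    "doc2": lsh_signature(["battery", "vehicle"]),
}
q1 = trapdoor(["cloud", "encrypton"])   # note the typo -> fuzzy match
q2 = trapdoor(["cloud", "encrypton"])   # same keywords, different trapdoor
print(q1 == q2)                          # almost surely False
print(sorted(doc_index, key=lambda d: -match_score(doc_index[d], q1)))
```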
Citations: 0
Security, Reliability, Cost, and Energy-Aware Scheduling of Real-Time Workflows in Compute-Continuum Environments
IF 5.3, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-07-10. DOI: 10.1109/TCC.2024.3426282. Vol. 12, Issue 3, pp. 954-965.
Ahmad Taghinezhad-Niar;Javid Taheri
Emerging computing paradigms such as mist, edge, and fog computing address challenges in the real-time processing of vast Internet of Things (IoT) applications, while cloud computing offers a suitable platform for executing services. Together they form a multi-tier computing environment, known as the compute continuum, that enhances data management and the execution of real-time tasks. The primary considerations in the compute continuum include variations in resource configuration and network architecture, the rental cost model, application security needs, energy consumption, transmission latency, and system reliability. To address these problems, we propose two scheduling algorithms (RCSECH and RSECH) for real-time multi-workflow scheduling frameworks. Both algorithms optimize rental cost, energy consumption, and task reliability when scheduling real-time workflows, treating deadlines and security requirements as constraints; RCSECH additionally factors in reliability alongside these constraints. The environment under investigation is a compute-continuum architecture consisting of mist, edge, fog, and cloud layers, each potentially composed of heterogeneous resources. The framework is evaluated via simulation experiments, with promising results: it can enhance reliability by up to 7%, reduce energy consumption by 8%, surpass reliability constraints by more than 25%, and generate cost savings of at least 15%.
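The abstract states the objectives (rental cost, energy, reliability) and the constraints (deadlines, security) but not the internals of RCSECH/RSECH, so the following is only a hedged illustration of a constrained, weighted greedy assignment over a mist/edge/fog/cloud resource list; every field name, weight, and number is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    tier: str            # mist / edge / fog / cloud
    speed: float         # work units per second
    cost_per_s: float    # rental cost
    power_w: float       # energy draw while busy
    reliability: float   # probability the task finishes without failure
    security: int        # supported security level

@dataclass
class Task:
    work: float          # work units
    deadline_s: float
    min_security: int

def pick_resource(task, resources, w_cost=1.0, w_energy=0.001, w_rel=10.0):
    """Greedy choice used purely for illustration: among resources that meet
    the deadline and the security requirement, minimize a weighted sum of
    rental cost, energy, and unreliability (1 - reliability)."""
    best, best_score = None, float("inf")
    for r in resources:
        runtime = task.work / r.speed
        if runtime > task.deadline_s or r.security < task.min_security:
            continue  # constraint violated
        score = (w_cost * r.cost_per_s * runtime
                 + w_energy * r.power_w * runtime
                 + w_rel * (1.0 - r.reliability))
        if score < best_score:
            best, best_score = r, score
    return best

resources = [
    Resource("m1", "mist", 1.0, 0.001, 5, 0.90, 1),
    Resource("e1", "edge", 4.0, 0.010, 20, 0.95, 2),
    Resource("f1", "fog", 8.0, 0.030, 60, 0.97, 2),
    Resource("c1", "cloud", 20.0, 0.080, 150, 0.999, 3),
]
print(pick_resource(Task(work=100, deadline_s=10, min_security=2), resources))
# -> the cloud resource is the only one meeting the 10 s deadline here
```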
Citations: 0
ɛ-LAP: A Lightweight and Adaptive Cache Partitioning Scheme With Prudent Resizing Decisions for Content Delivery Networks
IF 5.3, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-06-28. DOI: 10.1109/TCC.2024.3420454. Vol. 12, Issue 3, pp. 942-953.
Peng Wang;Yu Liu;Ziqi Liu;Zhelong Zhao;Ke Liu;Ke Zhou;Zhihai Huang
As dependence on Content Delivery Networks (CDNs) increases, there is a growing need for innovative solutions that optimize cache performance amid increasing traffic and complicated cache-sharing workloads. Allocating exclusive resources to applications in CDNs boosts the overall cache hit ratio (OHR) and thereby efficiency. However, the traditional method of building a miss ratio curve (MRC) is unsuitable for CDNs because of the diverse sizes of items and the vast number of applications, leading to high computational overhead and inconsistent performance. To tackle this issue, we propose a lightweight and adaptive cache partitioning scheme called $\varepsilon$-LAP. The scheme maintains a shadow cache for each partition and sorts the partitions by the average number of hits per granularity unit in their shadow caches. During partition resizing, $\varepsilon$-LAP transfers storage capacity, measured in units of granularity, from the $(N-k+1)$-th ($k\leq \frac{N}{2}$) partition to the $k$-th partition. A learning threshold parameter, $\varepsilon$, is also introduced to decide prudently when to resize partitions, improving caching efficiency; this eliminates about 96.8% of unnecessary partition resizing without compromising performance. When deployed in PicCloud at Tencent, $\varepsilon$-LAP improved OHR by 9.34% and reduced average user access latency by 12.5 ms. Experimental results show that $\varepsilon$-LAP outperforms other cache partitioning schemes in terms of both OHR and access latency, and that it adapts effectively to workload variations.
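The resizing rule is described concretely enough to sketch: partitions are ranked by average shadow-cache hits per granularity unit, one unit of capacity moves from the $(N-k+1)$-th to the $k$-th partition for $k\leq \frac{N}{2}$, and the threshold $\varepsilon$ suppresses low-benefit moves. The toy function below follows that description on plain Python lists; it is an approximation for intuition, not the deployed implementation.

```python
def resize_partitions(sizes, shadow_hits, epsilon=0.1, unit=1):
    """One resizing round in the spirit of the abstract: partitions are sorted
    by average shadow-cache hits per granularity unit; for k <= N/2, one unit
    of capacity moves from the (N-k+1)-th (coldest) to the k-th (hottest)
    partition, but only if the hot/cold gap exceeds the learning threshold
    epsilon. `sizes` and `shadow_hits` are per-partition lists."""
    n = len(sizes)
    # average hits per unit of capacity, highest first
    avg = [shadow_hits[i] / max(sizes[i], 1) for i in range(n)]
    order = sorted(range(n), key=lambda i: avg[i], reverse=True)

    new_sizes = list(sizes)
    for k in range(n // 2):
        hot, cold = order[k], order[n - 1 - k]
        # prudent resizing: skip pairs whose benefit gap is below epsilon
        if avg[hot] - avg[cold] <= epsilon or new_sizes[cold] <= unit:
            continue
        new_sizes[cold] -= unit
        new_sizes[hot] += unit
    return new_sizes

# Toy example: four partitions, the first is clearly hotter in its shadow cache.
print(resize_partitions(sizes=[10, 10, 10, 10],
                        shadow_hits=[120, 40, 35, 10], epsilon=0.5))
# -> [11, 10, 10, 9]: only the hottest/coldest pair clears the epsilon gap
```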
Citations: 0
Secure and Flexible Coded Distributed Matrix Multiplication Based on Edge Computing for Industrial Metaverse
IF 5.3, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-06-18. DOI: 10.1109/TCC.2024.3415165. Vol. 12, Issue 4, pp. 1026-1041.
Houming Qiu;Kun Zhu;Dusit Niyato
The Industrial Metaverse is driving a new revolution in smart manufacturing by reproducing the real industrial environment in a virtual space. Real-time synchronization and rendering of all industrial factors produce numerous time-sensitive and computation-intensive tasks, especially matrix multiplications. Distributed edge computing (DEC) can handle these tasks thanks to its low latency and powerful computing capacity. In this paper, we propose an efficient and reliable coded DEC framework for computing large-scale matrix multiplication tasks. However, the existence of stragglers causes high computation latency, which seriously limits the application of DEC in the Industrial Metaverse. To mitigate the impact of stragglers, we design a secure and flexible PolyDot (SFPD) code that provides information-theoretic security (ITS). The proposed SFPD code brings several improvements. First, it achieves a smaller recovery threshold than existing codes in almost all settings. Compared with the original PolyDot codes, the SFPD code accounts for the extra workers required to add ITS protection. It also offers a flexible tradeoff between the recovery threshold and the communication and computation loads by simply adjusting two storage parameters, $p$ and $t$. Furthermore, as an important application scenario, the SFPD code is employed to secure model training in machine learning, alleviating straggler effects while protecting the ITS of the raw data. Experiments demonstrate that the SFPD code significantly speeds up the training process while providing ITS for the data. Finally, we provide a comprehensive performance analysis that shows the superiority of the SFPD code.
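The SFPD construction itself is not given in the abstract, so the sketch below only demonstrates the underlying polynomial-coding idea that yields straggler tolerance: a MatDot-style code (without the security layer) in which any $2p-1$ worker results suffice to interpolate $AB$. The block count, evaluation points, and matrix sizes are arbitrary choices for the toy example.

```python
import numpy as np

def matdot_encode(A, B, p, xs):
    """MatDot-style polynomial encoding (illustrative, not the paper's SFPD
    code): split A into p column blocks and B into p row blocks, then each
    worker w receives the evaluations A(x_w) and B(x_w) and multiplies them."""
    Ablocks = np.split(A, p, axis=1)            # A = [A_0 | ... | A_{p-1}]
    Bblocks = np.split(B, p, axis=0)
    tasks = []
    for x in xs:
        Ax = sum(Ablocks[i] * (x ** i) for i in range(p))
        Bx = sum(Bblocks[j] * (x ** (p - 1 - j)) for j in range(p))
        tasks.append((Ax, Bx))
    return tasks

def matdot_decode(results, xs, p):
    """Recover C = A @ B from any 2p-1 worker results: C is the coefficient
    of x^{p-1} in the degree-(2p-2) matrix polynomial A(x) B(x), obtained by
    Lagrange interpolation at the returned evaluation points."""
    k = 2 * p - 1
    pts, vals = xs[:k], results[:k]
    C = np.zeros_like(vals[0])
    for i in range(k):
        # Lagrange basis polynomial l_i(x); we need its x^{p-1} coefficient
        li = np.poly1d([1.0])
        for j in range(k):
            if j != i:
                li *= np.poly1d([1.0, -pts[j]]) / (pts[i] - pts[j])
        C += vals[i] * li.coeffs[::-1][p - 1]
    return C

p = 2
A = np.arange(16, dtype=float).reshape(4, 4)
B = np.ones((4, 3))
xs = [1.0, 2.0, 3.0, 4.0]                      # 4 workers, any 3 suffice
tasks = matdot_encode(A, B, p, xs)
results = [Ax @ Bx for Ax, Bx in tasks]        # computed by the workers
# pretend worker 1 straggles: decode from workers 0, 2, 3
C = matdot_decode([results[0], results[2], results[3]],
                  [xs[0], xs[2], xs[3]], p)
print(np.allclose(C, A @ B))                    # True
```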
Citations: 0
Non-Clairvoyant Scheduling of Distributed Machine Learning With Inter-Job and Intra-Job Parallelism on Heterogeneous GPUs
IF 5.3, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-06-14. DOI: 10.1109/TCC.2024.3414440. Vol. 12, Issue 4, pp. 1011-1025.
Fahao Chen;Peng Li;Celimuge Wu;Song Guo
Distributed machine learning (DML) has shown great promise in accelerating model training on multiple GPUs. To increase GPU utilization, a common practice is to let multiple learning jobs share GPU clusters, where the most fundamental and critical challenge is how to schedule these jobs on GPUs efficiently. However, existing work on DML job scheduling is constrained to settings with homogeneous GPUs. GPU heterogeneity is common in practice, but its influence on scheduling multiple DML jobs has seldom been studied. Moreover, DML jobs have internal structures with great parallelism potential that has not yet been fully exploited in heterogeneous computing environments. In this paper, we propose Hare, a DML job scheduler that exploits both inter-job and intra-job parallelism in a heterogeneous GPU cluster. Hare adopts a relaxed fixed-scale synchronization scheme that allows independent tasks to be flexibly scheduled within a training round. Given full knowledge of job arrival times and sizes, we propose a fast heuristic algorithm that minimizes the average job completion time and derive its theoretical bound. Without prior knowledge of jobs, we propose an online algorithm based on the Heterogeneity-aware Least-Attained Service (HLAS) policy. We evaluate Hare using a small-scale testbed and a trace-driven simulator. The results show that it outperforms the state of the art, achieving a performance improvement of about 2.94×.
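For the online setting, the abstract names a Heterogeneity-aware Least-Attained Service (HLAS) policy. The toy simulator below illustrates the least-attained-service idea with attained work scaled by GPU speed; it is a simplification for intuition (for instance, it lets one job occupy several GPUs at once) and not Hare's scheduler.

```python
import heapq

def hlas_schedule(jobs, gpus, quantum=1.0):
    """Toy scheduler in the spirit of Least-Attained Service: whenever a GPU
    frees up, it runs a quantum of the job that has attained the least
    service so far. `gpus` maps GPU id -> relative speed, so the same
    wall-clock quantum on a fast GPU counts as more attained service
    (a simple way to make the policy heterogeneity-aware). `jobs` maps
    job id -> remaining work. Returns (job, completion time) pairs."""
    attained = {j: 0.0 for j in jobs}
    remaining = dict(jobs)
    # event queue of (time, gpu_id); all GPUs are free at t = 0
    events = [(0.0, g) for g in gpus]
    heapq.heapify(events)
    finished = []
    while remaining:
        t, g = heapq.heappop(events)
        # pick the alive job with the least attained service
        job = min(remaining, key=lambda j: attained[j])
        work_done = min(remaining[job], quantum * gpus[g])
        remaining[job] -= work_done
        attained[job] += work_done
        if remaining[job] <= 1e-9:
            del remaining[job]
            finished.append((job, t + work_done / gpus[g]))
        heapq.heappush(events, (t + work_done / gpus[g], g))
    return finished

# Two fast GPUs and one slow GPU; three jobs of different sizes.
print(hlas_schedule(jobs={"j1": 4.0, "j2": 8.0, "j3": 2.0},
                    gpus={"g_fast1": 2.0, "g_fast2": 2.0, "g_slow": 1.0}))
# -> small jobs finish early, e.g. [('j3', 1.5), ('j1', 2.0), ('j2', 3.0)]
```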
Citations: 0
An Adaptive Cloud Resource Quota Scheme Based on Dynamic Portraits and Task-Resource Matching
IF 5.3, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-06-11. DOI: 10.1109/TCC.2024.3410390. Vol. 12, Issue 4, pp. 996-1010.
Zuodong Jin;Dan Tao;Peng Qi;Ruipeng Gao
Because cloud resources are not tied to any particular location, an increasing number of users are opting to apply for them. However, determining the appropriate resource quota has always been a challenge for applicants: excessive quotas waste resources, while insufficient quotas pose stability risks. An adaptive quota scheme for cloud resources is therefore needed. Most existing research designs fixed quota schemes for all users without considering the differences among them. To solve this, we propose an adaptive cloud quota scheme based on dynamic portraits and optimal task-resource matching. Specifically, we first aggregate information along three dimensions (textual, statistical, and fractal) to establish dynamic portraits. On this basis, a bidirectional mixture of experts (Bi-MoE) model is designed to match tasks with the most suitable resource combinations. Moreover, we define time-varying rewards and use portrait-based reinforcement learning (PRL) to obtain the optimal quotas, which ensures stability and reduces waste. Extensive simulation results demonstrate that the proposed scheme achieves a memory utilization rate of around 70% and improves task execution stability, throughput, and the percentage of effective execution time.
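The abstract mentions time-varying rewards that trade waste against stability but does not spell them out. The snippet below is one plausible, clearly hypothetical shape for such a reward, exponentially decayed history with a heavier penalty on under-provisioning, used here only to compare candidate quotas; the weights and usage trace are made up.

```python
def quota_reward(quota, usage_trace, waste_w=1.0, shortfall_w=5.0, decay=0.9):
    """Illustrative time-varying reward for a candidate memory quota: recent
    time steps weigh more (decay), over-provisioning is penalized as waste,
    and under-provisioning is penalized more heavily as a stability risk.
    The weights are hypothetical, not the paper's."""
    reward, weight = 0.0, 1.0
    for used in reversed(usage_trace):          # most recent sample first
        if used <= quota:
            reward -= waste_w * (quota - used) * weight
        else:
            reward -= shortfall_w * (used - quota) * weight
        weight *= decay
    return reward

usage = [3.1, 3.4, 3.2, 4.0, 3.8]               # GiB used per time step
candidates = [3.5, 4.0, 4.5, 5.0]
print(max(candidates, key=lambda q: quota_reward(q, usage)))   # -> 4.0
```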
Citations: 0
Multi-Data Center Tie-Line Power Smoothing Method Based on Demand Response
IF 5.3, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-06-06. DOI: 10.1109/TCC.2024.3410377. Vol. 12, Issue 4, pp. 983-995.
Ting Yang;Yuxing Hou;Shaotang Cai;Jie Yu;Haibo Pen
Geographically distributed data centers (DCs) have emerged as significant energy consumers, which has led to the integration of renewable energy sources (RES) into DC power provisioning systems. However, the intermittent nature of RES and the randomness of user requests can cause significant fluctuations in DC operating power, which can be detrimental to the operation of IT equipment and lead to instability in the power grid. In this paper, targeting tightly coupled interconnection scenarios with multiple data centers in different regions, a multi-data center tie-line power smoothing method based on demand response is proposed. By modulating the power load of server clusters through workload scheduling, we establish a control model that combines intra-DC temporal task migration with inter-DC spatial task migration to handle high-frequency power fluctuations, and an uninterruptible power supply (UPS) battery control model to handle low-frequency fluctuations. Furthermore, we design a two-stage heuristic power regulation algorithm that achieves the best smoothing effect by tracking power targets in real time after two-layer filtering. Finally, this paper performs a detailed simulation-based performance evaluation using tracking data from a real DC together with wind and photovoltaic (PV) generation data, taking four interconnected DC parks of different sizes across different regions as examples. The simulation results demonstrate that the proposed method effectively smooths the multi-data center tie-line power. Additionally, inter-DC task migration is a viable way to overcome the limited task-migration response within a single DC, reducing the frequency of UPS battery bank charges and discharges and in turn prolonging their service life. This approach facilitates the utilization of RES while maintaining power quality, and it also helps reduce the escalating operation and maintenance expenses of DCs.
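The control split, high-frequency fluctuations absorbed by task migration and low-frequency ones by the UPS batteries after two-layer filtering, can be illustrated with two moving-average filters. The window lengths and the synthetic trace below are assumptions, not the paper's filter design.

```python
import numpy as np

def split_power(power, w_long=60, w_short=5):
    """Two-layer moving-average filtering of a tie-line power trace
    (illustrative decomposition, not the paper's exact filters): the
    long-window average is the smoothed target the grid sees, the band
    between the long and short averages is the low-frequency part assigned
    to the UPS batteries, and what remains is the high-frequency part
    absorbed by temporal/spatial task migration."""
    power = np.asarray(power, dtype=float)
    kernel_long = np.ones(w_long) / w_long
    kernel_short = np.ones(w_short) / w_short
    smooth = np.convolve(power, kernel_long, mode="same")     # grid target
    mid = np.convolve(power, kernel_short, mode="same")
    battery = mid - smooth          # low-frequency band -> UPS charge/discharge
    migration = power - mid         # high-frequency band -> workload scheduling
    return smooth, battery, migration

# Synthetic trace: a slow ramp plus renewable-style jitter.
t = np.arange(600)
trace = 500 + 80 * np.sin(t / 100) + 20 * np.random.randn(600)
smooth, battery, migration = split_power(trace)
print(np.allclose(trace, smooth + battery + migration))        # True
print(np.abs(np.diff(trace)).max(), np.abs(np.diff(smooth)).max())
# the per-step ramp the grid sees drops sharply after smoothing
```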
Citations: 0
Multi-Dimensional Flat Indexing for Encrypted Data
IF 5.3, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-06-04. DOI: 10.1109/TCC.2024.3408905. Vol. 12, Issue 3, pp. 928-941.
Sabrina De Capitani di Vimercati;Dario Facchinetti;Sara Foresti;Gianluca Oldani;Stefano Paraboschi;Matthew Rossi;Pierangela Samarati
We address the problem of indexing encrypted data outsourced to an external cloud server to support server-side execution of multi-attribute queries. Our approach partitions the dataset into groups with the same number of tuples and associates all tuples in a group with the same combination of index values, so as to guarantee protection against static inferences. Our indexing approach does not require any modifications to the server-side software stack and requires only limited storage at the client for query support. The experimental evaluation considers both a relational database (PostgreSQL) and a key-value database (Redis) for storing the encrypted and indexed dataset. We carried out extensive experiments evaluating client-side storage requirements and query performance, and the results confirm the efficiency of our solution. The proposal is supported by an open-source implementation.
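A single-attribute, unpadded simplification of the flat-indexing idea can be sketched directly from the abstract: equal-size groups of tuples share one random index token, the client keeps only a compact group-to-range map, and queries are translated into token sets. The paper's scheme is multi-attribute and handles padding and updates; none of that is shown here, and all names below are illustrative.

```python
import os, random

def build_flat_index(tuples, attr, group_size):
    """Flattened single-attribute index (a simplified sketch of the idea in
    the abstract, not the paper's multi-attribute construction): tuples are
    sorted on `attr` and cut into groups of exactly `group_size`, every tuple
    in a group gets the same random index token, and the client keeps only
    the compact group -> value-range map needed to translate queries."""
    rows = sorted(tuples, key=lambda r: r[attr])
    server_rows, client_map = [], []
    for start in range(0, len(rows), group_size):
        group = rows[start:start + group_size]
        if len(group) < group_size:
            break                       # a real scheme pads; we just stop
        token = os.urandom(4).hex()     # same index value for the whole group
        client_map.append((group[0][attr], group[-1][attr], token))
        for r in group:
            server_rows.append({"enc_tuple": f"Enc({r})", "idx": token})
    random.shuffle(server_rows)         # the server sees no ordering
    return server_rows, client_map

def tokens_for_range(client_map, low, high):
    """Client-side query translation: which group tokens may hold values in
    [low, high]? The server is asked for exactly those tokens."""
    return [tok for lo, hi, tok in client_map if hi >= low and lo <= high]

data = [{"id": i, "salary": s}
        for i, s in enumerate([30, 55, 42, 61, 38, 70, 48, 52])]
server, cmap = build_flat_index(data, "salary", group_size=2)
print(tokens_for_range(cmap, 45, 60))   # tokens of the two groups overlapping [45, 60]
```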
Citations: 0
How to Securely and Efficiently Solve the Large-Scale Modular System of Linear Equations on the Cloud
IF 5.3, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-06-03. DOI: 10.1109/TCC.2024.3408240. Vol. 12, Issue 3, pp. 913-927.
Chengliang Tian;Jia Yu;Panpan Meng;Guoyan Zhang;Weizhong Tian;Yan Zhang
Cloud-assisted computation empowers resource-constrained clients to efficiently tackle computationally intensive tasks by outsourcing them to resource-rich cloud servers. In the current era of Big Data, the widespread need to solve large-scale modular linear systems of equations ($\mathcal{LMLSE}$) of the form $\mathbf{A}\mathbf{x}\equiv \mathbf{b} \bmod q$ poses a significant challenge, particularly for lightweight devices. This paper delves into the secure outsourcing of $\mathcal{LMLSE}$ under a malicious single-server model and, to the best of our knowledge, introduces the first protocol tailored to this specific context. The cornerstone of our protocol lies in a novel matrix encryption method based on sparse unimodular matrix transformations. This technique gives our protocol several key advantages. First and foremost, it ensures robust privacy for all computation inputs, encompassing $\mathbf{A}$, $\mathbf{b}$, and $q$, as well as the output $\mathbf{x}$, as validated by thorough theoretical analysis. Second, the protocol delivers optimal verifiability, enabling clients to detect cloud server misbehavior with probability 1. Furthermore, it is highly efficient, requiring only a single interaction between the client and the cloud server and significantly reducing local client time costs. For an $m$-by-$n$ matrix $\mathbf{A}$, a given parameter $\lambda = \omega(\log q)$, and $\rho = 2.371552$, the time complexity is reduced from $O(\max\lbrace mn^{\rho-1}, m^{\rho-2}n^{2}\rbrace \cdot (\log q)^{2})$ to $O((mn+m^{2})\lambda \log q + mn(\log q)^{2})$. The comprehensive results of our experimental performance evaluations substantiate the protocol's practical efficiency and effectiveness.
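The abstract does not give the protocol, but the blind, solve, unblind, verify flow built on sparse unimodular transformations can be sketched as follows. The toy below uses a small prime modulus, a naive GF(q) elimination standing in for the cloud server's solver, and products of elementary matrices as the "encryption"; it illustrates correctness and client-side verification only, not the protocol's actual privacy or verifiability guarantees.

```python
import random
import numpy as np

Q = 10_007      # small prime modulus so plain int64 arithmetic cannot overflow

def rand_unimodular(n, rng, ops=None):
    """Product of a few elementary 'add c times row j to row i' matrices:
    sparse, determinant 1, hence invertible modulo any modulus."""
    U = np.eye(n, dtype=np.int64)
    for _ in range(ops or 3 * n):
        i, j = rng.sample(range(n), 2)
        E = np.eye(n, dtype=np.int64)
        E[i, j] = rng.randrange(1, Q)
        U = (E @ U) % Q
    return U

def solve_mod(A, b, q):
    """Gaussian elimination over GF(q), q prime: the work the cloud server
    performs on the blinded system (needs Python 3.8+ for pow(x, -1, q))."""
    n = len(b)
    M = np.concatenate([A % q, (b % q).reshape(-1, 1)], axis=1)
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r, col] % q)
        M[[col, piv]] = M[[piv, col]]
        M[col] = (M[col] * pow(int(M[col, col]), -1, q)) % q
        for r in range(n):
            if r != col and M[r, col] % q:
                M[r] = (M[r] - M[r, col] * M[col]) % q
    return M[:, n]

# --- toy run of the blind / solve / unblind / verify flow ---
rng = random.Random(7)
n = 4
A = np.array([[rng.randrange(Q) for _ in range(n)] for _ in range(n)],
             dtype=np.int64)
x_true = np.array([rng.randrange(Q) for _ in range(n)], dtype=np.int64)
b = (A @ x_true) % Q

U, V = rand_unimodular(n, rng), rand_unimodular(n, rng)   # client-side secrets
A_blind = (((U @ A) % Q) @ V) % Q                         # what the server sees
b_blind = (U @ b) % Q

y = solve_mod(A_blind, b_blind, Q)       # server solves U A V y = U b (mod Q)
x = (V @ y) % Q                          # client unblinds: A (V y) = b (mod Q)
print(np.array_equal((A @ x) % Q, b))    # client-side verification -> True
```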
Citations: 0
Decentralized Funding of Public Goods in Blockchain System: Leveraging Expert Advice
IF 6.5, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-04-30. DOI: 10.1109/TCC.2024.3394973. Vol. 12, Issue 2, pp. 725-736.
Jichen Li;Yukun Cheng;Wenhan Huang;Mengqian Zhang;Jiarui Fan;Xiaotie Deng;Jan Xie;Jie Zhang
Public goods projects, such as open-source technology, are essential for the blockchain ecosystem's growth. However, funding these projects effectively remains a critical issue within the ecosystem. Currently, the funding protocols for blockchain public goods lack professionalism and fail to learn from past experiences. To address this challenge, our research introduces a human oracle protocol involving public goods projects, experts, and funders. In our approach, funders contribute investments to a funding pool, while experts offer investment advice based on their expertise in public goods projects. The oracle's decisions on funding support are influenced by the reputations of the experts. Experts earn or lose reputation based on how well their project implementations align with their advice, with successful investments leading to higher reputations. Our oracle is designed to adapt to changing circumstances, such as experts exiting or entering the decision-making process. We also introduce a regret bound to gauge the oracle's effectiveness. Theoretically, we establish an upper regret bound for both static and dynamic models and demonstrate its closeness to an asymptotically equal lower bound. Empirically, we implement our protocol on a test chain and show that our oracle's investment decisions closely mirror optimal investments in hindsight.
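The reputation-weighted decision loop described in the abstract follows the classic learning-from-expert-advice template. The sketch below uses the standard multiplicative-weights (Hedge) update as a stand-in, with made-up advice and outcomes; it is not the paper's on-chain protocol or its regret analysis.

```python
import math
import random

def hedge_funding(expert_advice, outcomes, eta=0.3):
    """Classic multiplicative-weights ('Hedge') template for learning from
    expert advice, shown only to illustrate the reputation-weighted decision
    loop the abstract describes. expert_advice[t][e] in [0, 1] is expert e's
    recommended funding share for round t's project; outcomes[t] in [0, 1] is
    how well that project turned out. Reputations (weights) rise for experts
    whose advice matched the realized outcome."""
    n = len(expert_advice[0])
    weights = [1.0] * n                      # initial reputations
    total_loss = 0.0
    for advice, outcome in zip(expert_advice, outcomes):
        norm = sum(weights)
        decision = sum(w * a for w, a in zip(weights, advice)) / norm
        total_loss += abs(decision - outcome)        # oracle's loss this round
        # reputation update: experts far from the outcome lose weight
        weights = [w * math.exp(-eta * abs(a - outcome))
                   for w, a in zip(weights, advice)]
    best_expert_loss = min(
        sum(abs(advice[e] - out) for advice, out in zip(expert_advice, outcomes))
        for e in range(n))
    return total_loss, best_expert_loss      # regret = difference of the two

# Toy run: expert 0 is reliable, experts 1-2 are noisy.
random.seed(1)
T = 50
outcomes = [random.random() for _ in range(T)]
advice = [[min(1.0, max(0.0, o + random.gauss(0, 0.02))),   # reliable expert
           random.random(),                                  # noisy expert
           random.random()] for o in outcomes]
oracle_loss, best_loss = hedge_funding(advice, outcomes)
print(round(oracle_loss, 2), round(best_loss, 2))
# the gap (regret) stays small relative to the 50 rounds played
```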
Citations: 0