
Latest Publications in IEEE Transactions on Cloud Computing

$\varepsilon$-LAP: A Lightweight and Adaptive Cache Partitioning Scheme With Prudent Resizing Decisions for Content Delivery Networks
IF 5.3 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-06-28 | DOI: 10.1109/TCC.2024.3420454
Peng Wang;Yu Liu;Ziqi Liu;Zhelong Zhao;Ke Liu;Ke Zhou;Zhihai Huang
As dependence on Content Delivery Networks (CDNs) increases, there is a growing need for innovative solutions to optimize cache performance amid increasing traffic and complicated cache-sharing workloads. Allocating exclusive resources to applications in CDNs boosts the overall cache hit ratio (OHR), enhancing efficiency. However, the traditional method of creating the miss ratio curve (MRC) is unsuitable for CDNs due to the diverse sizes of items and the vast number of applications, leading to high computational overhead and performance inconsistency. To tackle this issue, we propose a lightweight and adaptive cache partitioning scheme called $\varepsilon$-LAP. This scheme uses a corresponding shadow cache for each partition and sorts the partitions based on the average hit numbers per granularity unit in the shadow caches. During partition resizing, $\varepsilon$-LAP transfers storage capacity, measured in units of granularity, from the $(N-k+1)$-th ($k \leq \frac{N}{2}$) partition to the $k$-th partition. A learning threshold parameter, $\varepsilon$, is also introduced to prudently determine when to resize partitions, improving caching efficiency. This eliminates about 96.8% of unnecessary partition resizing without compromising performance. When deployed in PicCloud at Tencent, $\varepsilon$-LAP improved OHR by 9.34% and reduced average user access latency by 12.5 ms. Experimental results show that $\varepsilon$-LAP outperforms other cache partitioning schemes in terms of both OHR and access latency, and it effectively adapts to workload variations.
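The resizing rule above (shift one granularity unit from the $(N-k+1)$-th to the $k$-th partition, but only when the benefit clears the threshold $\varepsilon$) can be sketched as follows. This is a hypothetical illustration: the field names, the hit-rate bookkeeping, and the single-unit transfer are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the eps-LAP resizing step described above.
# Partition fields ('size', 'shadow_hits') and the granularity unit
# are assumptions; the paper's actual bookkeeping may differ.

def resize_partitions(partitions, eps, unit=1):
    """partitions: list of dicts with 'size' and 'shadow_hits' (average
    hits per granularity unit observed in that partition's shadow cache)."""
    # Rank partitions so the one whose shadow cache profits most comes first.
    ranked = sorted(partitions, key=lambda p: p["shadow_hits"], reverse=True)
    n = len(ranked)
    moved = 0
    for k in range(n // 2):  # pair the k-th gainer with the (N-k+1)-th donor
        gainer, donor = ranked[k], ranked[n - 1 - k]
        # Prudent decision: resize only when the expected benefit
        # (hit-rate gap) exceeds the learned threshold eps.
        if gainer["shadow_hits"] - donor["shadow_hits"] > eps and donor["size"] > unit:
            donor["size"] -= unit
            gainer["size"] += unit
            moved += unit
    return moved
```

The threshold check is what suppresses the churn the abstract quantifies: small hit-rate gaps never trigger a transfer.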
Citations: 0
Secure and Flexible Coded Distributed Matrix Multiplication Based on Edge Computing for Industrial Metaverse
IF 6.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-06-18 | DOI: 10.1109/tcc.2024.3415165
Houming Qiu, Kun Zhu, Dusit Niyato
Citations: 0
Non-Clairvoyant Scheduling of Distributed Machine Learning with Inter-job and Intra-job Parallelism on Heterogeneous GPUs
IF 6.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-06-14 | DOI: 10.1109/tcc.2024.3414440
Fahao Chen, Peng Li, Celimuge Wu, Song Guo
Citations: 0
An Adaptive Cloud Resource Quota Scheme Based on Dynamic Portraits and Task-Resource Matching
IF 6.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-06-11 | DOI: 10.1109/tcc.2024.3410390
Zuodong Jin, Dan Tao, Peng Qi, Ruipeng Gao
Citations: 0
Multi-Data Center Tie-line Power Smoothing Method Based on Demand Response
IF 6.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-06-06 | DOI: 10.1109/tcc.2024.3410377
Ting Yang, Yuxing Hou, Shaotang Cai, Jie Yu, Haibo Pen
Citations: 0
Multi-Dimensional Flat Indexing for Encrypted Data
IF 5.3 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-06-04 | DOI: 10.1109/TCC.2024.3408905
Sabrina De Capitani di Vimercati;Dario Facchinetti;Sara Foresti;Gianluca Oldani;Stefano Paraboschi;Matthew Rossi;Pierangela Samarati
We address the problem of indexing encrypted data outsourced to an external cloud server to support server-side execution of multi-attribute queries. Our approach partitions the dataset into groups with the same number of tuples and associates all tuples in a group with the same combination of index values, so as to guarantee protection against static inferences. Our indexing approach does not require any modifications to the server-side software stack and requires only limited storage at the client for query support. The experimental evaluation considers, for the storage of the encrypted and indexed dataset, both a relational database (PostgreSQL) and a key-value database (Redis). We carried out extensive experiments evaluating client-storage requirements and query performance. The experimental results confirm the efficiency of our solution. The proposal is supported by an open-source implementation.
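A minimal sketch of the flat-indexing idea above: sort tuples on the indexed attribute, cut them into equal-size groups, and label every tuple in a group with the same index value, so each index value occurs with identical frequency and leaks nothing through static frequency analysis. Function and parameter names are illustrative, not the paper's API.

```python
# Illustrative sketch (not the paper's implementation): build a flat index
# by assigning the same group id, used as the index value, to each run of
# group_size consecutive tuples in sorted order.

def flat_index(tuples, key, group_size):
    ordered = sorted(tuples, key=key)
    labeled = []
    for i, t in enumerate(ordered):
        labeled.append((i // group_size, t))  # group id = shared index value
    return labeled
```

Because every group holds exactly `group_size` tuples, an observer of the encrypted, indexed dataset sees a flat histogram of index values.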
Citations: 0
How to Securely and Efficiently Solve the Large-Scale Modular System of Linear Equations on the Cloud
IF 5.3 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-06-03 | DOI: 10.1109/TCC.2024.3408240
Chengliang Tian;Jia Yu;Panpan Meng;Guoyan Zhang;Weizhong Tian;Yan Zhang
Cloud-assisted computation empowers resource-constrained clients to efficiently tackle computationally intensive tasks by outsourcing them to resource-rich cloud servers. In the current era of Big Data, the widespread need to solve large-scale modular linear systems of equations ($\mathcal{LMLSE}$) of the form $\mathbf{A}\mathbf{x} \equiv \mathbf{b} \bmod q$ poses a significant challenge, particularly for lightweight devices. This paper delves into the secure outsourcing of $\mathcal{LMLSE}$ under a malicious single-server model and, to the best of our knowledge, introduces the first protocol tailored to this specific context. The cornerstone of our protocol is a novel matrix encryption method based on sparse unimodular matrix transformations. This technique gives our protocol several key advantages. First, it ensures robust privacy for all computation inputs, encompassing $\mathbf{A}$, $\mathbf{b}$, $q$, and the output $\mathbf{x}$, as validated by thorough theoretical analysis. Second, the protocol delivers optimal verifiability, enabling clients to detect cloud server misbehavior with probability 1. Furthermore, it is highly efficient, requiring only a single interaction between the client and the cloud server and significantly reducing local client-side time costs. For an $m$-by-$n$ matrix $\mathbf{A}$, a given parameter $\lambda = \omega(\log q)$, and $\rho = 2.371552$, the time complexity is reduced from $O(\max\lbrace m n^{\rho-1}, m^{\rho-2} n^{2}\rbrace \cdot (\log q)^{2})$ to $O((mn+m^{2})\lambda \log q + mn(\log q)^{2})$. The comprehensive results of our experimental performance evaluations substantiate the protocol's practical efficiency and effectiveness.
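The verifiability property reduces, on the client side, to a cheap check that the returned vector actually solves the system modulo $q$; a toy version of that check is sketched below. The real protocol additionally masks $\mathbf{A}$ and $\mathbf{b}$ with sparse unimodular transforms before outsourcing, which is not shown here.

```python
# Toy client-side verification of an outsourced solution to A x = b (mod q).
# Dense Python lists for clarity; this only illustrates the final check,
# not the protocol's masking or encryption steps.

def verify_solution(A, x, b, q):
    for i in range(len(A)):
        # Compute the i-th entry of A x mod q and compare with b_i mod q.
        acc = sum(a_ij * x_j for a_ij, x_j in zip(A[i], x)) % q
        if acc != b[i] % q:
            return False
    return True
```

The check costs only $O(mn)$ modular multiplications, which is why a lightweight client can afford to run it locally after outsourcing the expensive solve.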
Citations: 0
Decentralized Funding of Public Goods in Blockchain System: Leveraging Expert Advice
IF 6.5 | CAS Tier 2 (Computer Science) | Q1 Computer Science | Pub Date: 2024-04-30 | DOI: 10.1109/TCC.2024.3394973
Jichen Li;Yukun Cheng;Wenhan Huang;Mengqian Zhang;Jiarui Fan;Xiaotie Deng;Jan Xie;Jie Zhang
Public goods projects, such as open-source technology, are essential for the blockchain ecosystem's growth. However, funding these projects effectively remains a critical issue within the ecosystem. Currently, the funding protocols for blockchain public goods lack professionalism and fail to learn from past experiences. To address this challenge, our research introduces a human oracle protocol involving public goods projects, experts, and funders. In our approach, funders contribute investments to a funding pool, while experts offer investment advice based on their expertise in public goods projects. The oracle's decisions on funding support are influenced by the reputations of the experts. Experts earn or lose reputation based on how well their project implementations align with their advice, with successful investments leading to higher reputations. Our oracle is designed to adapt to changing circumstances, such as experts exiting or entering the decision-making process. We also introduce a regret bound to gauge the oracle's effectiveness. Theoretically, we establish an upper regret bound for both static and dynamic models and demonstrate its closeness to an asymptotically equal lower bound. Empirically, we implement our protocol on a test chain and show that our oracle's investment decisions closely mirror optimal investments in hindsight.
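The reputation mechanism described above can be viewed as a multiplicative-weights scheme over experts: the oracle aggregates advice weighted by reputation, then scales each expert's reputation by how well the funded project matched their advice. The update rule and learning rate below are assumptions for illustration, not the paper's exact protocol.

```python
# Hedged sketch of reputation-weighted expert aggregation (assumed form:
# multiplicative weights). Names and the eta parameter are illustrative.

def weighted_decision(reputations, advice):
    """advice[i] in [0, 1]: expert i's recommended funding share."""
    total = sum(reputations)
    return sum(r * a for r, a in zip(reputations, advice)) / total

def update_reputations(reputations, losses, eta=0.5):
    """losses[i] in [0, 1]: how far expert i's advice was from the outcome.
    Accurate experts keep their reputation; inaccurate ones lose it."""
    return [r * (1 - eta * l) for r, l in zip(reputations, losses)]
```

Under this kind of update, standard multiplicative-weights analysis yields the sublinear regret bounds the abstract alludes to, with misbehaving or exiting experts simply decaying out of the weighted decision.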
Citations: 0
Are ARM Cloud Servers Ready for Database Workloads? An Experimental Study
IF 5.3 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-04-26 | DOI: 10.1109/TCC.2024.3393895
Dumitrel Loghin
Almost all major cloud providers offer virtual machines running on servers with 64-bit ARM CPUs. For example, Amazon Web Services (AWS) designed custom ARM-based CPUs named Graviton2 and Graviton3. Other cloud providers, such as Microsoft Azure and Google Cloud Platform (GCP), employ servers with Ampere Altra CPUs. In this context, we conduct a comprehensive experimental study covering in-memory key-value stores, relational databases, enterprise blockchains, and Machine Learning inference. We cover all the available types of ARM cloud processors, including Graviton2 (AWS), Graviton3 (AWS), Ampere Altra (Azure and GCP), Yitian 710 (Alibaba Cloud), and Kunpeng 920 (Huawei Cloud). Our analysis shows that Yitian and Graviton3 are serious competitors for servers with Intel Xeon CPUs, achieving similar or better results on in-memory workloads. However, the performance of OLAP, ML inference, and blockchain on ARM-based servers is below that of Xeon. The reasons are mainly threefold: 1) un-optimized software, 2) lower clock frequency, and 3) lower per-core performance. Surprisingly, ARM servers spend 2X more time in Linux kernel system calls than Xeon servers. Nonetheless, ARM-based servers show great potential. Given their lower cloud computing price, ARM servers could be the ideal choice when performance is not critical.
Citations: 0
Design and Evaluation of a Hierarchical Characterization and Adaptive Prediction Model for Cloud Workloads
IF 6.5 | CAS Tier 2 (Computer Science) | Q1 Computer Science | Pub Date: 2024-04-24 | DOI: 10.1109/TCC.2024.3393114
Karthick Seshadri;Korrapati Sindhu;S. Nagesh Bhattu;Chidambaran Kollengode
Workload characterization and subsequent prediction are significant steps in maintaining the elasticity and scalability of resources in Cloud Data Centers. Due to the high variance in cloud workloads, designing a prediction algorithm that models the variations in the workload is a non-trivial task. If the workload predictor cannot handle the dynamism in the workloads, its output may lead to over-provisioning or under-provisioning of cloud resources. To address this problem, we have created a Super Markov Prediction Model (SMPM) whose behaviour changes as the workload patterns change. As time progresses, SMPM uses different sequence models, selected according to the workload pattern, to predict the future workload. To evaluate the proposed model, we have experimented with Alibaba trace 2018, Google Cluster Trace (GCT), Alibaba trace 2020 and the TPC-W workload trace. We have compared SMPM's prediction results with existing state-of-the-art prediction models and empirically verified that the proposed model achieves better accuracy as quantified by Root Mean Square Error (RMSE) and Mean Absolute Error (MAE).
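A hedged sketch of the adaptive idea behind SMPM: maintain several candidate sequence predictors, score each one by replaying recent history, and let the currently best one issue the next forecast. The toy predictors and the scoring window here are assumptions; SMPM's actual sub-models are Markov-based.

```python
# Illustrative model-switching predictor (assumed mechanism, not SMPM's
# actual algorithm): pick whichever sub-model had the lowest absolute
# error over the last few observations, then use it for the next step.

def last_value(history):
    return history[-1]

def moving_average(history, w=3):
    window = history[-w:]
    return sum(window) / len(window)

def adaptive_predict(history, models, window=3):
    # Score each model by replaying the last `window` steps of history.
    def score(model):
        err = 0.0
        for t in range(max(1, len(history) - window), len(history)):
            err += abs(model(history[:t]) - history[t])
        return err
    best = min(models, key=score)
    return best(history)
```

On a steadily rising trace the last-value model wins the replay and the forecast tracks the trend, while on a flat trace either model suffices; this switching is what lets a single predictor cope with workload-pattern changes.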
Citations: 0