
Latest Publications in IEEE Transactions on Cloud Computing

Leakage-Suppressed Encrypted Keyword Queries Over Multiple Cloud Servers
IF 6.5, CAS Tier 2, Q1 Computer Science. Pub Date: 2023-11-15. DOI: 10.1109/TCC.2023.3333223
Yi Dou;Henry C. B. Chan
Searchable encryption is a technique that supports operations directly on encrypted data. However, searchable encryption is still vulnerable to attacks that exploit the leakages from encrypted query results. This article presents an effective multi-server searchable encryption scheme to prevent volume and access pattern leakages. To hide the volume leakage of a keyword, a new index construction is proposed to compress multiple results into one index. To prevent the attacker from observing the access pattern of injected records, the update and search phases are executed in batches, such that the server can only retrieve batches of fixed volume. To reduce co-occurrence leakage, we propose an index distribution algorithm. Both records and queries are dispatched among cloud servers such that the attacker cannot recover the trapdoor values by observing only one cloud server. We use the minimum $s-t$ cut algorithm to find the optimal assignment strategy, which reduces both the query response time and the information disclosure. We formally analyze the security strengths and conduct evaluations. The experimental results indicate that our designs strike a good balance between security and efficiency.
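The minimum $s-t$ cut step the abstract mentions rests on max-flow/min-cut duality: the value of a maximum s-t flow equals the capacity of a minimum s-t cut. The following is a generic, self-contained sketch of the standard Edmonds-Karp algorithm on a made-up capacity matrix; it illustrates the primitive only and is not the paper's actual assignment construction.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: max s-t flow (== minimum s-t cut value, by duality)."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # no augmenting path left: flow is maximum
            return total, flow
        # find the bottleneck capacity along the path
        v, bottleneck = t, float("inf")
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        # push the bottleneck amount along the path
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
```

On a toy graph with source 0, sink 3, and arcs 0→1 (cap 3), 0→2 (cap 2), 1→3 (cap 2), 2→3 (cap 3), the minimum cut value is 4.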
Citations: 0
Learning-Based Dynamic Memory Allocation Schemes for Apache Spark Data Processing
IF 6.5, CAS Tier 2, Q1 Computer Science. Pub Date: 2023-11-10. DOI: 10.1109/TCC.2023.3329129
Danlin Jia;Li Wang;Natalia Valencia;Janki Bhimani;Bo Sheng;Ningfang Mi
Apache Spark is an in-memory analytic framework that has been widely adopted in both industry and research. Two memory managers, Static and Unified, are available in Spark to allocate memory for caching Resilient Distributed Datasets (RDDs) and executing tasks. However, we find that the static memory manager (SMM) lacks flexibility, while the unified memory manager (UMM) puts heavy pressure on the garbage collection of the Java Virtual Machine (JVM) on which Spark resides. To address these issues, we design a learning-based bidirectional usage-bounded memory allocation scheme to support dynamic memory allocation, considering both memory demands and the latency introduced by garbage collection. We first develop an auto-tuning memory manager (ATuMm) that adopts an intuitive feedback-based learning solution. However, ATuMm is a slow learner that can only alter the state of the JVM heap within a limited range. That is, ATuMm increases or decreases the boundary between the execution and storage memory pools by a fixed portion of the JVM heap size. To overcome this shortcoming, we further develop a reinforcement learning-based memory manager (Q-ATuMm) that uses a Q-learning agent to dynamically learn and tune the partition of the JVM heap. We implement our new memory managers in Spark 2.4.0 and evaluate them by conducting experiments in a real Spark cluster. Our experimental results show that our memory manager can reduce the total garbage collection time and thus further improve Spark applications’ performance (i.e., reduced latency) compared to the existing Spark memory management solutions. By integrating our machine learning-driven memory manager into Spark, we can further obtain around a 1.3× reduction in latency.
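The Q-learning idea behind Q-ATuMm, reduced to a toy: a tabular agent nudges the storage/execution boundary (modeled here as the storage pool's fraction of the heap) toward a setting that minimizes a stand-in cost. Every name, the reward signal, and the "optimal" fraction below are invented for illustration; this is not Spark's instrumentation or the paper's actual agent.

```python
import random

def train(optimal=0.6, episodes=2000, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning over discretized boundary positions."""
    states = [round(0.1 * i, 1) for i in range(1, 10)]   # storage-pool fraction of the heap
    actions = [-1, 0, 1]                                 # shrink / keep / grow the boundary
    Q = {(s, a): 0.0 for s in states for a in actions}
    random.seed(0)
    for _ in range(episodes):
        s = random.choice(states)
        for _ in range(20):
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda x: Q[(s, x)])
            i = min(max(states.index(s) + a, 0), len(states) - 1)
            s2 = states[i]
            r = -abs(s2 - optimal)   # stand-in for (negative) GC-pause cost
            # standard Q-learning update
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, x)] for x in actions) - Q[(s, a)])
            s = s2
    return Q, states, actions
```

After training, the greedy policy should push a boundary that is too small upward and one that is too large downward, which is the qualitative behavior the abstract describes.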
Citations: 0
Improving LSM-Tree Based Key-Value Stores With Fine-Grained Compaction Mechanism
IF 6.5, CAS Tier 2, Q1 Computer Science. Pub Date: 2023-11-02. DOI: 10.1109/TCC.2023.3329646
Hui Sun;Guanzhong Chen;Yinliang Yue;Xiao Qin
LSM-tree-based key-value stores (KV stores) render high-performance read/write services to data-intensive applications. KV stores employ an SSTable-based Coarse-Grained Compaction (CGC) mechanism, which involves a huge amount of data that do not need to be updated, thereby causing high write amplification (WA) and long tail latency. To address this issue, we propose a Fine-Grained Compaction (FGC) mechanism anchored on a Log-Structured patched-Merge tree (LSpM-tree), a new data organization that averts rewriting irrelevant data to disk during compaction. A cluster, the basic unit in FGC, encloses several patches and a redirection table, where each patch has an array of KV regions. We devise three compaction modes powered by the LSpM-tree, and we implement a high-performance key-value store, named FGKV. Extensive experiments show that FGKV improves the random-write throughput by up to 121%, 36.8%, 38.6%, and 15.2% compared with LevelDB, RocksDB, LDC, and ALDC, respectively. FGKV lowers the WA of the alternative KV stores by up to 50%. FGKV boosts read performance by up to 122%, 51.4%, 96.6%, and 368%, respectively, and FGKV curbs the 99th percentile latency of LevelDB, RocksDB, LDC, and ALDC by up to 78.2%, 77.6%, 78.3%, and 73.1% under YCSB A, respectively. Moreover, FGKV is readily extended to other KV stores.
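The gist of fine-grained compaction, in toy form: only lower-level patches whose key range overlaps the incoming data are merged and rewritten, while disjoint patches are carried over untouched, which is what cuts write amplification. This sketch uses plain dicts keyed by integers and an invented structure; it is not FGKV's on-disk format.

```python
def fine_grained_compact(upper, lower_patches):
    """Merge an upper-level run into a lower level, rewriting only the
    patches whose key range overlaps the incoming keys."""
    lo, hi = min(upper), max(upper)
    merged, kept = {}, []
    for patch in lower_patches:
        if patch and min(patch) <= hi and lo <= max(patch):
            merged.update(patch)      # overlapping patch: will be rewritten
        else:
            kept.append(patch)        # disjoint patch: zero rewrite cost
    merged.update(upper)              # the newer (upper) level wins on key collisions
    return kept + [dict(sorted(merged.items()))]
```

A coarse-grained compaction would instead rewrite every patch at the lower level; here, patches whose ranges do not intersect the incoming keys never touch the disk again.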
Citations: 0
AoI-Aware Partial Computation Offloading in IIoT With Edge Computing: A Deep Reinforcement Learning Based Approach
IF 6.5, CAS Tier 2, Q1 Computer Science. Pub Date: 2023-10-30. DOI: 10.1109/TCC.2023.3328614
Kai Peng;Peiyun Xiao;Shangguang Wang;Victor C. M. Leung
With the rapid growth of the Industrial Internet of Things (IIoT), a large amount of industrial data needs to be processed promptly. Edge computing-based computation offloading can effectively assist industrial devices in processing these data and reduce the overall time overhead. However, there are dependencies among tasks, and some tasks have strict latency requirements, so completing computation offloading while considering these factors poses significant challenges. In this article, we design a computation offloading method based on a directed acyclic graph task model by modeling task dependencies. In addition to the traditional optimization objectives of computation offloading problems (e.g., latency, energy consumption), we propose an age of information (AoI) model to reflect the freshness of information and transform the task offloading problem into an optimization problem over latency, energy consumption, and AoI. To solve this problem, we propose a method based on an improved dueling double deep Q-network computation offloading algorithm, named ID3CO. Specifically, it combines the advantages of the deep Q-network, double deep Q-network, and dueling deep Q-network algorithms while further utilizing deep residual neural networks to improve convergence. Extensive simulations demonstrate that ID3CO outperforms the existing baselines in terms of performance.
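The age-of-information (AoI) metric referred to above has a standard discrete-time form: age grows by one each slot and, on a delivery, resets to the elapsed time since the delivered sample was generated. A minimal sketch (the slot numbering and the deliveries map are invented for illustration, not the paper's system model):

```python
def aoi_trace(deliveries, horizon):
    """deliveries maps a delivery slot -> the slot its sample was generated.
    Returns the AoI value at the end of each slot."""
    age, trace = 0, []
    for t in range(horizon):
        if t in deliveries:
            age = t - deliveries[t]   # reset to the age of the fresh sample
        else:
            age += 1                  # no update: the information keeps aging
        trace.append(age)
    return trace
```

For example, with samples generated at slots 2 and 4 and delivered at slots 3 and 7, the age saw-tooths upward between deliveries and drops at each one.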
Citations: 0
Hardware-Assisted Static and Runtime Attestation for Cloud Deployments
IF 6.5, CAS Tier 2, Q1 Computer Science. Pub Date: 2023-10-24. DOI: 10.1109/TCC.2023.3327290
Michał Kucab;Piotr Boryło;Piotr Chołda
This article addresses the problems of static and runtime integrity for cloud deployments. Existing remote attestation solutions for cloud infrastructure do not cover static and dynamic attestation as a whole: they evaluate either the static or the dynamic part, not considering the rest. We address this gap by proposing a runtime attestation process based on hardware CET technology, as an enhancement to the static attestation enabled by SGX. We show how hardware-assisted protection against control-flow-related attacks can enhance virtual deployment security with minimal tradeoff. Our solution does not significantly increase the processing time. Moreover, processing time can even be reduced when this mechanism is used as the default protection method against control-flow-related attacks.
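To make the static/runtime distinction concrete: static attestation measures what was loaded (e.g., a hash chain over code pages, in the spirit of an SGX enclave measurement), while CET's shadow stack catches runtime control-flow tampering by requiring every return to match the recorded call. The code below is a toy Python model of both ideas, not an interface to SGX or CET hardware.

```python
import hashlib

def static_measurement(pages):
    """Hash-chain the loaded pages, like a simplified enclave measurement:
    any change in page content or order yields a different measurement."""
    h = b"\x00" * 32
    for page in pages:
        h = hashlib.sha256(h + page).digest()
    return h.hex()

class ShadowStack:
    """Toy model of CET's shadow stack: each return address must match
    the one recorded at call time, or a control-flow violation is raised."""
    def __init__(self):
        self._stack = []

    def call(self, ret_addr):
        self._stack.append(ret_addr)

    def ret(self, ret_addr):
        if not self._stack or self._stack.pop() != ret_addr:
            raise RuntimeError("control-flow violation: return address mismatch")
```

A verifier would compare the static measurement against a known-good value at deployment time, while the shadow-stack check runs continuously during execution.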
Citations: 0
Verifiable Cloud-Based Data Publish-Subscribe Service With Hidden Access Policy
IF 6.5, CAS Tier 2, Q1 Computer Science. Pub Date: 2023-10-23. DOI: 10.1109/TCC.2023.3326339
Chunlin Li;Jinguo Li;Kai Zhang;Yan Yan;Jianting Ning
Cloud-based publish-subscribe (pub-sub) services provide a decoupling method for publishers and subscribers to effectively exchange targeted information and massive data on the cloud platform. Data publishers implement fine-grained access control to set subscription privileges for outsourced data through an access policy. However, in the context of semi-honest cloud platforms, the publisher's access policy may be collected, and incomplete or incorrect subscription results may be returned (e.g., to save communication costs). Existing solutions pay little attention to protecting the data publisher's access policy and cannot provide efficient verification of local results. In this article, we propose a verifiable multi-keyword data publish-subscribe scheme with a hidden access policy (VMP/S). Specifically, VMP/S combines attribute-based keyword search and data aggregation technology to achieve secure fine-grained access control, thereby protecting the privacy of the access policy. Additionally, the scheme provides an effective method for verifying local results by using equal-length verification information to confirm the correctness of the returned subscription data. Furthermore, we introduce a novel verification method for access control to enhance subscription performance efficiency. We demonstrate that VMP/S achieves IND-CKA security and ensures the privacy of the access policy through a comprehensive security analysis. Through experimental simulations, we confirm its effectiveness.
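The "equal-length verification information" idea can be illustrated with an order-insensitive aggregate MAC: the tag has a fixed length regardless of the result-set size, and a dropped or altered record changes it with overwhelming probability. This is a generic sketch (XOR-aggregated HMACs over an invented record set), not the VMP/S construction, and it deliberately ignores known caveats of XOR aggregation such as duplicate records.

```python
import hashlib
import hmac

def aggregate_tag(key, records):
    """Fixed-length (32-byte) tag over a result set: XOR of per-record HMACs,
    so the tag size is independent of how many records were returned."""
    agg = bytes(32)
    for r in records:
        mac = hmac.new(key, r.encode(), hashlib.sha256).digest()
        agg = bytes(a ^ b for a, b in zip(agg, mac))
    return agg

def verify(key, records, received_tag):
    """Client-side check that the returned result set matches the tag."""
    return hmac.compare_digest(aggregate_tag(key, records), received_tag)
```

Because the aggregation is XOR-based, verification is order-insensitive, but silently dropping a record (the "incomplete results" attack the abstract mentions) breaks the check.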
Citations: 0
Learning Scheduling Policies for Co-Located Workloads in Cloud Datacenters
IF 6.5, CAS Tier 2, Q1 Computer Science. Pub Date: 2023-09-26. DOI: 10.1109/TCC.2023.3319383
Jialun Li;Danyang Xiao;Jieqian Yao;Yujie Long;Weigang Wu
Co-location, which deploys long-running applications and batch-processing applications in the same computing cluster, has become a promising way to improve resource utilization in large cloud datacenters. However, co-location brings huge challenges to task scheduling because different types of workloads may affect each other. Existing work on task scheduling rarely focuses on the co-location scenario. This article presents Co-ScheRRL, a scheduling algorithm designed specifically for co-located workloads. Co-ScheRRL consists of two major mechanisms: i) a self-attention encoding mechanism that encodes and represents states of the computing cluster as a set of embedding feature vectors; ii) a deep reinforcement learning (DRL) relational reasoning mechanism that calculates and compares different scheduling actions under different co-located workload patterns via DRL feedback reward signals based on these feature vectors. These two mechanisms can handle the complex, dynamically varying behaviors of co-located workloads, enabling Co-ScheRRL to construct high-quality scheduling policies. Trace-driven simulation demonstrates that Co-ScheRRL outperforms existing scheduling algorithms by more than 38.4% in makespan and more than 166.7% in throughput.
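The self-attention encoding step can be sketched with plain scaled dot-product attention over a set of embeddings: each output vector is a similarity-weighted mixture of the value vectors. This is a pure-Python toy with made-up 2-dimensional embeddings, not Co-ScheRRL's actual encoder.

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention: out[i] = softmax(q_i . K / sqrt(d)) @ V."""
    d = len(keys[0])
    out = []
    for q in queries:
        # similarity scores between this query and every key
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        # numerically stable softmax over the scores
        m = max(scores)
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        w = [x / z for x in w]
        # attention-weighted mixture of the value vectors
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out
```

With identical (here all-zero) keys, the softmax weights are uniform and the output is simply the mean of the value vectors, which is an easy sanity check.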
Citations: 0
A Complex Behavioral Interaction Analysis Method for Microservice Systems With Bounded Buffers
IF 6.5, CAS Tier 2, Q1 Computer Science. Pub Date: 2023-09-25. DOI: 10.1109/TCC.2023.3319038
Shuo Wang;Zhijun Ding;Ru Yang;Changjun Jiang
The interaction process in microservice architectures is highly complex, making it very challenging to ensure correct behavioral interactions. The few related works focus only on verifying interaction soundness for a specific buffer k-value, without considering how to obtain a suitable buffer k-value. To solve these problems, this article proposes a method to find the maximum k-value for microservice systems with bounded buffers, which maximizes the analysis of valuable asynchronous interaction paths while avoiding wasting memory resources. The specific contributions are: first, establishing the relationship between the buffer k-value, asynchronous interaction paths, and interaction soundness, with proof; second, proposing an iterative detection algorithm based on additionally added paths, with a correctness proof, leading to the conclusion that finding the maximum k-value is a decidable problem; finally, validating the proposed methods on ten classical microservice systems and analyzing their effectiveness and performance. The experimental results show that, compared with existing methods, the proposed method can effectively find the maximum k-value of bounded buffers and thus ensure correct behavioral interactions in microservice systems.
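For a single FIFO channel, the smallest buffer bound k that admits a given send/receive interleaving is just the peak channel occupancy along the trace, so per-trace analysis reduces to tracking that peak. This is a toy sketch with an invented event encoding, not the paper's iterative detection over the full interaction model:

```python
def min_buffer_bound(events):
    """events: a sequence of 'send' / 'recv' actions on one channel.
    Returns the smallest buffer size k under which the trace is feasible."""
    depth = peak = 0
    for e in events:
        if e == "send":
            depth += 1               # one more message in flight
            peak = max(peak, depth)  # track the high-water mark
        else:
            if depth == 0:
                raise ValueError("receive on an empty buffer: trace infeasible")
            depth -= 1
    return peak
```

Running the trace send-send-recv-send-recv-recv, the buffer holds at most two messages at once, so k = 2 suffices; any smaller bound would block the second send.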
Citations: 0
MARS: A DRL-Based Multi-Task Resource Scheduling Framework for UAV With IRS-Assisted Mobile Edge Computing System
IF 6.5 CAS Tier 2 (Computer Science) Q1 Computer Science Pub Date : 2023-09-19 DOI: 10.1109/TCC.2023.3307582
Feibo Jiang;Yubo Peng;Kezhi Wang;Li Dong;Kun Yang
This article studies a dynamic Mobile Edge Computing (MEC) system assisted by Unmanned Aerial Vehicles (UAVs) and Intelligent Reflective Surfaces (IRSs). We propose a scalable resource scheduling algorithm to minimize the energy consumption of all user equipments (UEs) and UAVs in an MEC system with a variable number of UAVs, and a Multi-tAsk Resource Scheduling (MARS) framework based on Deep Reinforcement Learning (DRL) to solve the problem. First, we present a novel Advantage Actor-Critic (A2C) structure with a state-value critic and an entropy-enhanced actor to reduce variance and improve the policy search of DRL. Then, we present a multi-head agent with three heads: a classification head that makes offloading decisions, a regression head that allocates computational resources, and a critic head that estimates the state value of the selected action. Next, we introduce a multi-task controller that adapts the agent to the varying number of UAVs by loading or unloading a part of the agent's weights. Finally, a Light Wolf Search (LWS) is introduced as an action-refinement step to enhance exploration in the dynamic action space. The numerical results demonstrate the feasibility and efficiency of the MARS framework.
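A minimal sketch of the multi-head idea — one shared trunk feeding a classification head, a regression head, and a critic head — in plain Python (layer sizes, initialization, and all names are hypothetical; the paper's actual network is not specified here):

```python
import math
import random

random.seed(0)  # deterministic toy initialization


def layer(dim_in, dim_out):
    """Random weight matrix and zero bias for one linear layer."""
    w = [[random.gauss(0, 0.1) for _ in range(dim_out)] for _ in range(dim_in)]
    return w, [0.0] * dim_out


def affine(x, wb):
    w, b = wb
    return [sum(xi * w[i][j] for i, xi in enumerate(x)) + b[j]
            for j in range(len(b))]


class MultiHeadAgent:
    def __init__(self, state_dim, n_targets, hidden=16):
        self.trunk = layer(state_dim, hidden)
        self.cls = layer(hidden, n_targets)  # offloading-decision head
        self.reg = layer(hidden, 1)          # resource-allocation head
        self.critic = layer(hidden, 1)       # state-value head

    def forward(self, state):
        h = [math.tanh(z) for z in affine(state, self.trunk)]
        logits = affine(h, self.cls)
        m = max(logits)
        exps = [math.exp(z - m) for z in logits]
        probs = [e / sum(exps) for e in exps]                     # softmax over offload targets
        alloc = 1.0 / (1.0 + math.exp(-affine(h, self.reg)[0]))   # compute fraction in (0, 1)
        value = affine(h, self.critic)[0]                         # V(s) estimate
        return probs, alloc, value
```

The softmax gives a distribution over offloading targets, the sigmoid-bounded regression output is a compute fraction in (0, 1), and the critic scalar estimates V(s) for the A2C advantage.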
Citations: 0
Multi-Objective Cloud Task Scheduling Optimization Based on Evolutionary Multi-Factor Algorithm
IF 6.5 CAS Tier 2 (Computer Science) Q1 Computer Science Pub Date : 2023-09-13 DOI: 10.1109/TCC.2023.3315014
Zhihua Cui;Tianhao Zhao;Linjie Wu;A. K. Qin;Jianwei Li
How cloud platforms schedule resources based on the demands of the tasks submitted by users is critical to the cloud provider's interests and customer satisfaction. In this paper, we propose a multi-objective cloud task scheduling algorithm based on an evolutionary multi-factorial optimization algorithm. First, we choose execution time, execution cost, and virtual-machine load balancing as the objective functions to construct a multi-objective cloud task scheduling model. Second, the multi-factor optimization (MFO) technique is applied to the task scheduling problem, and the task scheduling characteristics are combined with the multi-objective multi-factor optimization (MO-MFO) algorithm to construct an assisted optimization task. Finally, a dynamic adaptive transfer strategy is designed to determine the similarity between tasks according to the degree of overlap of the MFO problem and to control the intensity of knowledge transfer. Simulation experiments on the cloud task test dataset show that our method significantly improves scheduling efficiency compared with other evolutionary algorithms (EAs): the scheduling method simplifies the decomposition of complex problems through a multi-factor approach while using knowledge transfer to share the convergence direction among sub-populations, so it can find the optimal solution interval more quickly and achieve the best results on all objective functions.
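The three objectives can be made concrete with a small evaluator for one candidate assignment (a toy formulation assuming serial execution per VM and cost proportional to busy time; the paper's exact equations may differ):

```python
def objectives(assign, task_len, vm_speed, vm_cost):
    """Evaluate one candidate schedule (task i runs on VM assign[i])
    under a toy version of the model's three objectives."""
    vm_time = [0.0] * len(vm_speed)
    cost = 0.0
    for i, v in enumerate(assign):
        t = task_len[i] / vm_speed[v]  # execution time of task i on VM v
        vm_time[v] += t                # tasks on the same VM run serially
        cost += t * vm_cost[v]         # pay per unit of busy time
    makespan = max(vm_time)            # objective 1: execution time
    mean = sum(vm_time) / len(vm_time)
    imbalance = (sum((x - mean) ** 2 for x in vm_time) / len(vm_time)) ** 0.5
    return makespan, cost, imbalance   # objective 3: load-balancing spread
```

A multi-objective EA would minimize this tuple under Pareto dominance rather than collapsing it into a weighted sum.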
Citations: 0
Journal
IEEE Transactions on Cloud Computing