
Latest Publications in IEEE Transactions on Cloud Computing

Achieving Privacy-Preserving Online Multi-Layer Perceptron Model in Smart Grid
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-03-13 · DOI: 10.1109/TCC.2024.3399771
Chunqiang Hu;Huijun Zhuang;Jiajun Chen;Pengfei Hu;Tao Xiang;Jiguo Yu
With the development of Big Data technology, the power industry has also entered the data-driven intelligence era. Cloud computing-based smart grids give the power industry stronger capabilities in data analytics. Electricity load forecasting in the cloud helps smart grids allocate resources appropriately. However, users' privacy is easily compromised in the cloud-based load forecasting process. The electricity usage data collected by the system may contain sensitive information about the users, which could lead to serious privacy leakage. To address these issues, we propose a novel privacy-preserving cloud-aided load forecasting scheme for the cloud computing-based smart grid. It contains a secure online training algorithm and an efficient real-time forecasting algorithm. Moreover, as a two-party interactive security scheme, it is well suited to real-world applications. Before the data is sent to the cloud server, the control center of the smart grid encrypts it using homomorphic encryption. Throughout model training and forecasting, the data remains encrypted at all times, avoiding the risk of data privacy breaches. Finally, security and experimental analyses show that our scheme effectively avoids privacy leakage while reducing resource consumption.
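The core idea above, where the control center encrypts its readings and the cloud computes on ciphertexts, can be illustrated with additively homomorphic encryption. The sketch below is a minimal illustration, assuming the python-paillier (`phe`) package and a plain linear forecaster with public weights; the paper's actual scheme trains and serves an MLP under encryption, which this toy does not attempt.

```python
# Minimal sketch: the cloud evaluates a linear forecast on Paillier ciphertexts only.
# Assumes the `phe` (python-paillier) package; weights and readings are made-up numbers.
from phe import paillier

# Control center of the smart grid: key generation and encryption of load readings.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)
recent_loads = [512.3, 498.7, 530.1, 551.9]                 # kW, last four intervals
encrypted_loads = [public_key.encrypt(x) for x in recent_loads]

# Cloud server: forms a weighted sum of ciphertexts with plaintext weights.
weights = [0.1, 0.2, 0.3, 0.4]
encrypted_forecast = weights[0] * encrypted_loads[0]
for w, c in zip(weights[1:], encrypted_loads[1:]):
    encrypted_forecast = encrypted_forecast + w * c

# Control center: decrypts the forecast; the cloud never sees raw usage data.
print(f"forecast for next interval: {private_key.decrypt(encrypted_forecast):.1f} kW")
```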
Citations: 0
Efficient User-Centric Privacy-Friendly and Flexible Wearable Data Aggregation and Sharing
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-03-12 · DOI: 10.1109/tcc.2024.3375801
Khlood Jastaniah, Ning Zhang, Mustafa A. Mustafa
{"title":"Efficient User-Centric Privacy-Friendly and Flexible Wearable Data Aggregation and Sharing","authors":"Khlood Jastaniah, Ning Zhang, Mustafa A. Mustafa","doi":"10.1109/tcc.2024.3375801","DOIUrl":"https://doi.org/10.1109/tcc.2024.3375801","url":null,"abstract":"","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":null,"pages":null},"PeriodicalIF":6.5,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140115315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Evaluation of Application Layer DDoS Attack Effect in Cloud Native Applications
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-03-11 · DOI: 10.1109/TCC.2024.3374798
Kewei Wang;Changzhen Hu;Chun Shan
Cloud native applications are especially susceptible to application layer DDoS attacks. This is attributable to internal service calls, through which microservices cooperate and communicate with each other, amplifying the effect of an application layer DDoS attack. Since different services have varying degrees of sensitivity to an attack, a sophisticated attacker can exploit especially expensive API calls to seriously damage the availability of services and applications with ease. To better analyze the severity of, and mitigate, application layer DDoS attacks in cloud native applications, we propose a novel method to evaluate the effect of an application layer DDoS attack that is able to quantitatively characterize the amplifying effect introduced by the complex structure of the application system. We first present the descriptive model of the scenario. Then, Riemannian manifolds are constructed as the state spaces of the attack scenarios, in which attacks are described as homeomorphisms. Finally, we apply differential geometry principles to quantitatively calculate the attack effect, which is derived from the action of an attack and the movement it produces in the state spaces. The proposed method is validated in various application scenarios. We show that our approach provides accurate evaluation results and outperforms existing solutions.
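The paper's differential-geometry formulation is not reproduced here, but the amplification effect it quantifies can be sketched with a toy service call graph: one external request fans out into many internal microservice calls, so the per-API cost an attacker can exploit varies widely. The topology and call multiplicities below are illustrative assumptions.

```python
# Illustrative only: count how many internal calls one external request triggers,
# i.e., the fan-out amplification that makes some APIs far more expensive to attack.

# service -> list of (downstream service, calls issued per incoming request); assumed topology
call_graph = {
    "gateway":   [("catalog", 1), ("checkout", 1)],
    "checkout":  [("payment", 1), ("inventory", 2), ("recommend", 3)],
    "catalog":   [("recommend", 1)],
    "payment": [], "inventory": [], "recommend": [],
}

def internal_calls(service, graph):
    """Total internal calls triggered by one request arriving at `service` (acyclic graph)."""
    total = 0
    for downstream, multiplicity in graph[service]:
        total += multiplicity * (1 + internal_calls(downstream, graph))
    return total

for api in ("catalog", "checkout", "gateway"):
    print(f"one request to {api!r} triggers {internal_calls(api, call_graph)} internal calls")
```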
Citations: 0
BPS: Batching, Pipelining, Surgeon of Continuous Deep Inference on Collaborative Edge Intelligence
IF 5.3 · CAS Tier 2 (Computer Science) · Q1 Computer Science, Information Systems · Pub Date: 2024-03-10 · DOI: 10.1109/TCC.2024.3399616
Xueyu Hou;Yongjie Guan;Nakjung Choi;Tao Han
Users at the edge generate deep inference requests continuously over time. Mobile/edge devices located near users can undertake the inference computation locally for users, e.g., the embedded edge device on an autonomous vehicle. Due to the limited computing resources on a single mobile/edge device, it may be challenging to process users' inference requests with high throughput. An attractive solution is to (partially) offload the computation to a remote device in the network. In this paper, we examine the existing inference execution solutions across local and remote devices and propose an adaptive scheduler, the BPS scheduler, for continuous deep inference on collaborative edge intelligence. By leveraging data-parallel, neurosurgeon, and reinforcement-learning techniques, BPS can boost the overall inference performance by up to 8.2× over the baseline schedulers. A lightweight compressor, FF, specialized in compressing intermediate output data for neurosurgeon, is proposed and integrated into the BPS scheduler. FF exploits the operating characteristics of convolutional layers and utilizes efficient approximation algorithms. Compared to existing compression methods, FF achieves up to 86.9% lower accuracy loss and up to 83.6% lower latency overhead.
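As a rough illustration of the neurosurgeon-style partitioning that BPS builds on, the sketch below picks the split layer that minimizes on-device compute plus uplink transfer of the intermediate tensor plus edge-server compute. The per-layer profile, link rate, and input size are hypothetical, and BPS's batching, pipelining, reinforcement learning, and FF compression are not modeled.

```python
# Illustrative only: choose where to split a DNN between device and edge server
# by minimizing device compute + uplink transfer of intermediate data + server compute.

# (layer name, device ms, server ms, output size in KB) -- hypothetical profile
layers = [
    ("conv1", 12.0, 1.5, 800),
    ("conv2", 18.0, 2.0, 400),
    ("conv3", 22.0, 2.5, 200),
    ("fc1",    6.0, 0.8,  16),
    ("fc2",    2.0, 0.3,   4),
]
uplink_kb_per_ms = 1.25   # roughly a 10 Mbps uplink
input_kb = 600            # raw input size if everything is offloaded

def split_latency(split):
    """End-to-end latency if layers[:split] run on-device and layers[split:] on the server."""
    device = sum(l[1] for l in layers[:split])
    server = sum(l[2] for l in layers[split:])
    if split == len(layers):          # fully local, nothing to transmit
        transfer = 0.0
    else:
        transfer_kb = input_kb if split == 0 else layers[split - 1][3]
        transfer = transfer_kb / uplink_kb_per_ms
    return device + transfer + server

for s in range(len(layers) + 1):
    print(f"split after {s} layers: {split_latency(s):6.1f} ms")
best = min(range(len(layers) + 1), key=split_latency)
print(f"best split point: after {best} layers")
```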
Citations: 0
An Open API Architecture to Discover the Trustworthy Explanation of Cloud AI Services
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-03-10 · DOI: 10.1109/TCC.2024.3398609
Zerui Wang;Yan Liu;Jun Huang
This article presents the design of an open-API-based explainable AI (XAI) service to provide feature contribution explanations for cloud AI services. Cloud AI services are widely used to develop domain-specific applications with precise learning metrics. However, the underlying cloud AI services remain opaque on how the model produces the prediction. We argue that XAI operations are accessible as open APIs to enable the consolidation of the XAI operations into the cloud AI services assessment. We propose a design using a microservice architecture that offers feature contribution explanations for cloud AI services without unfolding the network structure of the cloud models. We can also utilize this architecture to evaluate the model performance and XAI consistency metrics showing cloud AI services’ trustworthiness. We collect provenance data from operational pipelines to enable reproducibility within the XAI service. Furthermore, we present the discovery scenarios for the experimental tests regarding model performance and XAI consistency metrics for the leading cloud vision AI services. The results confirm that the architecture, based on open APIs, is cloud-agnostic. Additionally, data augmentations result in measurable improvements in XAI consistency metrics for cloud AI services.
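To illustrate what a feature-contribution explanation over a black-box predict API can look like (the setting the XAI service targets), here is a minimal occlusion-style sketch. The stand-in `cloud_predict` function, baseline choice, and attribution rule are assumptions for illustration, not the article's implementation.

```python
# Illustrative only: occlusion-style feature attributions for a model reachable
# solely through a predict call, with no access to its internal network structure.
import numpy as np

def cloud_predict(batch: np.ndarray) -> np.ndarray:
    """Stand-in for a cloud AI service call; returns one score per row."""
    weights = np.array([0.5, -1.2, 2.0, 0.1])
    return batch @ weights

def feature_contributions(predict, x: np.ndarray, baseline: np.ndarray) -> np.ndarray:
    """Score drop when each feature is reset to a baseline value, one at a time."""
    base_score = predict(x[None, :])[0]
    contributions = np.zeros(x.size)
    for i in range(x.size):
        occluded = x.copy()
        occluded[i] = baseline[i]
        contributions[i] = base_score - predict(occluded[None, :])[0]
    return contributions

x = np.array([1.0, 0.5, 2.0, 3.0])
print(feature_contributions(cloud_predict, x, baseline=np.zeros_like(x)))
```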
Citations: 0
Lightweight and Privacy-Preserving Dual Incentives for Mobile Crowdsensing
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-03-05 · DOI: 10.1109/TCC.2024.3372598
Lin Wan;Zhiquan Liu;Yong Ma;Yudan Cheng;Yongdong Wu;Runchuan Li;Jianfeng Ma
Incentives play an important role in mobile crowdsensing (MCS), as they impel mobile users to participate in sensing tasks and provide high-quality sensing data. However, considering the privacy (including identity privacy, sensing data privacy, and reputation value privacy) and practicality (including reliability, quality awareness, and efficiency) issues that arise in practice, designing an effective incentive scheme for MCS applications is a challenge. Existing studies either fail to provide adequate privacy-preserving capabilities or have low practicality. To address these issues, we propose a scheme for MCS called BRRV, which relies on two rounds of range reliability assessment to guarantee the reliability of data while achieving privacy preservation. In addition, we also present a lightweight scheme for MCS called LRRV, which relies on a single round of range reliability assessment to guarantee the reliability of data while achieving lightweight operation and privacy preservation. Moreover, to fairly stimulate participants, constrain participants' malicious behavior, and improve the probability of obtaining high-quality data, we design a quality-aware, reputation-based reward and penalty strategy to achieve dual incentives (including monetary incentives and reputation incentives) for participants. Furthermore, comprehensive theoretical analysis and experimental evaluation demonstrate that our proposed schemes are significantly superior to the existing schemes in several aspects.
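A minimal sketch of the dual-incentive idea, monetary rewards weighted by quality and reputation plus a reputation penalty for low-quality submissions, is given below. The thresholds, update rules, and numbers are illustrative assumptions; the actual BRRV/LRRV schemes perform this settlement under privacy protection with range reliability assessment.

```python
# Illustrative only: split a task's budget by quality * reputation and update
# reputations, penalizing participants whose data falls below a quality floor.

def settle_task(budget, participants, quality_floor=0.5):
    """participants: dict name -> {'quality': 0..1, 'reputation': 0..1}."""
    weights = {
        name: p["quality"] * p["reputation"]
        for name, p in participants.items() if p["quality"] >= quality_floor
    }
    total = sum(weights.values()) or 1.0
    rewards = {name: budget * w / total for name, w in weights.items()}
    for name, p in participants.items():
        if p["quality"] >= quality_floor:
            p["reputation"] = min(1.0, p["reputation"] + 0.05)   # reputation incentive
        else:
            p["reputation"] = max(0.0, p["reputation"] - 0.20)   # penalty for bad data
            rewards[name] = 0.0
    return rewards

participants = {
    "alice":   {"quality": 0.9, "reputation": 0.8},
    "bob":     {"quality": 0.7, "reputation": 0.6},
    "mallory": {"quality": 0.2, "reputation": 0.7},
}
print(settle_task(budget=100.0, participants=participants))
```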
Citations: 0
Context-Aware Consensus Algorithm for Blockchain-Empowered Federated Learning
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-03-05 · DOI: 10.1109/TCC.2024.3372814
Yao Zhao;Youyang Qu;Yong Xiang;Feifei Chen;Longxiang Gao
Supported by cloud computing, Federated Learning (FL) has experienced rapid advancement, as a promising technique to motivate clients to collaboratively train models without sharing local data. To improve the security and fairness of FL implementation, numerous Blockchain-empowered Federated Learning (BFL) frameworks have emerged accordingly. Among them, consensus algorithms play a pivotal role in determining the scalability, security, and consistency of BFL systems. Existing consensus solutions to block producer selection and reward allocation either focus on well-resourced scenarios or accommodate BFL based on clients’ contributions to model training. However, these approaches limit consensus efficiency and undermine reward fairness, due to involving intricate consensus processes, disregarding clients’ contributions during blockchain consensus, and failing to address lazy client problems (malicious clients plagiarizing local model updates from others to reap rewards). Given the aforementioned challenges, we make the first attempt to design a joint solution for efficient consensus and fair reward allocation in heterogeneous BFL systems with lazy clients. Specifically, we introduce a generalizable BFL workflow that can address lazy client problems well. Based on it, the global contribution of BFL clients is decoupled into five dominant metrics, and the block producer selection problem is formulated as a reward-constraint contribution maximization problem. By addressing this problem, the optimal block producer that maximizes global contribution can be identified to orchestrate consensus processes, and rewards are distributed to clients in proportion to their respective global contributions. To achieve it, we develop a Context-aware Proof-of-Contribution consensus algorithm named CPoC to reach consensus and incentive simultaneously, followed by theoretical analysis of lazy client problems and privacy issues. Empirical results on widely-used datasets demonstrate the effectiveness of our design in improving consensus efficiency and maximizing global contribution.
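The block producer selection described above can be sketched as a small optimization: score each client by a weighted combination of contribution metrics, pick the producer whose proportional reward stays within a cap, and split rewards in proportion to contribution. The five metric names, weights, and the cap rule below are assumptions, not CPoC's exact formulation.

```python
# Illustrative only: reward-constrained, contribution-maximizing producer selection
# followed by proportional reward distribution.

METRIC_WEIGHTS = {"data_size": 0.20, "data_quality": 0.30, "compute": 0.20,
                  "timeliness": 0.15, "participation": 0.15}

def contribution(client):
    """Weighted global-contribution score from the (assumed) five metrics."""
    return sum(w * client[m] for m, w in METRIC_WEIGHTS.items())

def select_producer_and_rewards(clients, round_reward, max_claim):
    scores = {name: contribution(c) for name, c in clients.items()}
    total = sum(scores.values())
    rewards = {name: round_reward * s / total for name, s in scores.items()}
    # producer: highest contribution among clients whose reward respects the cap
    eligible = {n: s for n, s in scores.items() if rewards[n] <= max_claim}
    candidates = eligible if eligible else scores
    producer = max(candidates, key=candidates.get)
    return producer, rewards

clients = {
    "c1": {"data_size": 0.9, "data_quality": 0.8, "compute": 0.7, "timeliness": 0.9, "participation": 1.0},
    "c2": {"data_size": 0.5, "data_quality": 0.9, "compute": 0.6, "timeliness": 0.7, "participation": 0.8},
    "c3": {"data_size": 0.3, "data_quality": 0.4, "compute": 0.9, "timeliness": 0.5, "participation": 0.6},
}
producer, rewards = select_producer_and_rewards(clients, round_reward=10.0, max_claim=5.0)
print(producer, rewards)
```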
Citations: 0
Low-Carbon Operation of Data Centers With Joint Workload Sharing and Carbon Allowance Trading
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-03-03 · DOI: 10.1109/TCC.2024.3396476
Dongxiang Yan;Mo-Yuen Chow;Yue Chen
Data centers (DCs) have witnessed rapid growth due to the proliferation of cloud computing and internet services. The huge electricity demand and the associated carbon emissions of DCs have great impacts on power system reliability and environmental sustainability. This paper proposes a bilevel model for low-carbon operation of DCs via carbon-integrated locational marginal prices (CLMPs). In the upper level, the power system operator sequentially solves the optimal power flow and the carbon emission flow problems to determine the CLMPs. In the lower level, a joint workload sharing and carbon trading model for DCs is developed to minimize their overall operation cost while keeping each DC's carbon footprint within its carbon allowance. To solve the bilevel model and preserve the privacy of DCs, we propose a bisection-embedded iterative method. It can tackle the issue of oscillation, thereby ensuring convergence. In addition, a filtering mechanism-based distributed algorithm is proposed to solve the lower-level DC problem in a distributed manner with much reduced communication overhead. Case studies on both small-scale and large-scale systems demonstrate the effectiveness and benefits of the proposed method.
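As a toy illustration of the bisection-embedded iteration, the sketch below bisects on a single carbon price until the data centers' aggregate emissions fit a shared allowance, assuming each DC responds to the price with a simple linear reduction. The paper's actual method embeds bisection inside a bilevel solve with carbon-integrated LMPs and workload sharing, which is not reproduced here.

```python
# Illustrative only: bisection on a carbon price so aggregate DC emissions meet a cap.

def aggregate_emissions(carbon_price, dcs):
    """Total emissions after each DC scales back flexible load in response to the price."""
    return sum(max(base - sensitivity * carbon_price, 0.0) for base, sensitivity in dcs)

def find_carbon_price(dcs, allowance, lo=0.0, hi=1000.0, tol=1e-6):
    """Smallest price at which aggregate emissions fit the allowance (monotone, so bisection works)."""
    if aggregate_emissions(lo, dcs) <= allowance:
        return lo                         # cap already satisfied without a price signal
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if aggregate_emissions(mid, dcs) > allowance:
            lo = mid                      # price too low, emissions still exceed the cap
        else:
            hi = mid
        if hi - lo < tol:
            break
    return hi

dcs = [(120.0, 0.8), (90.0, 0.5), (150.0, 1.2)]   # (baseline tCO2, tCO2 reduced per $/t)
price = find_carbon_price(dcs, allowance=250.0)
print(f"carbon price: {price:.2f}, emissions: {aggregate_emissions(price, dcs):.1f}")
```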
Citations: 0
FLAIR: A Fast and Low-Redundancy Failure Recovery Framework for Inter Data Center Network
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-03-02 · DOI: 10.1109/TCC.2024.3393735
Yuchao Zhang;Haoqiang Huang;Ahmed M. Abdelmoniem;Gaoxiong Zeng;Chenyue Zheng;Xirong Que;Wendong Wang;Ke Xu
Due to the rapid development of 5G and IoT technologies, Inter-Datacenter (Inter-DC) networks are facing unprecedented pressure to replicate large volumes of geographically distributed user data in real time. Meanwhile, as Inter-DC networks grow in scale, link/node failures also become increasingly frequent, negatively affecting data transmission efficiency. Therefore, link failure recovery methods become of utmost importance. Many works have investigated fast failure recovery, yet none of them consider the deployment overhead of such recovery schemes. In this article, however, we find that the side effects of deploying recovery strategies and the future availability of the recovered transmissions are also crucial for fast recovery. We therefore propose a fast and low-redundancy failure recovery framework, FLAIR, which consists of a fast recovery strategy, FRAVaR, and a redundancy removal algorithm, ROSE. FRAVaR takes full account of deployment overhead by minimizing shuffle traffic. On top of it, ROSE regularly eliminates the accumulated rerouting redundancy by removing unnecessary routing updates. Experiment results on 4 realistic network topologies show that FLAIR reduces deployment overhead by up to 48.2% compared with the state-of-the-art solutions, shortens recovery time by up to 70.2%, and improves network utilization by up to 36%.
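The notion of shuffle traffic can be illustrated with a toy topology: when a link fails, only the flows crossing it are rerouted, and the traffic that has to move counts as shuffle. The graph, flow rates, and shortest-path backup rule below are assumptions; FRAVaR's overhead-aware path selection and ROSE's redundancy removal are not modeled.

```python
# Illustrative only: reroute the flows affected by a link failure and tally the
# traffic that must be moved onto backup paths ("shuffle traffic").
import networkx as nx

G = nx.Graph()
for u, v, w in [("A", "B", 1), ("B", "C", 1), ("A", "D", 2), ("D", "C", 2), ("B", "D", 1)]:
    G.add_edge(u, v, weight=w)

# flow -> (src, dst, rate in Gbps); primary paths computed on the healthy topology
flows = {"f1": ("A", "C", 40), "f2": ("A", "D", 10)}
primary = {f: nx.shortest_path(G, s, t, weight="weight") for f, (s, t, _) in flows.items()}

failed = ("B", "C")
G.remove_edge(*failed)

shuffle = 0
for f, (s, t, rate) in flows.items():
    path = primary[f]
    uses_failed = any({path[i], path[i + 1]} == set(failed) for i in range(len(path) - 1))
    if uses_failed:
        backup = nx.shortest_path(G, s, t, weight="weight")
        shuffle += rate                      # this flow's traffic is moved to a backup path
        print(f"{f}: {path} -> {backup}")
print(f"shuffle traffic: {shuffle} Gbps")
```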
Citations: 0
Trustless Collaborative Cloud Federation
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-03-01 · DOI: 10.1109/TCC.2024.3372370
Bishakh Chandra Ghosh;Sandip Chakraborty
Multi-cloud environments such as OnApp and Cloudflare have turned the cloud marketplace towards a new horizon where end-users can host applications transparently over different cloud service providers (CSPs) simultaneously, taking the best from each. Existing cloud federations are typically driven by a broker service which provides a trusted interface allowing the participant CSPs and end-users to coordinate. However, such a broker has the limitations of any centralized trusted authority, such as the risk of manipulation, bias, censorship, and a single point of failure. In this paper, we propose a decentralized trustless cloud federation architecture called CollabCloud which eliminates any central mediator while addressing the challenges introduced by Byzantine participants. CollabCloud utilizes blockchain, and introduces a novel interoperability protocol bridging a permissionless blockchain as an open interface for the end-users and a permissioned blockchain as a coordination platform for the CSPs. We have implemented CollabCloud with the Ethereum, Hyperledger Fabric, and Burrow platforms. Experiments with a proof-of-concept testbed emulating 3 CSPs show that CollabCloud can operate within an acceptable response latency for resource allocation, while scaling up to 64 parallel requests per second. Scalability analysis on the Mininet emulation platform indicates that the platform scales well, with minimal impact on response latency as the number of participating CSPs increases.
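A purely conceptual toy of the two-ledger flow sketched in the abstract is shown below: a user posts a request on an open ledger, a relay carries it to the CSPs' permissioned ledger where an allocation is agreed, and the outcome is written back. The classes, fields, and the "cheapest bid wins" rule are illustrative assumptions, not CollabCloud's protocol.

```python
# Illustrative only: two in-memory ledgers standing in for the permissionless
# (user-facing) and permissioned (CSP coordination) chains, linked by a relay step.

class Ledger:
    def __init__(self, name):
        self.name, self.entries = name, []
    def append(self, entry):
        self.entries.append(entry)
        return len(self.entries) - 1        # position acts as a stand-in for a tx id

open_chain = Ledger("permissionless")       # open interface for end-users
csp_chain = Ledger("permissioned")          # coordination platform for CSPs

# 1. End-user posts a resource request on the open chain.
req_id = open_chain.append({"type": "request", "vcpus": 8, "mem_gb": 32})

# 2. Relay forwards it; CSPs record bids and agree on an allocation (cheapest bid here).
csp_chain.append({"type": "request", "ref": req_id})
bids = [("csp-A", 0.42), ("csp-B", 0.39), ("csp-C", 0.47)]   # $/hour
winner = min(bids, key=lambda b: b[1])
alloc_id = csp_chain.append({"type": "allocation", "ref": req_id, "csp": winner[0]})

# 3. Relay writes the outcome back so the user can read it on the open chain.
open_chain.append({"type": "result", "ref": req_id, "csp": winner[0], "proof": alloc_id})
print(open_chain.entries[-1])
```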
Citations: 0