Pub Date: 2025-04-15 | DOI: 10.1109/TCC.2025.3561313
Sen Hu;Shang Ci;Donghai Guan;Çetin Kaya Koç
Cloud computing offers inexpensive and scalable solutions for data processing, but privacy concerns often hinder the outsourcing of sensitive information. Homomorphic encryption provides a promising approach for secure computation over encrypted data. However, existing models often rely on restrictive assumptions, such as semi-honest adversaries and inaccessible public data. To address these limitations, we introduce the Secure Outsourcing Computation Toolkit (SOCT), a novel framework based on the threshold ElGamal cryptosystem. The toolkit employs a dual-server decryption architecture using a (2,2) threshold additively homomorphic ElGamal (TAHEG) algorithm. This architecture ensures that ciphertexts can be decrypted only with the cooperation of both servers, mitigating the risk of data breaches. The TAHEG algorithm requires a secret key as input for every decryption operation, preventing unauthorized access to plaintext data. Moreover, the key generation process does not burden users with generating or distributing partial secret keys. We provide rigorous security proofs for our threshold ElGamal cryptosystem and the associated secure computation functions. Experimental results demonstrate that SOCT achieves significant efficiency gains over existing toolkits, making it a practical choice for privacy-preserving data outsourcing.
Title: SOCT: Secure Outsourcing Computation Toolkit Using Threshold ElGamal Algorithm
Published in: IEEE Transactions on Cloud Computing, vol. 13, no. 2, pp. 711-720
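The (2,2) threshold decryption idea can be illustrated with a toy additively homomorphic ("exponential") ElGamal in Python. This is a minimal sketch with insecure demo parameters, not the paper's TAHEG construction, whose key generation and decryption protocols differ; it only shows why both servers' key shares are needed to decrypt and why ciphertexts add homomorphically.

```python
import random

# Toy sketch of (2,2) threshold additively homomorphic ElGamal.
# Insecure demo parameters: safe prime p = 2q + 1 and g generating the
# order-q subgroup. NOT the paper's TAHEG algorithm.
p, q, g = 23, 11, 4

def keygen():
    # Each server holds one additive share of the secret key; neither
    # share alone suffices to decrypt.
    x1, x2 = random.randrange(1, q), random.randrange(1, q)
    h = pow(g, x1 + x2, p)                 # joint public key
    return (x1, x2), h

def encrypt(h, m):
    # Encode m in the exponent so ciphertexts are additively homomorphic.
    r = random.randrange(1, q)
    return pow(g, r, p), pow(g, m, p) * pow(h, r, p) % p

def add(a, b):
    # Componentwise product encrypts the sum of the plaintexts.
    return a[0] * b[0] % p, a[1] * b[1] % p

def partial_decrypt(x_i, ct):
    return pow(ct[0], x_i, p)              # server i's share c1^{x_i}

def combine(ct, d1, d2):
    g_m = ct[1] * pow(d1 * d2, -1, p) % p
    # Recover the small exponent by brute force (fine for tiny m).
    return next(m for m in range(q) if pow(g, m, p) == g_m)

(x1, x2), h = keygen()
ct = add(encrypt(h, 3), encrypt(h, 4))
print(combine(ct, partial_decrypt(x1, ct), partial_decrypt(x2, ct)))  # 7
```

Because decoding requires a discrete logarithm, additive ElGamal variants like this are practical only for small message spaces, which is why such toolkits typically restrict plaintext ranges.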
Pub Date: 2025-04-09 | DOI: 10.1109/TCC.2025.3559346
Feng Zhang;Chenyang Zhang;Jiawei Guan;Qiangjun Zhou;Kuangyu Chen;Xiao Zhang;Bingsheng He;Jidong Zhai;Xiaoyong Du
Edge computing has gained widespread attention in cloud computing due to the increasing demands of AIoT applications and the evolution of edge architectures. One prevalent application in this domain is neural network inference at the edge. This article presents an in-depth exploration of inference on integrated edge devices and introduces EdgeNN, a groundbreaking inference solution specifically designed for CPU-GPU integrated edge devices. EdgeNN offers three key innovations. First, EdgeNN adaptively employs zero-copy optimization by harnessing unified physical memory. Second, EdgeNN introduces an innovative approach to CPU-GPU hybrid execution tailored for inference tasks. This technique enables concurrent CPU and GPU operation, effectively leveraging edge platforms’ computational capabilities. Third, EdgeNN adopts a finely tuned adaptive inference tuning technique that analyzes complex inference structures. It divides computations into sub-tasks, intelligently assigning them to the two processors for better performance. Experimental results demonstrate EdgeNN's superiority across six popular neural network inference workloads. EdgeNN delivers average speedups of 3.97×, 4.10×, 3.12×, and 8.80× compared to inference on four distinct edge CPUs. Furthermore, EdgeNN achieves significant time advantages compared to the direct execution of original programs. This improvement is attributed to better unified memory utilization (44.37%) and the innovative CPU-GPU hybrid execution approach (17.91%). Additionally, EdgeNN exhibits superior energy efficiency, providing 29.14× higher energy efficiency than edge CPUs and 5.70× higher energy efficiency than discrete GPUs. EdgeNN is now open source at https://github.com/ChenyangZhang-cs/EdgeNN.
Title: Breaking the Edge: Enabling Efficient Neural Network Inference on Integrated Edge Devices
Published in: IEEE Transactions on Cloud Computing, vol. 13, no. 2, pp. 694-710
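The intuition behind hybrid CPU-GPU execution is to give each processor a share of sub-tasks proportional to its measured throughput, so both finish at roughly the same time. The sketch below illustrates that idea with invented rates; it is not EdgeNN's actual scheduler, which also accounts for inference structure and memory placement.

```python
# Sketch of throughput-proportional partitioning: assign sub-tasks so
# the CPU and GPU finish at about the same time. Rates are illustrative
# measurements, not EdgeNN's profiling model.
def split_work(n_tasks, cpu_rate, gpu_rate):
    gpu_share = round(n_tasks * gpu_rate / (cpu_rate + gpu_rate))
    return n_tasks - gpu_share, gpu_share

def makespan(cpu_tasks, gpu_tasks, cpu_rate, gpu_rate):
    # Both processors run concurrently; the slower side dominates.
    return max(cpu_tasks / cpu_rate, gpu_tasks / gpu_rate)

cpu_n, gpu_n = split_work(100, cpu_rate=1.0, gpu_rate=3.0)
print(cpu_n, gpu_n)                      # 25 75
print(makespan(cpu_n, gpu_n, 1.0, 3.0))  # 25.0
print(makespan(0, 100, 1.0, 3.0))        # 33.3...: GPU-only is slower
```

Running everything on the faster processor alone is strictly worse than a balanced split whenever the slower processor has nonzero throughput, which is the core argument for hybrid execution.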
Pub Date: 2025-04-08 | DOI: 10.1109/TCC.2025.3558858
Junsong Chen;Shengke Zeng;Song Han;Jin Yin;Peng Chen
Cloud computing eliminates the limitations of local hardware architecture while also enabling rapid data sharing between healthcare institutions. Encrypting electronic medical records (EMRs) before uploading them to cloud servers is necessary for privacy. However, encryption brings challenges for computation. Public Key Encryption with Equality Test (PKEET) allows cloud servers to test the equality of underlying messages without decryption. Therefore, it can be used to classify encrypted EMRs corresponding to different medical symptoms. However, traditional PKEETs have limitations in testing the similarity between ciphertexts, so they cannot efficiently classify EMRs with similar medical symptoms. In this work, we propose a lightweight public key encryption with similarity test (PKEST) for classifying EMRs shared in medical consortia. Our scheme resists offline message recovery attacks, which may be launched by an insider manager, and does not require the traditional pairing computation. Our simulation experiments show that the similarity error between ciphertext and plaintext is tiny when the parameters are set properly. Compared to previous works, our scheme not only achieves classification of similar encrypted EMRs but is also more efficient than traditional PKEETs, since our construction no longer needs pairing computation.
Title: PKEST: Public-Key Encryption With Similarity Test for Medical Consortia Cloud Computing
Published in: IEEE Transactions on Cloud Computing, vol. 13, no. 2, pp. 680-693
Pub Date: 2025-04-01 | DOI: 10.1109/TCC.2025.3556925
Muyao Qiu;Jinguang Han;Feng Hao;Chao Sun;Ge Wu
Cloud computing is a distributed infrastructure that centralizes server resources on a platform in order to provide services over the internet. Traditional public-key encryption protects data confidentiality in cloud computing, while functional encryption provides a more fine-grained decryption method, which reveals only a function of the encrypted data. However, functional encryption in cloud computing faces the problem of key sharing. To trace malicious users who share keys with others, traceable FE-IP (TFE-IP) schemes were proposed, in which the key generation center (KGC) knows users’ identities and binds them to different secret keys. Nevertheless, existing schemes fail to protect the privacy of users’ identities. The fundamental challenge in constructing a privacy-preserving TFE-IP scheme is that the KGC needs to bind a key to a user's identity without knowing the identity. To balance privacy and accountability in cloud computing, we propose the concept of privacy-preserving traceable functional encryption for inner product (PPTFE-IP) and give a concrete construction that offers the following features: (1) to prevent key sharing, both a user's identity and a vector are bound together in the key; (2) the KGC and a user execute a two-party secure computation protocol to generate a key without the former learning anything about the latter's identity; (3) each user can verify the integrity and correctness of his/her key; (4) an authorized user can compute the inner product of the two vectors embedded in a ciphertext and in his/her key; (5) only the tracer can trace the identity embedded in a key. We formally reduce the security of the proposed PPTFE-IP to well-known complexity assumptions, and conduct an implementation to evaluate its efficiency. The novelty of our scheme is to protect the user's privacy while providing traceability when required.
Title: Privacy-Preserving and Traceable Functional Encryption for Inner Product in Cloud Computing
Published in: IEEE Transactions on Cloud Computing, vol. 13, no. 2, pp. 667-679
Pub Date: 2025-03-29 | DOI: 10.1109/TCC.2025.3574823
Tianyu Qi;Yufeng Zhan;Peng Li;Yuanqing Xia
Hierarchical federated learning (HFL) extends traditional federated learning by introducing a cloud-edge-device framework to enhance scalability. However, the challenge of determining when devices and edges should aggregate models remains unresolved, making the design of an effective synchronization scheme crucial. Additionally, heterogeneity in computing and communication capabilities, coupled with non-independent and identically distributed (non-IID) data, makes synchronization particularly complex. In this article, we propose Robin, a learning-based synchronization scheme for HFL systems. By collecting data such as model parameters, CPU usage, and communication time, we design a deep reinforcement learning-based approach to decide the frequencies of cloud aggregation and edge aggregation, respectively. The proposed scheme accounts for device heterogeneity, non-IID data, and device mobility to maximize model accuracy while minimizing energy overhead. We prove the convergence of Robin's synchronization scheme, build an HFL testbed, and conduct experiments with real data obtained from Raspberry Pi and Alibaba Cloud. Extensive experiments under various settings confirm the effectiveness of Robin, which improves model accuracy by 31.2% while reducing energy consumption by 36.4%.
Title: Robin: An Efficient Hierarchical Federated Learning Framework via a Learning-Based Synchronization Scheme
Published in: IEEE Transactions on Cloud Computing, vol. 13, no. 3, pp. 895-909
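The cloud-edge-device synchronization pattern can be sketched as nested averaging loops: devices train locally, edges average device models every k1 local steps, and the cloud averages edge models every k2 edge rounds. The frequencies k1 and k2 are precisely what Robin's reinforcement-learning agent chooses; they are fixed constants in this illustrative sketch, and `local_step` stands in for on-device training.

```python
# Sketch of hierarchical federated aggregation. Robin learns the
# frequencies k1 (device->edge) and k2 (edge->cloud); both are fixed
# constants here, and local_step is a stand-in for local training.
def average(models):
    # Coordinate-wise mean of a list of weight vectors.
    return [sum(ws) / len(ws) for ws in zip(*models)]

def hfl_round(device_models_per_edge, k1, k2, local_step):
    for _ in range(k2):                            # edge rounds per cloud sync
        edge_models = []
        for devices in device_models_per_edge:
            for _ in range(k1):                    # local steps per edge sync
                devices = [local_step(m) for m in devices]
            edge_models.append(average(devices))   # edge aggregation
        cloud = average(edge_models)               # cloud aggregation
    return cloud

# Two edges, two devices each, a one-weight "model" per device.
devices = [[[1.0], [3.0]], [[5.0], [7.0]]]
print(hfl_round(devices, k1=1, k2=1, local_step=lambda m: m))  # [4.0]
```

Larger k1 and k2 save communication and energy at the cost of staler aggregation, which is the accuracy-versus-energy trade-off the learned scheme navigates.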
Pub Date: 2025-03-28 | DOI: 10.1109/TCC.2025.3555519
Meng Sun;Junzuo Lai;Xiaohan Mo;Chi Wu;Peng Li;Cheng-Kang Chu;Robert H. Deng
In cloud computing, users must authenticate to access various resources. Attribute-based anonymous credentials (ABCs) provide a tool for privacy-preserving authentication, allowing users to anonymously prove possession of a set of attributes to cloud service providers. Most existing works on ABCs deal with credentials on attributes issued by a single authority (issuer). In reality, it is more practical for users to obtain credentials on attributes from multiple authorities. The few existing works on multi-authority ABCs do not support the delegation needed in real deployments. In this article, we present the first delegatable multi-authority attribute-based anonymous credential system, which simultaneously achieves revocation and traceability. We also give a security analysis of our construction. Finally, we implement our system; the experimental results show its efficiency.
Title: Delegatable Multi-Authority Attribute-Based Anonymous Credentials
Published in: IEEE Transactions on Cloud Computing, vol. 13, no. 2, pp. 655-666
Pub Date: 2025-03-27 | DOI: 10.1109/TCC.2025.3555231
Wenhao Zou;Zongshuai Zhang;Nina Wang;Yu Tian;Lin Tian
With the increasing number of devices, the demand for data computation is growing rapidly. In edge-cloud collaborative computing, tasks can be scheduled to servers as interdependent subtasks, enhancing performance through parallel computing. A task is executed in an executor, which must first initialize the runtime environment in a process called task startup. However, most existing research neglects the reuse of executors, leading to considerable delays during task startup. To address this issue, we model the edge-cloud collaborative task scheduling scenario considering executor reuse, task startup, and dependency relationships, and formulate the dependent task scheduling problem with task startup. To meet real-time demands in edge-cloud collaborative computing, we propose ReflexPilot, an online task scheduling architecture featuring executor management. Building on this architecture, we introduce OTSA-PPO, a task scheduling algorithm based on Proximal Policy Optimization (PPO), and EMA, an advanced executor allocation algorithm. Under constraints of computational and communication resources, ReflexPilot leverages OTSA-PPO for online scheduling of dependent tasks based on current states, while EMA pre-creates and reuses executors to reduce the average task completion time. Extensive simulations demonstrate that ReflexPilot significantly reduces the average task completion time by 31% to 71% compared with existing baselines.
Title: ReflexPilot: Startup-Aware Dependent Task Scheduling Based on Deep Reinforcement Learning for Edge-Cloud Collaborative Computing
Published in: IEEE Transactions on Cloud Computing, vol. 13, no. 2, pp. 641-654
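The benefit of executor reuse can be shown with a minimal warm-pool sketch: acquiring an executor for a runtime environment that already has an idle warm instance skips the startup cost entirely. The class, names, and cost model below are illustrative assumptions, not ReflexPilot's actual EMA algorithm.

```python
# Sketch of warm-executor reuse: only cold starts pay the startup cost.
# The pool structure and cost model are illustrative, not EMA itself.
class ExecutorPool:
    def __init__(self, startup_cost):
        self.warm = {}                 # env -> number of idle warm executors
        self.startup_cost = startup_cost
        self.total_startup = 0.0       # accumulated startup delay

    def acquire(self, env):
        if self.warm.get(env, 0) > 0:
            self.warm[env] -= 1        # reuse a warm executor: no delay
        else:
            self.total_startup += self.startup_cost  # cold start
        return env

    def release(self, env):
        # Keep the executor warm for later tasks with the same runtime.
        self.warm[env] = self.warm.get(env, 0) + 1

pool = ExecutorPool(startup_cost=0.5)
for _ in range(3):                     # three tasks, same environment
    pool.release(pool.acquire("python3.10"))
print(pool.total_startup)              # 0.5: only the first task pays
```

Without reuse, every task would pay the startup cost, which is exactly the overhead the abstract attributes to research that neglects executor reuse.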
Pub Date: 2025-03-24 | DOI: 10.1109/TCC.2025.3551838
Hui Lu;Xiaojiang Du;Dawei Hu;Shen Su;Zhihong Tian
The adoption of container-based cloud computing services has become prevalent, especially with the introduction of Kubernetes, which enables the automated deployment, scaling, and administration of containerized applications, boosting the popularity of containers. As a result, researchers have placed greater emphasis on container runtime security, notably investigating the efficacy of traditional techniques such as Capabilities, Seccomp, and Linux Security Modules in guaranteeing container security. However, due to the limitations imposed by the container environment, the results have been unsatisfactory. In addition, eBPF-based solutions cannot quickly load policies when faced with newer kernel vulnerabilities, which affects real-time operation. This paper investigates the limitations of existing container security mechanisms and examines their specific constraints in Kubernetes environments. It classifies container monitoring and mandatory access control into three distinct categories: system call access control, LSM hook access control, and kernel function access control. Accordingly, we propose a technique for regulating container access at multiple granularity levels. This technique is implemented using eBPF and is tightly integrated with Kubernetes to collect relevant meta-information. In addition, we implement a consolidated routing method and employ function tail call chaining to overcome eBPF's limitations in enforcing mandatory access control for containers. Finally, we conducted a series of experiments to verify the effectiveness of the system's security using CVE-2022-0492 and to benchmark the system with BPFGuard enabled. The results indicate that the average performance loss was merely 2.16%, demonstrating no adverse effects on container services. This suggests that greater security can be achieved at minimal cost.
Title: BPFGuard: Multi-Granularity Container Runtime Mandatory Access Control
Published in: IEEE Transactions on Cloud Computing, vol. 13, no. 2, pp. 629-640
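The consolidated-routing idea, a single dispatch point that sends each event to the policy map matching its enforcement granularity (system call, LSM hook, or kernel function), can be modeled in userspace as follows. Real enforcement runs as eBPF programs in the kernel; the container names, events, and rules below are invented examples, not BPFGuard's policy format.

```python
# Conceptual model of multi-granularity mandatory access control with
# consolidated routing: one dispatcher selects the policy map for the
# event's granularity. Enforcement really happens in eBPF; these
# container names, events, and rules are invented examples.
POLICIES = {
    "syscall": {("web", "ptrace"): "deny"},        # syscall-level rule
    "lsm":     {("web", "sb_mount"): "deny"},      # LSM-hook-level rule
    "kfunc":   {("web", "commit_creds"): "deny"},  # kernel-function rule
}

def route(container, granularity, event):
    # Default-allow: deny only on an explicit rule for this container.
    return POLICIES.get(granularity, {}).get((container, event), "allow")

print(route("web", "syscall", "ptrace"))  # deny
print(route("web", "syscall", "openat"))  # allow
print(route("db", "lsm", "sb_mount"))     # allow: no rule for "db"
```

Keeping the three granularities in separate maps behind one router mirrors how a single eBPF entry point can tail-call into per-granularity policy programs.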
Pub Date : 2025-03-23 DOI: 10.1109/TCC.2025.3573378
Qinghua Deng;Lanxiang Chen;Yizhao Zhu;Yi Mu
Conjunctive keyword queries on untrusted cloud servers represent one of the most common forms of search in encrypted environments. Extensive research has been devoted to developing efficient schemes that support multi-keyword queries. In particular, the Oblivious Cross-Tags (OXT) protocol has received significant attention and is widely regarded as a benchmark in this domain. However, existing schemes fail to simultaneously hide the Keyword-Pair Result Pattern (KPRP) and the conditional Intersection Pattern (IP), potentially leaking additional information to the server. In this work, we propose a novel searchable symmetric encryption (SSE) scheme, referred to as Result Hiding Search (RHS), which aims to minimize result pattern leakage and achieve query result hiding during the index retrieval phase by integrating Private Set Intersection (PSI) techniques. Our scheme enhances privacy by employing PSI for secure membership testing. To improve query efficiency, we shift the expensive computations to the offline phase and utilize efficient pseudorandom functions and hash functions during the online phase. Moreover, we propose a variant of RHS, called vRHS, designed to reduce client-side storage overhead. A simulation-based security proof demonstrates that our scheme is robust against non-adaptive adversaries. Comprehensive experimental evaluation further shows that our approach achieves better security and efficiency trade-offs compared to existing SSE schemes.
{"title":"Leakage Reduced Searchable Symmetric Encryption for Multi-Keyword Queries","authors":"Qinghua Deng;Lanxiang Chen;Yizhao Zhu;Yi Mu","doi":"10.1109/TCC.2025.3573378","DOIUrl":"https://doi.org/10.1109/TCC.2025.3573378","url":null,"abstract":"Conjunctive keyword queries on untrusted cloud servers represent one of the most common forms of search in encrypted environments. Extensive research has been devoted to developing efficient schemes that support multi-keyword queries. In particular, the Oblivious Cross-Tags (OXT) protocol has received significant attention and is widely regarded as a benchmark in this domain. However, existing schemes fail to simultaneously hide the Keyword-Pair Result Pattern (KPRP) and the conditional Intersection Pattern (IP), potentially leaking additional information to the server. In this work, we propose a novel searchable symmetric encryption (SSE) scheme, referred to as <italic>Result Hiding Search (RHS)</italic>, which aims to minimize result pattern leakage and achieve query result hiding during the index retrieval phase by integrating Private Set Intersection (PSI) techniques. Our scheme enhances privacy by employing PSI for secure membership testing. To improve query efficiency, we shift the expensive computations to the offline phase and utilize efficient pseudorandom functions and hash functions during the online phase. Moreover, we propose a variant of RHS, called vRHS, designed to reduce client-side storage overhead. A simulation-based security proof demonstrates that our scheme is robust against non-adaptive adversaries. 
Comprehensive experimental evaluation further shows that our approach achieves better security and efficiency trade-offs compared to existing SSE schemes.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 3","pages":"882-894"},"PeriodicalIF":5.0,"publicationDate":"2025-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
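The RHS record above leans on PSI for secure membership testing. As a rough intuition for what PSI provides, here is a toy Diffie-Hellman-style PSI sketch in Python: both parties blind hashed elements with secret exponents, and because exponentiation commutes, doubly blinded values match exactly on common elements. This is illustrative only; the paper's concrete protocol, its parameters, and its security model differ, and the group below is far too small for real use.

```python
import hashlib
import secrets

P = 2**61 - 1   # small Mersenne prime; real deployments use proper large groups
G = 3

def h2e(item):
    """Hash a string item to an exponent in the group."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % (P - 1)

def psi(client_set, server_set):
    a = secrets.randbelow(P - 2) + 1   # client's secret exponent
    b = secrets.randbelow(P - 2) + 1   # server's secret exponent
    # Client blinds its items as G^(H(x)*a) and sends them over.
    client_blind = {pow(G, h2e(x) * a % (P - 1), P): x for x in client_set}
    # Server raises the client's values to b -> G^(H(x)*a*b),
    # and blinds its own items as G^(H(y)*b).
    double_client = {pow(v, b, P): x for v, x in client_blind.items()}
    server_blind = [pow(G, h2e(y) * b % (P - 1), P) for y in server_set]
    # Client raises the server's values to a -> G^(H(y)*b*a) and intersects.
    double_server = {pow(v, a, P) for v in server_blind}
    return {x for v, x in double_client.items() if v in double_server}

# Common elements survive the double blinding; disjoint ones do not.
print(psi({"cloud", "privacy", "psi"}, {"privacy", "psi", "oxt"}))
```

In an SSE setting like RHS, this kind of membership test lets the client learn which candidate document identifiers match without revealing the non-matching candidates to the server, which is the intuition behind hiding KPRP/IP leakage.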
Pub Date : 2025-03-21 DOI: 10.1109/TCC.2025.3572308
Bita Fatemipour;Zhe Zhang;Marc St-Hilaire
With the increasing demands placed on geographically distributed Data Centers (DCs), recent studies have focused on optimizing performance from the perspective of both cloud providers and customers. These studies address a variety of goals, such as minimizing transmission time, reducing resource usage, and optimizing network costs. However, many existing models for workload transfers operate using a uniform time-slot approach, which limits their flexibility in handling variable data transfer requests with different deadline requirements. This lack of adaptability can negatively impact the quality of service for users. Additionally, these models often overlook the potential benefits of incorporating multiple data sources, which can lead to sub-optimal transmission times. To overcome these limitations, this paper introduces T-COMS, a Time-slot-aware, COst-effective, and Multi-Source-aware method for file transfers tailored specifically for geo-distributed DCs, leveraging a multi-source and dynamic time-slot strategy to accelerate transmission and enhance service quality. The proposed model identifies the optimal sources, paths, and time slot lengths required to efficiently transmit workloads to their destinations while minimizing costs. We first formulate a Mixed Integer Non-Linear Programming (MINLP) model and subsequently linearize it within our framework. Given the NP-hard nature of the model, its applicability in large-scale environments is limited. To address this issue, we develop an efficient heuristic algorithm that derives near-optimal solutions in polynomial time.
{"title":"T-COMS: A Time-Slot-Aware and Cost-Effective Data Transfer Method for Geo-Distributed Data Centers","authors":"Bita Fatemipour;Zhe Zhang;Marc St-Hilaire","doi":"10.1109/TCC.2025.3572308","DOIUrl":"https://doi.org/10.1109/TCC.2025.3572308","url":null,"abstract":"With the increasing demands placed on geographically distributed Data Centers (DCs), recent studies have focused on optimizing performance from the perspective of both cloud providers and customers. These studies address a variety of goals, such as minimizing transmission time, reducing resource usage, and optimizing network costs. However, many existing models for workload transfers operate using a uniform time-slot approach, which limits their flexibility in handling variable data transfer requests with different deadline requirements. This lack of adaptability can negatively impact the quality of service for users. Additionally, these models often overlook the potential benefits of incorporating multiple data sources, which can lead to sub-optimal transmission times. To overcome these limitations, this paper introduces T-COMS, a Time-slot-aware, COst-effective, and Multi-Source-aware method for file transfers tailored specifically for geo-distributed DCs, leveraging a multi-source and dynamic time-slot strategy to accelerate transmission and enhance service quality. The proposed model identifies the optimal sources, paths, and time slot lengths required to efficiently transmit workloads to their destinations while minimizing costs. We first formulate a Mixed Integer Non-Linear Programming (MINLP) model and subsequently linearize it within our framework. Given the NP-hard nature of the model, its applicability in large-scale environments is limited. To address this issue, we develop an efficient heuristic algorithm that derives near-optimal solutions in polynomial time. 
The simulation results demonstrate the effectiveness of the proposed T-COMS model and the heuristic algorithm in terms of the reduction in cost and transmission time for file transfers between geographically distributed DCs.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 3","pages":"867-881"},"PeriodicalIF":5.0,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
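The T-COMS record above rests on the observation that splitting a transfer across multiple source DCs can meet a deadline more cheaply than any single source. The sketch below illustrates only that multi-source idea in a heavily simplified form; it is not the paper's MINLP or heuristic. With per-source bandwidth caps and linear per-GB costs, the simplified problem reduces to a fractional-knapsack-style greedy: fill from the cheapest source first. All DC names, bandwidths, and prices are made up.

```python
def plan_transfer(size_gb, deadline_s, sources):
    """Split a file across sources to finish within the deadline at minimum cost.

    sources: list of (name, bandwidth_gb_per_s, cost_per_gb) tuples.
    Returns (plan, total_cost) where plan maps source name -> GB assigned.
    """
    plan, remaining = {}, size_gb
    for name, bw, cost in sorted(sources, key=lambda s: s[2]):  # cheapest first
        if remaining <= 0:
            break
        share = min(remaining, bw * deadline_s)  # most this source can send in time
        if share > 0:
            plan[name] = share
            remaining -= share
    if remaining > 1e-9:
        raise ValueError("deadline infeasible even using all sources")
    cost_per_gb = {name: cost for name, _, cost in sources}
    total_cost = sum(share * cost_per_gb[name] for name, share in plan.items())
    return plan, total_cost

# Hypothetical example: 100 GB due in 10 s from three candidate source DCs.
plan, cost = plan_transfer(
    size_gb=100, deadline_s=10,
    sources=[("dc-a", 4, 0.02), ("dc-b", 8, 0.05), ("dc-c", 10, 0.10)])
print(plan, cost)  # dc-a saturates at 40 GB, dc-b carries the remaining 60 GB
```

The actual T-COMS formulation additionally chooses paths and time-slot lengths, which is what makes it NP-hard rather than greedily solvable like this toy.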