
Latest Publications from IEEE Transactions on Cloud Computing

Secure and Efficient Cloud-Based Multi-Party Private Set Intersection With Union Protocol
IF 5.3, Zone 2 (Computer Science), Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS). Pub Date: 2025-03-06. DOI: 10.1109/TCC.2025.3548570
Qian Liu;Yu Zhan;Baocang Wang
Secure Multi-party Computation (MPC) is a highly active research field, with Private Set Intersection (PSI) being a classic subtopic within it. However, simple intersection computation is insufficient for many real-world scenarios, leading to the development of various PSI variant protocols. In this context, we propose a cloud-based multi-party private set intersection with union protocol, denoted as MPSI-U. This protocol securely computes the intersection of the designated party's set with the union of the sets of all other parties, which can be applied to scenarios such as contact tracing. MPSI-U leverages cloud servers to alleviate the computational burden placed on users, while guaranteeing privacy and security simultaneously for all involved parties with the threshold BGN cryptographic system. Furthermore, a comprehensive formal security analysis of the protocol was conducted under the semi-honest model to prove its resilience against potential security threats. Based on our performance analysis, MPSI-U exhibits favorable characteristics in terms of communication and computation overhead. This enhances the versatility of MPSI-U, rendering it a valuable solution that can be widely applied across various domains and scenarios.
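As a point of reference for the functionality MPSI-U realizes, the following minimal Python sketch computes the same set operation in plaintext: the designated party's set intersected with the union of all other parties' sets. It deliberately omits the threshold BGN encryption and the cloud-assisted evaluation that make up the actual protocol, and all identifiers are purely illustrative.

```python
# Plaintext sketch of the MPSI-U functionality only, NOT the secure protocol:
# the designated party obtains the intersection of its own set with the union
# of every other party's set. The paper computes this under the threshold BGN
# cryptosystem with cloud assistance; the identifiers below are made up.

def mpsi_u(designated: set, other_parties: list) -> set:
    """Return designated ∩ (union of all other parties' sets)."""
    union_of_others = set().union(*other_parties) if other_parties else set()
    return designated & union_of_others

# Toy contact-tracing flavored example.
authority = {"id3", "id7", "id9", "id42"}
users = [{"id1", "id7"}, {"id9", "id11"}, {"id2"}]
print(mpsi_u(authority, users))   # {'id7', 'id9'} (set order may vary)
```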
{"title":"Secure and Efficient Cloud-Based Multi-Party Private Set Intersection With Union Protocol","authors":"Qian Liu;Yu Zhan;Baocang Wang","doi":"10.1109/TCC.2025.3548570","DOIUrl":"https://doi.org/10.1109/TCC.2025.3548570","url":null,"abstract":"Secure Multi-party Computation (MPC) is a highly active research field, with Private Set Intersection (PSI) being a classic subtopic within it. However, simple intersection computation is insufficient for many real-world scenarios, leading to the development of various PSI variant protocols. In this context, we propose a cloud-based multi-party private set intersection with union protocol, denoted as MPSI-U. This protocol securely computes the intersection of the designated party's set with the union of the sets of all other parties, which can be applied to scenarios such as contact tracing. MPSI-U leverages cloud servers to alleviate the computational burden placed on users, while guaranteeing privacy and security simultaneously for all involved parties with the threshold BGN cryptographic system. Furthermore, a comprehensive formal security analysis of the protocol was conducted under the semi-honest model to prove its resilience against potential security threats. Based on our performance analysis, MPSI-U exhibits favorable characteristics in terms of communication and computation overhead. This enhances the versatility of MPSI-U, rendering it a valuable solution that can be widely applied across various domains and scenarios.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 2","pages":"578-589"},"PeriodicalIF":5.3,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144229456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deadline-Aware Online Job Scheduling for Distributed Training in Heterogeneous Clusters
IF 5.3, Zone 2 (Computer Science), Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS). Pub Date: 2025-03-06. DOI: 10.1109/TCC.2025.3548604
Yuchen Zhang;Long Luo;Gang Sun;Hongfang Yu;Bo Li
The explosive growth in training data and model sizes has spurred the adoption of distributed deep learning (DL) in heterogeneous computing clusters. Efficiently scheduling distributed training jobs in such heterogeneous environments while ensuring they meet user-specified deadlines remains a critical challenge. While most existing works focus on reducing job completion time in homogeneous clusters, they pay little attention to meeting job deadlines in heterogeneous clusters. To address this issue, we propose Dancer (Deadline-Aware dyNamiC GPU allocation approach for Efficient Resource utilization), a novel framework that dynamically adjusts not only the number but also the type of GPUs assigned to each job throughout its training lifecycle. Dancer aims to maximize the number of jobs meeting their deadlines in heterogeneous GPU clusters. It decouples job placement from resource allocation and formulates the scheduling optimization problem for maximizing the number of deadline-meeting jobs as an Integer Linear Programming (ILP) problem. To solve this ILP problem in real-time, we propose an online algorithm with a competitive ratio guarantee, leveraging primal-dual and dynamic programming techniques. Extensive trace-driven simulations based on real-world DL workloads demonstrate that Dancer significantly outperforms state-of-the-art approaches, improving the deadline satisfaction ratio up to 58.9%–74.2%.
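To make the problem setting concrete, here is a toy deadline-aware allocator in Python. It is not Dancer's ILP formulation or primal-dual online algorithm; it is a simple earliest-deadline-first heuristic over a hypothetical pool of GPU types, with made-up job sizes, deadlines, and per-type speeds.

```python
# Toy deadline-aware scheduler for heterogeneous GPUs. This is NOT Dancer's
# ILP / primal-dual algorithm; it only illustrates the setting: jobs with
# deadlines and per-GPU-type speedups compete for a limited pool of GPU types.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    work: float          # abstract work units
    deadline: float      # seconds from now
    speed: dict          # GPU type -> work units per second per GPU

def schedule(jobs, pool):
    """Greedily give each job (earliest deadline first) the fastest GPU type
    still available, and report whether its deadline can be met."""
    plan = {}
    for job in sorted(jobs, key=lambda j: j.deadline):
        best = None
        for gpu, free in pool.items():
            if free > 0 and (best is None or job.speed[gpu] > job.speed[best]):
                best = gpu
        if best is None:
            plan[job.name] = ("unscheduled", False)
            continue
        pool[best] -= 1
        finish = job.work / job.speed[best]
        plan[job.name] = (best, finish <= job.deadline)
    return plan

jobs = [
    Job("resnet", work=600, deadline=100, speed={"V100": 4.0, "A100": 9.0}),
    Job("bert",   work=900, deadline=150, speed={"V100": 5.0, "A100": 10.0}),
]
print(schedule(jobs, {"V100": 2, "A100": 1}))
```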
{"title":"Deadline-Aware Online Job Scheduling for Distributed Training in Heterogeneous Clusters","authors":"Yuchen Zhang;Long Luo;Gang Sun;Hongfang Yu;Bo Li","doi":"10.1109/TCC.2025.3548604","DOIUrl":"https://doi.org/10.1109/TCC.2025.3548604","url":null,"abstract":"The explosive growth in training data and model sizes has spurred the adoption of distributed deep learning (DL) in heterogeneous computing clusters. Efficiently scheduling distributed training jobs in such heterogeneous environments while ensuring they meet user-specified deadlines remains a critical challenge. While most existing works focus on reducing job completion time in homogeneous clusters, they pay little attention to meeting job deadlines in heterogeneous clusters. To address this issue, we propose <sc>Dancer</small> (Deadline-Aware dyNamiC GPU allocation approach for Efficient Resource utilization), a novel framework that dynamically adjusts not only the number but the type of GPUs assigned to each job throughout its training lifecycle. <sc>Dancer</small> aims to maximize the number of jobs meeting their deadlines in heterogeneous GPU clusters. It decouples job placement from resource allocation and formulates the scheduling optimization problem for maximizing the number of deadline-meeting jobs as an Integer Linear Programming (ILP) problem. To solve this ILP problem in real-time, we propose an online algorithm with a competitive ratio guarantee, leveraging primal-dual and dynamic programming techniques. Extensive trace-driven simulations based on real-world DL workloads demonstrate that <sc>Dancer</small> significantly outperforms state-of-the-art approaches, improving the deadline satisfactory ratio up to 58.9%–74.2%.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 2","pages":"590-604"},"PeriodicalIF":5.3,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144232135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Communication Intensive Task Offloading With IDMZ for Secure Industrial Edge Computing
IF 5.3, Zone 2 (Computer Science), Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS). Pub Date: 2025-03-05. DOI: 10.1109/TCC.2025.3548043
Yuanjun Laili;Jiabei Gong;Yusheng Kong;Fei Wang;Lei Ren;Lin Zhang
The Industrial Internet of Things provides an opportunity for flexible and collaborative manufacturing, but introduces more risk and more communication overhead from the Internet to the industrial field. To avoid attacks from unreliable service providers and requesters, the Industrial Demilitarized Zone (IDMZ) is introduced in conjunction with firewalls to provide new communication modes between edge servers and industrial devices. As the number of tasks being offloaded to the edge side increases, optimal task offloading to balance the risk and the communication overhead with limited demilitarized buffer size becomes a challenge. Therefore, this paper establishes a mathematical model for secure task offloading in the Industrial Internet-of-Things considering dense communication with different communication modes. Then, a Parallel Gbest-centric differential evolution (P-G-DE) algorithm is designed to solve this task offloading problem with a heuristic-embedded initialization strategy, a modified Gbest-centric differential evolutionary operator, and a circular-rotated parallelization scheme. The experimental results verify that the proposed method is capable of providing a high-quality solution with a lower risk and a shorter execution time in seconds, compared to six state-of-the-art evolutionary algorithms.
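For readers unfamiliar with the evolutionary machinery mentioned above, the sketch below shows a minimal gbest-centric ("current-to-best") differential evolution loop in Python. It omits P-G-DE's heuristic-embedded initialization and circular-rotated parallelization, and the sphere objective is only a stand-in for the paper's offloading cost model.

```python
# Minimal gbest-centric differential evolution on a toy continuous objective.
# This illustrates the general DE idea only; the objective, bounds, and
# hyper-parameters are invented and do not reproduce P-G-DE.
import random

def evolve(obj, dim=5, pop_size=20, gens=200, F=0.5, CR=0.9, lo=-5.0, hi=5.0):
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [obj(x) for x in pop]
    for _ in range(gens):
        best = pop[min(range(pop_size), key=lambda i: fit[i])]
        for i in range(pop_size):
            a, b = random.sample([j for j in range(pop_size) if j != i], 2)
            jr = random.randrange(dim)               # force one mutated dimension
            trial = []
            for d in range(dim):
                # current-to-best mutation followed by binomial crossover
                v = pop[i][d] + F * (best[d] - pop[i][d]) + F * (pop[a][d] - pop[b][d])
                trial.append(min(hi, max(lo, v)) if d == jr or random.random() < CR else pop[i][d])
            f = obj(trial)
            if f < fit[i]:                           # greedy selection
                pop[i], fit[i] = trial, f
    return min(fit)

print(evolve(lambda x: sum(v * v for v in x)))       # approaches 0 for the sphere function
```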
{"title":"Communication Intensive Task Offloading With IDMZ for Secure Industrial Edge Computing","authors":"Yuanjun Laili;Jiabei Gong;Yusheng Kong;Fei Wang;Lei Ren;Lin Zhang","doi":"10.1109/TCC.2025.3548043","DOIUrl":"https://doi.org/10.1109/TCC.2025.3548043","url":null,"abstract":"The Industrial Internet of Things provides an opportunity for flexible and collaborative manufacturing, but introduces more risk and more communication overhead from the Internet to the industrial field. To avoid attacks from unreliable service providers and requesters, Industrial Demilitarized Zone (IDMZ) is introduced in conjunction with firewalls to provide new communication modes between edge servers and industrial devices. As the number of tasks being offloaded to the edge side increases, optimal task offloading to balance the risk and the communication overhead with limited demilitarized buffer size becomes a challenge. Therefore, this paper establishes a mathematical model for secure task offloading in the Industrial Internet-of-Things considering dense communication with different communication modes. Then, a Parallel Gbest-centric differential evolution (P-G-DE) is designed to solve this task offloading problem with a heuristic-embedded initialization strategy, a modified Gbest-centric differential evolutionary operator and a circular-rotated parallelization scheme. The experimental results verify that the proposed method is capable of providing a high-quality solution with a lower risk and a shorter execution time in seconds, compared to six state-of-the-art evolutionary algorithms.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 2","pages":"560-577"},"PeriodicalIF":5.3,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144232042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PPSKSQ: Towards Efficient and Privacy-Preserving Spatial Keyword Similarity Query in Cloud
IF 5.3, Zone 2 (Computer Science), Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS). Pub Date: 2025-03-03. DOI: 10.1109/TCC.2025.3547563
Changrui Wang;Lei Wu;Lijuan Xu;Haojie Yuan;Hao Wang;Wenying Zhang;Weizhi Meng
The growth of cloud computing has led to the widespread use of location-based services, such as spatial keyword queries, which return spatial data points within a given range that have the highest similarity in keyword sets to the user’s. As the volume of spatial data increases, providers commonly outsource data to powerful cloud servers. Because cloud servers are untrustworthy, privacy-preserving keyword query schemes have been proposed. However, existing schemes consider only location queries or exact keyword matching. To address these issues, we propose the Privacy-Preserving Spatial Keyword Similarity Query Scheme (PPSKSQ), designed to search for spatial data points with the highest similarity while protecting the privacy of outsourced data, query requests, and results. First, we design two sub-protocols based on improved symmetric homomorphic encryption (iSHE): iSHE-SC for secure size comparison and iSHE-SIP for secure inner product computation. Then, we encode range information and integrate it with a quadtree to construct a novel index structure. Additionally, we use the Jaccard coefficient to measure similarity in conjunction with the iSHE-SC protocol, transforming similarity comparison into a matrix trace operation. Finally, rigorous security analysis and extensive simulation experiments confirm the flexibility, efficiency, and scalability of our scheme.
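The plaintext query semantics are easy to state in code. The sketch below filters points by a rectangular range and ranks the survivors by Jaccard similarity of their keyword sets to the query; PPSKSQ performs this over encrypted data with iSHE-SC/iSHE-SIP and a quadtree index, none of which appears here, and the sample data are invented.

```python
# Plaintext sketch of the query semantics behind PPSKSQ: among points inside
# the query range, return those whose keyword set is most Jaccard-similar to
# the query keywords. No encryption or quadtree index is modeled here.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def keyword_similarity_query(points, rect, query_keywords, top_k=1):
    (x1, y1), (x2, y2) = rect
    in_range = [p for p in points
                if x1 <= p["x"] <= x2 and y1 <= p["y"] <= y2]
    return sorted(in_range,
                  key=lambda p: jaccard(p["kw"], query_keywords),
                  reverse=True)[:top_k]

points = [
    {"x": 1.0, "y": 2.0, "kw": {"cafe", "wifi"}},
    {"x": 3.0, "y": 1.5, "kw": {"cafe", "parking", "wifi"}},
    {"x": 9.0, "y": 9.0, "kw": {"cafe", "wifi"}},          # outside the range
]
print(keyword_similarity_query(points, ((0, 0), (5, 5)), {"cafe", "wifi"}))
```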
{"title":"PPSKSQ: Towards Efficient and Privacy-Preserving Spatial Keyword Similarity Query in Cloud","authors":"Changrui Wang;Lei Wu;Lijuan Xu;Haojie Yuan;Hao Wang;Wenying Zhang;Weizhi Meng","doi":"10.1109/TCC.2025.3547563","DOIUrl":"https://doi.org/10.1109/TCC.2025.3547563","url":null,"abstract":"The growth of cloud computing has led to the widespread use of location-based services, such as spatial keyword queries, which return spatial data points within a given range that have the highest similarity in keyword sets to the user’s. As the volume of spatial data increases, providers commonly outsource data to powerful cloud servers. Because cloud servers are untrustworthy, privacy-preserving keyword query schemes have been proposed. However, existing schemes consider only location queries or exact keyword matching. To address these issues, we propose the Privacy-Preserving Spatial Keyword Similarity Query Scheme (PPSKSQ), designed to search for spatial data points with the highest similarity while protecting the privacy of outsourced data, query requests, and results. First, we design two sub-protocols based on improved symmetric homomorphic encryption (iSHE): iSHE-SC for secure size comparison and iSHE-SIP for secure inner product computation. Then, we encode range information and integrate it with a quadtree to construct a novel index structure. Additionally, we use the Jaccard to measure similarity in conjunction with the iSHE-SC protocol, transforming similarity comparison into a matrix trace operation. Finally, rigorous security analysis and extensive simulation experiments confirm the flexibility, efficiency, and scalability of our scheme.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 2","pages":"544-559"},"PeriodicalIF":5.3,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144230589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
RLDR: Reinforcement Learning-Based Fast Data Recovery in Cloud-of-Clouds Storage Systems
IF 5.3, Zone 2 (Computer Science), Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS). Pub Date: 2025-02-26. DOI: 10.1109/TCC.2025.3546528
Jiajie Shen;Bochun Wu;Maoyi Wang;Sai Zou;Laizhong Cui;Wei Ni
Cloud-of-clouds storage systems are widely used in online applications, where user data are encrypted, encoded, and stored in multiple clouds. When some cloud nodes fail, the storage systems can reconstruct the lost data and store it in the substitute nodes. It is a challenge to reduce the latency of data recovery to ensure data reliability. In this paper, we adopt a Reinforcement Learning-based Data Recovery (RLDR) approach to reduce the regeneration time. By employing the Monte-Carlo method, our approach can construct the tree-topology-based regeneration process, a.k.a. regeneration tree, to effectively reduce the regeneration time. Through rigorous analysis, we apply the information flow graph to optimize the inter-cloud traffic for a given regeneration tree. To verify the merit of RLDR, we conduct extensive experiments on real-world traces. Experiments demonstrate that RLDR can significantly accelerate the regeneration process. Specifically, RLDR can reduce the regeneration time by up to 92% and increase the throughput by up to twelve-fold, compared to the prior art.
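To give a feel for what "searching over regeneration trees" means, the following Python sketch randomly samples tree topologies that connect surviving nodes to a replacement node and keeps the one whose bottleneck link gives the shortest transfer time. This is only a sampling illustration under a simplified bottleneck-link model; RLDR itself uses reinforcement learning and an information-flow-graph analysis, and the bandwidth matrix below is made up.

```python
# Monte-Carlo-style search over regeneration trees under a simplified model:
# the slowest link on the chosen tree bounds the whole transfer. Bandwidths,
# node names, and block size are illustrative assumptions.
import random

def sample_tree(nodes, root):
    """Attach each node to a random already-connected node (a random tree)."""
    connected, edges = [root], []
    for n in random.sample(nodes, len(nodes)):
        parent = random.choice(connected)
        edges.append((parent, n))
        connected.append(n)
    return edges

def regeneration_time(edges, bandwidth, data_size):
    return data_size / min(bandwidth[e] for e in edges)   # bottleneck link

def best_tree(nodes, root, bandwidth, data_size, samples=200):
    trees = (sample_tree(nodes, root) for _ in range(samples))
    return min(trees, key=lambda t: regeneration_time(t, bandwidth, data_size))

helpers, newcomer = ["A", "B", "C"], "R"
bw = {(u, v): random.uniform(50, 500)                     # MB/s, illustrative
      for u in helpers + [newcomer] for v in helpers + [newcomer] if u != v}
tree = best_tree(helpers, newcomer, bw, data_size=4096)   # 4 GB block
print(tree, regeneration_time(tree, bw, 4096))
```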
{"title":"RLDR: Reinforcement Learning-Based Fast Data Recovery in Cloud-of-Clouds Storage Systems","authors":"Jiajie Shen;Bochun Wu;Maoyi Wang;Sai Zou;Laizhong Cui;Wei Ni","doi":"10.1109/TCC.2025.3546528","DOIUrl":"https://doi.org/10.1109/TCC.2025.3546528","url":null,"abstract":"Cloud-of-clouds storage systems are widely used in online applications, where user data are encrypted, encoded, and stored in multiple clouds. When some cloud nodes fail, the storage systems can reconstruct the lost data and store it in the substitute nodes. It is a challenge to reduce the latency of data recovery to ensure data reliability. In this paper, we adopt a Reinforcement Learning-based Data Recovery (RLDR) approach to reduce the regeneration time. By employing the Monte-Carlo method, our approach can construct the tree-topology-based regeneration process, a.k.a. regeneration tree, to effectively reduce the regeneration time. Through rigorous analysis, we apply the information flow graph to optimize the inter-cloud traffic for a given regeneration tree. To verify the merit of RLDR, We conduct extensive experiments on real-world traces. Experiments demonstrate that RLDR can significantly accelerate the regeneration process. Specifically, RLDR can reduce the regeneration time by up to 92% and increase the throughput by up to twelve-fold, compared to the prior art.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 2","pages":"526-543"},"PeriodicalIF":5.3,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144232038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Dynamic and Secure Join Query Protocol for Multi-User Environment in Cloud Computing
IF 5.3, Zone 2 (Computer Science), Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS). Pub Date: 2025-02-21. DOI: 10.1109/TCC.2025.3544628
Hongjun Li;Debiao He;Qi Feng;Xiaolin Yang;Qingcai Luo
The development of cloud computing calls for continuously improving and refining the privacy-preserving techniques applied to users’ confidential data. Multi-user join query, as an important method of data sharing, allows multiple legitimate data users to perform join query over the data owner’s encrypted database. However, existing join query protocols may face challenges in practical applications regarding practicality, security, and efficiency. In this article, we put forward a dynamic and secure join query protocol for the multi-user environment. Compared with some existing protocols, the proposed protocol has the following advantages. On the one hand, we utilize the dynamic oblivious cross tags structure to realize an efficient join query with forward and backward security. On the other hand, we combine the randomizable distributed key-homomorphic pseudo-random functions with join query to support multiple data users, which can provide resilience against the single user’s key leakage and resist collusion attacks between the cloud server and a subset of data users. We formally define and prove the security of the proposed protocol. In addition, we give a detailed analysis of computation and communication overheads to demonstrate the efficiency of the proposed protocol. Finally, we carry out some experimental evaluations to further demonstrate the superiority of functionality and efficiency.
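For intuition about the access pattern that cross-tag style constructions hide, the sketch below runs the analogous query in plaintext: fetch the posting list of the rarer term, then filter it by membership tests against the other term. The real protocol performs those tests obliviously over encrypted tags with distributed key-homomorphic PRFs; the records and keywords here are an unencrypted stand-in.

```python
# Plaintext analogue of a cross-tag style join/conjunctive query over an
# inverted index. No encryption, PRFs, or forward/backward privacy appear here.

def build_index(records):
    index = {}
    for rid, keywords in records.items():
        for kw in keywords:
            index.setdefault(kw, set()).add(rid)
    return index

def join_query(index, kw1, kw2):
    # Iterate over the smaller posting list, test membership against the other.
    s, x = sorted((kw1, kw2), key=lambda k: len(index.get(k, ())))
    return {rid for rid in index.get(s, set()) if rid in index.get(x, set())}

records = {
    "r1": {"alice", "finance"},
    "r2": {"alice", "hr"},
    "r3": {"bob", "finance"},
}
index = build_index(records)
print(join_query(index, "alice", "finance"))   # {'r1'}
```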
{"title":"A Dynamic and Secure Join Query Protocol for Multi-User Environment in Cloud Computing","authors":"Hongjun Li;Debiao He;Qi Feng;Xiaolin Yang;Qingcai Luo","doi":"10.1109/TCC.2025.3544628","DOIUrl":"https://doi.org/10.1109/TCC.2025.3544628","url":null,"abstract":"The development of cloud computing needs to continuously improve and perfect the privacy-preserving techniques for the user’s confidential data. Multi-user join query, as an important method of data sharing, allows multiple legitimate data users to perform join query over the data owner’s encrypted database. However, some existing join query protocols may face some challenges in the practical application, such as practicality, security, and efficiency. In this article, we put forward a dynamic and secure join query protocol in the multi-user environment. Compared with some existing protocols, the proposed protocol has the following advantages. On the one hand, we utilize the dynamic oblivious cross tags structure to realize an efficient join query with forward and backward security. On the other hand, we combine the randomizable distributed key-homomorphic pseudo-random functions with join query to support multiple data users, which can provide resilience against the single user’s key leakage and resist collusion attacks between the cloud server and a subset of data users. We formally define and prove the security of proposed protocol. In addition, we give a detailed analysis of computation and communication overheads to demonstrate the efficiency of proposed protocol. Finally, we carry out some experimental evaluations to further demonstrate the superiority of functionality and efficiency.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 2","pages":"512-525"},"PeriodicalIF":5.3,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144229477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
HyperDrive: Direct Network Telemetry Storage via Programmable Switches
IF 5.3, Zone 2 (Computer Science), Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS). Pub Date: 2025-02-18. DOI: 10.1109/TCC.2025.3543477
Ziyuan Liu;Zhixiong Niu;Ran Shu;Wenxue Cheng;Lihua Yuan;Jacob Nelson;Dan R. K. Ports;Peng Cheng;Yongqiang Xiong
In cloud datacenter operations, telemetry and logs are indispensable, enabling essential services such as network diagnostics, auditing, and knowledge discovery. The escalating scale of data centers, coupled with increased bandwidth and finer-grained telemetry, results in an overwhelming volume of data. This proliferation poses significant storage challenges for telemetry systems. In this article, we introduce HyperDrive, an innovative system designed to efficiently store large volumes of telemetry and logs in data centers using programmable switches. This in-network approach effectively mitigates bandwidth bottlenecks commonly associated with traditional endpoint-based methods. To our knowledge, we are the first to use a programmable switch to directly control storage, bypassing the CPU to achieve the best performance. With merely 21% of a switch’s resources, our HyperDrive implementation showcases remarkable scalability and efficiency. Through rigorous evaluation, it has demonstrated linear scaling capabilities, efficiently managing 12 SSDs on a single server with minimal host overhead. In an eight-server testbed, HyperDrive achieved an impressive throughput of approximately 730 Gbps, underscoring its potential to transform data center telemetry and logging practices.
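Purely as an illustration of turning telemetry records into block-aligned units that a storage backend can write directly, the sketch below packs fixed-size records into 4 KiB buffers in Python. HyperDrive does this from a programmable switch in the data plane rather than on a host, and the abstract does not specify its record format, so the field names and sizes here are assumptions.

```python
# Illustrative only: pack fixed-size telemetry records into 4 KiB block-aligned
# buffers, the kind of unit a block device accepts for direct writes. This does
# NOT reflect HyperDrive's on-switch format; fields and sizes are invented.
import struct
import time

RECORD = struct.Struct("!IIHHIQ")   # src_ip, dst_ip, src_port, dst_port, byte_count, timestamp_ns
BLOCK = 4096
PER_BLOCK = BLOCK // RECORD.size

def pack_block(records):
    """Serialize up to PER_BLOCK records and zero-pad to a full 4 KiB block."""
    buf = bytearray(BLOCK)
    for i, r in enumerate(records[:PER_BLOCK]):
        RECORD.pack_into(buf, i * RECORD.size, *r)
    return bytes(buf)

sample = [(0x0A000001, 0x0A000002, 443, 51000, 1500, time.time_ns())] * 10
block = pack_block(sample)
print(len(block), PER_BLOCK)        # 4096 bytes per block, records per block
```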
{"title":"HyperDrive: Direct Network Telemetry Storage via Programmable Switches","authors":"Ziyuan Liu;Zhixiong Niu;Ran Shu;Wenxue Cheng;Lihua Yuan;Jacob Nelson;Dan R. K. Ports;Peng Cheng;Yongqiang Xiong","doi":"10.1109/TCC.2025.3543477","DOIUrl":"https://doi.org/10.1109/TCC.2025.3543477","url":null,"abstract":"In cloud datacenter operations, telemetry and logs are indispensable, enabling essential services such as network diagnostics, auditing, and knowledge discovery. The escalating scale of data centers, coupled with increased bandwidth and finer-grained telemetry, results in an overwhelming volume of data. This proliferation poses significant storage challenges for telemetry systems. In this article, we introduce HyperDrive, an innovative system designed to efficiently store large volumes of telemetry and logs in data centers using programmable switches. This in-network approach effectively mitigates bandwidth bottlenecks commonly associated with traditional endpoint-based methods. To our knowledge, we are the first to use a programmable switch to directly control storage, bypassing the CPU to achieve the best performance. With merely 21% of a switch’s resources, our HyperDrive implementation showcases remarkable scalability and efficiency. Through rigorous evaluation, it has demonstrated linear scaling capabilities, efficiently managing 12 SSDs on a single server with minimal host overhead. In an eight-server testbed, HyperDrive achieved an impressive throughput of approximately 730 Gbps, underscoring its potential to transform data center telemetry and logging practices.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 2","pages":"498-511"},"PeriodicalIF":5.3,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144232040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PPEC: A Privacy-Preserving, Cost-Effective Incremental Density Peak Clustering Analysis on Encrypted Outsourced Data
IF 5.3, Zone 2 (Computer Science), Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS). Pub Date: 2025-02-13. DOI: 10.1109/TCC.2025.3541749
Haomiao Yang;ZiKang Ding;Ruiheng Lu;Kunlan Xiang;Hongwei Li;Dakui Wu
Call detail records (CDRs) provide valuable insights into user behavior, which are instrumental for telecom companies in optimizing network coverage and service quality. However, while cloud computing facilitates clustering analysis on a vast scale of CDR data, it introduces privacy risks. The challenge lies in striking a balance between efficiency, security, and cost-effectiveness in privacy-preserving algorithms. To tackle this issue, we propose a privacy-preserving and cost-effective incremental density peak clustering scheme. Our approach leverages homomorphic encryption and order-preserving encryption to enable direct computations and clustering on encrypted data. Moreover, it employs reaching definition analysis to optimize the execution flow of static tasks, pinpointing the optimal junctures for transitioning between the two types of encryption to reduce communication overhead. Furthermore, our scheme utilizes a game theory-based verification strategy to ascertain the accuracy of the results. This methodology can be effectively deployed on the Ethereum blockchain via smart contracts. A comprehensive security analysis confirms that our scheme upholds both privacy and data integrity. Experimental evaluations substantiate the clustering accuracy, communication load, and computational efficiency of our scheme, thereby validating its viability in real-world applications.
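To show what the underlying clustering computes, here is a minimal plaintext density peak clustering sketch (local density plus distance to the nearest denser point). The paper's contribution is performing this incrementally over encrypted outsourced data with homomorphic and order-preserving encryption plus result verification, none of which appears here; the points and cutoff are illustrative.

```python
# Minimal plaintext density peak clustering: Gaussian-kernel local density,
# distance to the nearest denser point, centers by rho*delta, then assignment
# along denser neighbors. No encryption or incrementality is modeled.
import math

def density_peak_cluster(points, d_cut, n_clusters):
    n = len(points)
    dist = [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]
    rho = [sum(math.exp(-(dist[i][j] / d_cut) ** 2) for j in range(n) if j != i)
           for i in range(n)]
    delta, parent = [0.0] * n, [-1] * n
    for i in range(n):
        denser = [j for j in range(n) if rho[j] > rho[i]]
        if denser:
            parent[i] = min(denser, key=lambda j: dist[i][j])
            delta[i] = dist[i][parent[i]]
        else:
            delta[i] = max(dist[i])                       # global density peak
    centers = sorted(range(n), key=lambda i: rho[i] * delta[i], reverse=True)[:n_clusters]
    labels = [-1] * n
    for c, idx in enumerate(centers):
        labels[idx] = c
    for i in sorted(range(n), key=lambda i: rho[i], reverse=True):
        if labels[i] == -1:
            labels[i] = labels[parent[i]]                 # inherit from the denser neighbor
    return labels

pts = [(0, 0), (0.1, 0.1), (0.2, 0.0), (0.1, 0.2), (5, 5), (5.2, 5.1), (5.1, 4.8)]
print(density_peak_cluster(pts, d_cut=1.0, n_clusters=2))   # e.g. [0, 0, 0, 0, 1, 1, 1]
```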
{"title":"PPEC: A Privacy-Preserving, Cost-Effective Incremental Density Peak Clustering Analysis on Encrypted Outsourced Data","authors":"Haomiao Yang;ZiKang Ding;Ruiheng Lu;Kunlan Xiang;Hongwei Li;Dakui Wu","doi":"10.1109/TCC.2025.3541749","DOIUrl":"https://doi.org/10.1109/TCC.2025.3541749","url":null,"abstract":"Call detail records (CDRs) provide valuable insights into user behavior, which are instrumental for telecom companies in optimizing network coverage and service quality. However, while cloud computing facilitates clustering analysis on a vast scale of CDR data, it introduces privacy risks. The challenge lies in striking a balance between efficiency, security, and cost-effectiveness in privacy-preserving algorithms. To tackle this issue, we propose a privacy-preserving and cost-effective incremental density peak clustering scheme. Our approach leverages homomorphic encryption and order-preserving encryption to enable direct computations and clustering on encrypted data. Moreover, it employs reaching definition analysis to optimize the execution flow of static tasks, pinpointing the optimal junctures for transitioning between the two types of encryption to reduce communication overhead. Furthermore, our scheme utilizes a game theory-based verification strategy to ascertain the accuracy of the results. This methodology can be effectively deployed on the Ethereum blockchain via smart contracts. A comprehensive security analysis confirms that our scheme upholds both privacy and data integrity. Experimental evaluations substantiate the clustering accuracy, communication load, and computational efficiency of our scheme, thereby validating its viability in real-world applications.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 2","pages":"485-497"},"PeriodicalIF":5.3,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144232044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
GHPFL: Advancing Personalized Edge-Based Learning Through Optimized Bandwidth Utilization
IF 5.3, Zone 2 (Computer Science), Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS). Pub Date: 2025-02-11. DOI: 10.1109/TCC.2025.3540023
Kaiwei Mo;Wei Lin;Jiaxun Lu;Chun Jason Xue;Yunfeng Shao;Hong Xu
Federated learning (FL) is increasingly adopted to combine knowledge from clients in training without revealing their private data. In order to improve the performance of different participants, personalized FL has recently been proposed. However, considering the non-independent and identically distributed (non-IID) data and limited bandwidth at clients, the model performance could be compromised. In reality, clients near each other often tend to have similar data distributions. In this work, we train the personalized edge-based model in the client-edge-server FL. While considering the differences in data distribution, we fully utilize the limited bandwidth resources. To make training efficient and accurate at the same time, an intuitive idea is to learn as much useful knowledge as possible from other edges and reduce the accuracy loss incurred by non-IID data. Therefore, we devise Grouping Hierarchical Personalized Federated Learning (GHPFL). In this framework, each edge establishes physical connections with multiple clients, while the server physically connects with edges. It clusters edges into groups and establishes client-edge logical connections for synchronization. This is based on data similarities that the nodes actively identify, as well as the underlying physical topology. We perform a large-scale evaluation to demonstrate GHPFL’s benefits over other schemes.
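As a toy illustration of the grouping idea, the sketch below summarizes each edge's local data as a label histogram and greedily merges edges whose histograms are cosine-similar above a threshold. GHPFL's actual grouping also accounts for bandwidth and the physical topology and drives client-edge logical connections; the histograms and threshold here are invented.

```python
# Toy grouping of edges by similarity of their (plaintext) label histograms.
# Thresholds, histograms, and edge names are illustrative assumptions.
import math

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    return num / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def group_edges(histograms, threshold=0.9):
    groups = []                      # each group: list of edge ids
    centroids = []                   # running mean histogram per group
    for edge, hist in histograms.items():
        best, best_sim = None, threshold
        for g, c in enumerate(centroids):
            sim = cosine(hist, c)
            if sim >= best_sim:
                best, best_sim = g, sim
        if best is None:
            groups.append([edge])
            centroids.append(list(hist))
        else:
            groups[best].append(edge)
            k = len(groups[best])
            centroids[best] = [(c * (k - 1) + h) / k for c, h in zip(centroids[best], hist)]
    return groups

edges = {                            # label counts over 4 classes, illustrative
    "edge1": [90, 5, 3, 2],
    "edge2": [85, 8, 4, 3],
    "edge3": [2, 3, 90, 80],
    "edge4": [1, 4, 95, 70],
}
print(group_edges(edges))            # [['edge1', 'edge2'], ['edge3', 'edge4']]
```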
{"title":"GHPFL: Advancing Personalized Edge-Based Learning Through Optimized Bandwidth Utilization","authors":"Kaiwei Mo;Wei Lin;Jiaxun Lu;Chun Jason Xue;Yunfeng Shao;Hong Xu","doi":"10.1109/TCC.2025.3540023","DOIUrl":"https://doi.org/10.1109/TCC.2025.3540023","url":null,"abstract":"Federated learning (FL) is increasingly adopted to combine knowledge from clients in training without revealing their private data. In order to improve the performance of different participants, personalized FL has recently been proposed. However, considering the non-independent and identically distributed (non-IID) data and limited bandwidth at clients, the model performance could be compromised. In reality, clients near each other often tend to have similar data distributions. In this work, we train the personalized edge-based model in the client-edge-server FL. While considering the differences in data distribution, we fully utilize the limited bandwidth resources. To make training efficient and accurate at the same time, An intuitive idea is to learn as much useful knowledge as possible from other edges and reduce the accuracy loss incurred by non-IID data. Therefore, we devise Grouping Hierarchical Personalized Federated Learning (GHPFL). In this framework, each edge establishes physical connections with multiple clients, while the server physically connects with edges. It clusters edges into groups and establishes client-edge logical connections for synchronization. This is based on data similarities that the nodes actively identify, as well as the underlying physical topology. We perform a large-scale evaluation to demonstrate GHPFL’s benefits over other schemes.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 2","pages":"473-484"},"PeriodicalIF":5.3,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144232043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Cache Allocation in Multi-Tenant Edge Computing: An Online Model-Based Reinforcement Learning Approach
IF 5.3, Zone 2 (Computer Science), Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS). Pub Date: 2025-02-04. DOI: 10.1109/TCC.2025.3538158
Ayoub Ben-Ameur;Andrea Araldo;Tijani Chahed;György Dán
We consider a Network Operator (NO) that owns Edge Computing (EC) resources, virtualizes them and lets third party Service Providers (SPs) run their services, using the allocated slice of resources. We focus on one specific resource, i.e., cache space, and on the problem of how to allocate it among several SPs in order to minimize the backhaul traffic. Due to confidentiality guarantees, the NO cannot observe the nature of the traffic of SPs, which is encrypted. Allocation decisions are thus challenging, since they must be taken solely based on observed monitoring information. Another challenge is that not all the traffic is cacheable. We propose a data-driven cache allocation strategy, based on Reinforcement Learning (RL). Unlike most RL applications, in which the decision policy is learned offline on a simulator, we assume no previous knowledge is available to build such a simulator. We thus apply RL in an online fashion, i.e., the model and the policy are learned by directly perturbing and monitoring the actual system. Since perturbations generate spurious traffic, we need to limit them. This requires learning to be extremely efficient. To this aim, we devise a strategy that learns an approximation of the cost function, while interacting with the system. We then use such an approximation in a Model-Based RL (MB-RL) to speed up convergence. We prove analytically that our strategy brings cache allocation boundedly close to the optimum and stably remains in such an allocation. We show in simulations that such convergence is obtained within a few minutes. We also study its fairness, its sensitivity to several scenario characteristics and compare it with a method from the state-of-the-art.
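The sketch below captures the flavor of the online, model-based loop: perturb the cache split between two SPs, observe the resulting miss traffic, fit a crude per-SP marginal-miss model from recent observations, and shift cache toward the provider the model says benefits most. This is a toy hill-climber, not the paper's MB-RL algorithm with its approximation and convergence guarantees; the hidden miss curves and all constants are invented, standing in for traffic the operator cannot inspect directly.

```python
# Toy online, model-based cache splitting between two service providers.
# Miss curves, noise, step sizes, and SP names are illustrative assumptions.
import math
import random

CACHE = 100                                        # total cache units to split

def observed_miss(sp, c):                          # hidden ground truth + noise
    rate = {"spA": 0.05, "spB": 0.02}[sp]
    return 100 * math.exp(-rate * c) + random.gauss(0, 0.5)

def fit_slope(samples):
    """Least-squares slope of observed miss traffic vs. allocated cache units."""
    n = len(samples)
    mx = sum(c for c, _ in samples) / n
    my = sum(m for _, m in samples) / n
    denom = sum((c - mx) ** 2 for c, _ in samples) or 1.0
    return sum((c - mx) * (m - my) for c, m in samples) / denom

def allocate(rounds=60, step=5, explore=0.3):
    alloc = {"spA": CACHE // 2, "spB": CACHE - CACHE // 2}
    history = {"spA": [], "spB": []}
    for _ in range(rounds):
        for sp in alloc:                           # monitor the current split
            history[sp].append((alloc[sp], observed_miss(sp, alloc[sp])))
        slope = {sp: fit_slope(h[-15:]) for sp, h in history.items()}
        if random.random() < explore or slope["spA"] == slope["spB"]:
            gainer = random.choice(["spA", "spB"])     # exploratory perturbation
        else:
            gainer = min(slope, key=slope.get)         # steeper miss decrease wins cache
        loser = "spB" if gainer == "spA" else "spA"
        if alloc[loser] >= step:
            alloc[gainer] += step
            alloc[loser] -= step
    return alloc

random.seed(0)
print(allocate())   # the split drifts toward equalizing the learned marginal gains
```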
{"title":"Cache Allocation in Multi-Tenant Edge Computing: An Online Model-Based Reinforcement Learning Approach","authors":"Ayoub Ben-Ameur;Andrea Araldo;Tijani Chahed;György Dán","doi":"10.1109/TCC.2025.3538158","DOIUrl":"https://doi.org/10.1109/TCC.2025.3538158","url":null,"abstract":"We consider a Network Operator (NO) that owns Edge Computing (EC) resources, virtualizes them and lets third party Service Providers (SPs) run their services, using the allocated slice of resources. We focus on one specific resource, i.e., cache space, and on the problem of how to allocate it among several SPs in order to minimize the backhaul traffic. Due to confidentiality guarantees, the NO cannot observe the nature of the traffic of SPs, which is encrypted. Allocation decisions are thus challenging, since they must be taken solely based on observed monitoring information. Another challenge is that not all the traffic is cacheable. We propose a data-driven cache allocation strategy, based on Reinforcement Learning (RL). Unlike most RL applications, in which the decision policy is learned offline on a simulator, we assume no previous knowledge is available to build such a simulator. We thus apply RL in an <italic>online</i> fashion, i.e., the model and the policy are learned by directly perturbing and monitoring the actual system. Since perturbations generate spurious traffic, we thus need to limit perturbations. This requires learning to be extremely efficient. To this aim, we devise a strategy that learns an approximation of the cost function, while interacting with the system. We then use such an approximation in a Model-Based RL (MB-RL) to speed up convergence. We prove analytically that our strategy brings cache allocation boundedly close to the optimum and stably remains in such an allocation. We show in simulations that such convergence is obtained within few minutes. We also study its fairness, its sensitivity to several scenario characteristics and compare it with a method from the state-of-the-art.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 2","pages":"459-472"},"PeriodicalIF":5.3,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144230590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0