
Latest Publications in IEEE Transactions on Cloud Computing

DRKC: Deep Reinforcement Learning Enhanced Microservice Scheduling on Kubernetes Clusters in Cloud-Edge Environment
IF 5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-22 | DOI: 10.1109/TCC.2025.3624031
Jian Jiang;Qianmu Li;Pengchuan Wang;Yunhuai Liu
In the rapidly evolving landscape of cloud-edge computing, efficient resource scheduling across Kubernetes clusters is essential for optimizing microservice deployment. Traditional scheduling methods, e.g., heuristic and meta-heuristic algorithms, often struggle with the dynamic and heterogeneous nature of cloud-edge environments, relying on fixed parameters and lacking adaptability. We propose and implement DRKC, a novel deep reinforcement learning-based approach that addresses these challenges by improving resource utilization and balancing workloads. We model the scheduling problem as a Markov decision process, enabling DRKC to automatically learn optimal scheduling policies from real-time system data without relying on predefined heuristics. The approach synthesizes state information from multiple clusters, using multidimensional resource awareness to respond effectively to changing conditions. We evaluate DRKC's performance on three Kubernetes clusters with thirteen nodes and ninety-six test applications with different resource requirements. Experimental results validate the effectiveness of DRKC in enhancing overall resource efficiency and achieving superior load balancing across cloud-edge environments.
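To make the Markov-decision-process framing concrete, here is a minimal sketch of one scheduling step as state, action, and reward. The state layout (per-node CPU/memory utilization plus the pending pod's demand), the penalty for infeasible placements, and the balance-oriented reward are illustrative assumptions, not the DRKC design.

```python
# Minimal sketch of a scheduling step modeled as a Markov decision process.
# The state layout, action space, and reward shape are illustrative assumptions,
# not the DRKC formulation from the paper.
from dataclasses import dataclass
from typing import List
import numpy as np


@dataclass
class Node:
    cpu_capacity: float
    mem_capacity: float
    cpu_used: float = 0.0
    mem_used: float = 0.0


class SchedulingEnv:
    def __init__(self, nodes: List[Node]):
        self.nodes = nodes

    def state(self, pod_cpu: float, pod_mem: float) -> np.ndarray:
        # State: per-node CPU/memory utilization plus the pending pod's demand.
        util = [(n.cpu_used / n.cpu_capacity, n.mem_used / n.mem_capacity)
                for n in self.nodes]
        return np.array([x for pair in util for x in pair] + [pod_cpu, pod_mem])

    def step(self, action: int, pod_cpu: float, pod_mem: float) -> float:
        # Action: index of the node that receives the pending pod.
        node = self.nodes[action]
        if node.cpu_used + pod_cpu > node.cpu_capacity or \
           node.mem_used + pod_mem > node.mem_capacity:
            return -1.0  # infeasible placement is penalized
        node.cpu_used += pod_cpu
        node.mem_used += pod_mem
        # Reward: negative standard deviation of CPU utilization,
        # so balanced clusters score higher.
        utils = np.array([n.cpu_used / n.cpu_capacity for n in self.nodes])
        return -float(np.std(utils))


if __name__ == "__main__":
    env = SchedulingEnv([Node(4.0, 8.0), Node(8.0, 16.0), Node(2.0, 4.0)])
    print(env.state(0.5, 1.0))
    print(env.step(1, 0.5, 1.0))
```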
Citations: 0
Budget-Feasible Clock Mechanism for Hierarchical Computation Offloading in Edge-Vehicle Collaborative Computing
IF 5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-14 | DOI: 10.1109/TCC.2025.3621432
Xi Liu;Jun Liu;Weidong Li
We consider the edge-vehicle computing system (EVCS), where the combination of edge computing and vehicle computing takes respective advantages to provide various services. We address the problem of computation offloading in EVCS, where computing tasks and sensing tasks with limited budgets are offloaded to edge servers and vehicles. We propose a resource-sharing model in which the sensing resources of one vehicle are shared by multiple tasks. We also consider a vehicle hierarchy, in which vehicles with different equipment accuracy are classified into different tiers; a sensing task has different values and different demands at different tiers. We propose a budget-feasible mechanism based on the clock auction and show that it is strategy-proof and group strategy-proof, which drives the system into an equilibrium. In addition, the proposed mechanism achieves individual rationality, budget balance, and consumer sovereignty. It consists of two algorithms, based on the ideas of dominant resources and iteration, that improve resource utilization and reduce costs. Furthermore, we analyze the approximation ratios of the two allocation algorithms. Experimental results demonstrate that the proposed mechanism achieves near-optimal value and brings higher utility to participants.
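As a toy illustration of budget feasibility, the sketch below uses a proportional-share rule that is a standard building block for budget-feasible auctions: sellers are ranked by ascending cost, the largest prefix whose marginal cost still fits the per-winner budget share wins, and every winner is paid that uniform share (roughly the price at which a descending clock could stop). It ignores the paper's vehicle hierarchies, resource sharing, and approximation analysis.

```python
# Toy proportional-share allocation, a standard building block for
# budget-feasible auctions: take the largest prefix of sellers (ranked by
# ascending cost) whose k-th cost fits the per-winner share budget/k, and pay
# every winner that uniform share. This illustrates budget feasibility only;
# it is not the paper's hierarchical clock mechanism.
from typing import Dict, List, Tuple


def budget_feasible_winners(costs: Dict[str, float], budget: float) -> Tuple[List[str], float]:
    ranked = sorted(costs.items(), key=lambda kv: kv[1])   # ascending private costs
    k = 0
    for i, (_, c) in enumerate(ranked, start=1):
        if c <= budget / i:          # the i-th cheapest seller still fits the share
            k = i
    if k == 0:
        return [], 0.0
    price = budget / k               # uniform payment; total payout never exceeds budget
    return [name for name, _ in ranked[:k]], price


if __name__ == "__main__":
    vehicle_costs = {"v1": 2.0, "v2": 3.5, "v3": 6.0, "v4": 9.0}
    winners, price = budget_feasible_winners(vehicle_costs, budget=10.0)
    print(winners, price)            # ['v1', 'v2'] 5.0 -> total payment 10.0 <= budget
```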
Citations: 0
Lightweight Conditional Privacy-Preserving Scheme for VANET Communications
IF 5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-09-19 | DOI: 10.1109/TCC.2025.3612092
Xiaodong Shen;Jianchang Lai;Jinguang Han;Liquan Chen
As a crucial component of intelligent transportation systems, VANETs are essential for enhancing road safety and enabling efficient traffic management. To ensure secure communication, vehicles often use pseudonyms to protect their identity privacy. However, unconditional anonymity can hinder accountability, making it necessary to provide conditional privacy protection for vehicles. Conditional privacy-preserving technology not only protects the identity privacy of legitimate vehicles but also allows the real identity of malicious vehicles to be traced. Some existing schemes lack conditional privacy protection or incur high computation and communication costs, which makes them unsuitable for resource-constrained VANET environments. Hence, we improve the current Schnorr-based aggregate signature by eliminating bilinear pairing operations and optimizing the aggregation procedure for batch verification, and we propose a lightweight certificateless aggregate signature scheme (ECPP-CLAS) for VANETs. In our scheme, aggregation enables multiple signatures to be compressed into a single aggregated signature and verified simultaneously, thereby reducing communication overhead, and a trusted entity generates the pseudonym for the corresponding vehicle through a special construction to meet the conditional privacy-preserving requirement. The security analysis and performance evaluation show that our proposed scheme meets the expected security objectives and lightweight requirements.
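The sketch below illustrates only the idea of compressing many signatures and checking them with a single verification equation, using plain Schnorr signatures over a tiny multiplicative group; the parameters are deliberately toy-sized and insecure, and the paper's actual ECPP-CLAS construction, certificateless keys, and pseudonym generation are not modeled.

```python
# Toy batch verification of Schnorr signatures over a tiny prime-order subgroup.
# It only illustrates the "aggregate many signatures, verify once" idea from the
# abstract; it is NOT the paper's ECPP-CLAS construction, and the parameters
# below are far too small to be secure.
import hashlib
import secrets

P = 2039          # p = 2q + 1 (toy safe prime)
Q = 1019          # prime subgroup order
G = 4             # generator of the order-Q subgroup


def h(*parts: int) -> int:
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q


def keygen():
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)                 # (secret key, public key)


def sign(x: int, X: int, msg: int):
    k = secrets.randbelow(Q - 1) + 1
    R = pow(G, k, P)
    e = h(R, X, msg)
    s = (k + e * x) % Q
    return R, s


def batch_verify(pubs, msgs, sigs) -> bool:
    # Check g^{sum s_i} == prod(R_i * X_i^{e_i}) mod p in a single comparison.
    s_sum, rhs = 0, 1
    for X, m, (R, s) in zip(pubs, msgs, sigs):
        e = h(R, X, m)
        s_sum = (s_sum + s) % Q
        rhs = (rhs * R * pow(X, e, P)) % P
    return pow(G, s_sum, P) == rhs


if __name__ == "__main__":
    keys = [keygen() for _ in range(3)]
    msgs = [101, 202, 303]
    sigs = [sign(x, X, m) for (x, X), m in zip(keys, msgs)]
    print(batch_verify([X for _, X in keys], msgs, sigs))   # True
```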
Citations: 0
Comments on “Blockchain-Assisted Public-Key Encryption With Keyword Search Against Keyword Guessing Attacks for Cloud Storage”
IF 5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-09-01 | DOI: 10.1109/TCC.2025.3604552
Keita Emura
As a variant of PEKS (Public-key Encryption with Keyword Search), Zhang et al. (IEEE Transactions on Cloud Computing 2021) introduced a secure and efficient PEKS scheme called SEPSE, in which servers issue a server-derived keyword to a sender or a receiver. In this article, we show that keyword information is revealed from the trapdoor when an adversary is allowed to issue server-derived keyword queries twice.
Citations: 0
Forward-Secure Multi-User Graph Searchable Encryption for Exact Shortest Path Queries
IF 5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-08-18 | DOI: 10.1109/TCC.2025.3599412
Weixiao Wang;Qing Fan;Chuan Zhang;Cong Zuo;Liehuang Zhu
The rapid development of cloud computing and the increasing adoption of unstructured data impose higher requirements on cloud servers to deliver advanced query capabilities tailored to protected complex data. To protect outsourced graph privacy and support the shortest path query, a cornerstone of graph computing, various graph searchable encryption (GSE) schemes have been proposed. However, these GSE schemes only support the single-user setting and rarely preserve forward security, limiting data sharing and value extraction. Therefore, we propose a forward-secure GSE scheme that allows multiple users to query the exact shortest path. Specifically, our encryption structure seamlessly combines the randomizable distributed key-homomorphic pseudorandom function (RDPRF) for multi-user authentication and reduces database update costs. We then build a dual-server architecture with a secure equality-test protocol for queries. To our knowledge, our GSE scheme is the first to guarantee forward security without a trusted proxy while supporting multi-user exact shortest path queries. We formalize the leakage functions and model the dynamic multi-user GSE scheme, and we offer a formal security proof under reasonable leakage. Finally, we conduct experiments on ten real-world graph datasets of different scales and demonstrate the feasibility of our scheme.
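For reference, the plaintext functionality the scheme protects is a standard exact shortest-path query; the sketch below shows that query with Dijkstra's algorithm and makes no attempt at the paper's encrypted index, RDPRF-based authentication, or dual-server protocol.

```python
# Plaintext sketch of the exact shortest-path query that the scheme answers
# over encrypted graphs (standard Dijkstra); the encryption, multi-user
# authentication, and dual-server protocol from the paper are not modeled here.
import heapq
from typing import Dict, List, Tuple

Graph = Dict[str, List[Tuple[str, float]]]   # node -> [(neighbor, edge weight)]


def shortest_path(graph: Graph, src: str, dst: str) -> Tuple[float, List[str]]:
    dist = {src: 0.0}
    prev: Dict[str, str] = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                          # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]


if __name__ == "__main__":
    g: Graph = {"a": [("b", 1.0), ("c", 4.0)], "b": [("c", 2.0), ("d", 6.0)],
                "c": [("d", 3.0)], "d": []}
    print(shortest_path(g, "a", "d"))   # (6.0, ['a', 'b', 'c', 'd'])
```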
Citations: 0
Cloud Load Balancers Need to Stay Off the Data Path
IF 5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-08-04 | DOI: 10.1109/TCC.2025.3595172
Yuchen Zhang;Shuai Jin;Zhenyu Wen;Shibo He;Qingzheng Hou;Yang Song;Zhigang Zong;Xiaomin Wu;Bengbeng Xue;Chenghao Sun;Ku Li;Xing Li;Biao Lyu;Rong Wen;Jiming Chen;Shunmin Zhu
Load balancers (LBs) are crucial in cloud environments, ensuring workload scalability. They route packets destined for a service (identified by a virtual IP address, or VIP) to a group of servers designated to deliver that service, each with its own direct IP address (DIP). Consequently, LBs significantly impact the performance of cloud services and the experience of tenants. Many academic studies focus on specific issues such as designing new load balancing algorithms and developing hardware load balancing devices to enhance the LB’s performance, reliability, and scalability. However, we believe this approach is not ideal for cloud data centers for the following reasons: (i) the increasing demands of users and the variety of cloud service types turn the LB into a bottleneck; and (ii) continually adding machines or upgrading hardware devices can incur substantial costs. In this paper, we propose the Next Generation Load Balancer (NGLB), designed to keep the TCP connection data path off the LB, thereby eliminating the latency overheads and scalability bottlenecks of traditional cloud LBs; the LB only participates in the TCP connection establishment phase. The three key features of our design are: (i) an active address learning model to redirect traffic and bypass the LB, (ii) a multi-tenant isolation mechanism for deployment within multi-tenant Virtual Private Cloud networks, and (iii) a distributed flow control method, known as the hierarchical connection cleaner, designed to ensure the availability of backend resources. The evaluation results demonstrate that NGLB reduces latency by 16% and increases throughput nearly 3×. With the same LB resources, NGLB improves the rate of new connection establishment by 10×. More importantly, five years of operational experience have proven NGLB’s stability for high-bandwidth services.
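The following schematic sketch conveys the "LB only on the setup path" idea: the load balancer chooses a backend (DIP) for a new flow, and the source-side virtual switch learns that VIP-to-DIP mapping so established traffic bypasses the LB. The class names, the hash-based backend selection, and the learning cache are illustrative assumptions rather than the NGLB datapath.

```python
# Schematic sketch of the "LB only on the connection-setup path" idea: the load
# balancer picks a backend (DIP) for a new flow, and the source-side virtual
# switch caches that VIP->DIP mapping so later packets skip the LB entirely.
# Field names and the hash-based selection are illustrative assumptions, not the
# NGLB implementation described in the paper.
import hashlib
from typing import Dict, List, Tuple

FlowKey = Tuple[str, int, str, int]   # (src_ip, src_port, vip, vip_port)


class LoadBalancer:
    def __init__(self, vip: str, dips: List[str]):
        self.vip, self.dips = vip, dips

    def pick_dip(self, flow: FlowKey) -> str:
        # Deterministic flow hash keeps a flow pinned to one backend.
        digest = hashlib.sha256(repr(flow).encode()).digest()
        return self.dips[int.from_bytes(digest[:4], "big") % len(self.dips)]


class VSwitch:
    """Source-side virtual switch that learns VIP->DIP redirects."""
    def __init__(self, lb: LoadBalancer):
        self.lb = lb
        self.learned: Dict[FlowKey, str] = {}

    def forward(self, flow: FlowKey, syn: bool) -> str:
        if syn:                                   # connection setup goes via the LB
            dip = self.lb.pick_dip(flow)
            self.learned[flow] = dip              # learn the mapping for this flow
            return f"via LB -> {dip}"
        return f"direct -> {self.learned[flow]}"  # established traffic bypasses the LB


if __name__ == "__main__":
    lb = LoadBalancer("10.0.0.100", ["192.168.1.10", "192.168.1.11"])
    vs = VSwitch(lb)
    flow = ("172.16.0.5", 52311, "10.0.0.100", 443)
    print(vs.forward(flow, syn=True))
    print(vs.forward(flow, syn=False))
```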
Citations: 0
MKAC: Efficient and Privacy-Preserving Multi-Keyword Ranked Query With Ciphertext Access Control in Cloud Environments
IF 5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-31 | DOI: 10.1109/TCC.2025.3594575
Haiyong Bao;Lu Xing;Honglin Wu;Menghong Guan;Na Ruan;Cheng Huang;Hong-Ning Dai
With the explosion of Big Data in cloud environments, data owners tend to delegate storage and computation to cloud servers. Since cloud servers are generally untrustworthy, data owners often encrypt data before outsourcing it to the cloud. Numerous privacy-preserving schemes for the multi-keyword ranked query have been proposed, but most of these schemes do not support ciphertext access control, which can easily lead to malicious access by unauthorized users, causing serious damage to personal privacy and commercial secrets. To address the above challenges, we propose an efficient and privacy-preserving multi-keyword ranked query scheme (MKAC) that supports ciphertext access control. Specifically, in order to enhance the efficiency of the multi-keyword ranked query, we employ a vantage point (VP) tree to organize the keyword index. Additionally, we develop a VP tree-based multi-keyword ranked query algorithm, which utilizes a pruning strategy to minimize the number of nodes searched. Next, we propose a privacy-preserving multi-keyword ranked query scheme that combines asymmetric scalar-product-preserving encryption with the VP tree. Furthermore, an attribute-based encryption mechanism is used to generate the decryption key based on the query user’s attributes, which is then employed to decrypt the query results and trace any malicious query user who may leak the secret key. Finally, a rigorous analysis of the security of MKAC is conducted. The extensive experimental evaluation shows that the proposed scheme is efficient and practical.
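To ground the index structure, here is a minimal plaintext VP-tree with triangle-inequality pruning for nearest-neighbor search; MKAC runs this kind of ranked search over encrypted vectors with access control, which this sketch does not attempt.

```python
# Minimal plaintext VP-tree with pruned nearest-neighbor search, to illustrate
# the index structure the abstract builds on; the paper runs this kind of
# ranked search over encrypted vectors, which this sketch does not attempt.
import math
import random
from typing import List, Optional, Tuple

Point = Tuple[float, ...]


def dist(a: Point, b: Point) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


class VPNode:
    def __init__(self, points: List[Point]):
        self.vp = points[0]                       # vantage point
        rest = points[1:]
        self.inner: Optional["VPNode"] = None
        self.outer: Optional["VPNode"] = None
        self.mu = 0.0
        if rest:
            d = sorted(dist(self.vp, p) for p in rest)
            self.mu = d[len(d) // 2]              # median distance splits the ball
            inner = [p for p in rest if dist(self.vp, p) <= self.mu]
            outer = [p for p in rest if dist(self.vp, p) > self.mu]
            self.inner = VPNode(inner) if inner else None
            self.outer = VPNode(outer) if outer else None

    def nearest(self, q: Point, best: Tuple[float, Optional[Point]]):
        d = dist(q, self.vp)
        if d < best[0]:
            best = (d, self.vp)
        # Triangle-inequality pruning: skip a child if its ball cannot
        # contain anything closer than the current best distance.
        if d <= self.mu:
            order = [(self.inner, d - best[0] <= self.mu),
                     (self.outer, d + best[0] > self.mu)]
        else:
            order = [(self.outer, d + best[0] > self.mu),
                     (self.inner, d - best[0] <= self.mu)]
        for child, may_contain in order:
            if child and may_contain:
                best = child.nearest(q, best)
        return best


if __name__ == "__main__":
    random.seed(7)
    pts = [tuple(random.random() for _ in range(4)) for _ in range(200)]
    tree = VPNode(pts)
    query = (0.5, 0.5, 0.5, 0.5)
    print(tree.nearest(query, (float("inf"), None)))
```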
Citations: 0
Layer Redundancy Aware DNN Model Repository Planning for Fast Model Download in Edge Cloud
IF 5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-22 | DOI: 10.1109/TCC.2025.3591482
Hongmin Geng;Yuepeng Li;Sheng Wang;Lin Gu;Deze Zeng
The booming development of artificial intelligence (AI) applications has greatly promoted edge intelligence technology. To support latency-sensitive Deep Neural Network (DNN) based applications, the integration of the serverless inference paradigm into edge intelligence has become a widely recognized solution. However, the long time needed to download DNN models from central clouds to edge servers hinders inference performance and calls for establishing a model repository within the edge cloud. This paper first identifies the inherent layer redundancy in DNN models, which can potentially improve the storage efficiency of the model repository in the edge cloud. However, how to exploit this layer redundancy and allocate DNN layers across edge servers with capacitated storage resources so as to reduce model downloading time remains challenging. To address this issue, we first formulate the problem in Quadratic Integer Programming (QIP) form, based on which we propose a randomized-rounding, layer-redundancy-aware DNN model storage planning strategy. Our approach reduces model downloading time by up to 63% compared to state-of-the-art methods, as demonstrated through extensive trace-driven experiments.
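A small illustration of the layer-redundancy observation: if several models share identical layers, the repository only needs one copy of each distinct layer. Hashing serialized weights is an illustrative stand-in for however the paper identifies shared layers, and the QIP placement across capacitated edge servers is not modeled.

```python
# Toy illustration of the "layer redundancy" observation: if the same layer
# (identical weights) appears in several models, a repository only needs to
# store it once. Hashing serialized weights is an illustrative stand-in for
# however the paper actually identifies shared layers.
import hashlib
import numpy as np


def layer_fingerprint(weights: np.ndarray) -> str:
    return hashlib.sha256(weights.tobytes()).hexdigest()


def plan_repository(models: dict) -> dict:
    """models: {model_name: [layer weight arrays]} -> deduplicated layer store."""
    store, manifests = {}, {}
    for name, layers in models.items():
        refs = []
        for w in layers:
            fp = layer_fingerprint(w)
            store.setdefault(fp, w)          # each distinct layer stored once
            refs.append(fp)
        manifests[name] = refs               # model = ordered list of layer refs
    return {"unique_layers": len(store), "manifests": manifests}


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    backbone = [rng.standard_normal((64, 64)) for _ in range(3)]
    head_a, head_b = rng.standard_normal((64, 10)), rng.standard_normal((64, 2))
    models = {"model_a": backbone + [head_a], "model_b": backbone + [head_b]}
    print(plan_repository(models)["unique_layers"])   # 5 unique layers, not 8
```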
Citations: 0
CSCR: A Cross-View Intelligent Scheduling Method Implemented via Cloud Computing Workflow Reduction
IF 5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-22 | DOI: 10.1109/TCC.2025.3591549
Genxin Chen;Jin Qi;Xingjian Zhu;Jialin Hua;Zhenjiang Dong;Yanfei Sun
The surge in the development of artificial intelligence has increased both the complexity of computational tasks and the resource demands in cloud computing scenarios. Intelligent scheduling methods have therefore become a crucial research area. Solving complex scheduling problems requires observing as many problem features and long-sequence decisions as possible. To address the workflow scheduling problem under the limited capabilities of models, this article first formulates the workflow reduction and cross-view workflow scheduling problems, describing the optimization objectives and constraints of each. Second, it proposes CSCR, a cross-view intelligent scheduling method implemented via cloud computing workflow reduction, comprising a workflow reduction sorting algorithm (Task-priority ranker), an intelligent reduction algorithm (Workflow view-transformer), and a cross-view intelligent scheduling algorithm (Joint-scheduler). We also propose an intelligent scheduling architecture under the workflow reduction paradigm. By reducing the workflow, we provide multiple views that support the decision-making processes of deep reinforcement learning-based scheduling models and coordinate workflow views before and after the reduction step to achieve cross-view joint scheduling. Experimental results show that CSCR outperforms four other algorithms by at least 42.1%, 43.2%, and 33.3% on three workflow reduction indicators, significantly improving the effect of the employed scheduling model.
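As a generic illustration of task-priority ranking over a workflow DAG (not the CSCR Task-priority ranker itself), the sketch below computes a HEFT-style upward rank: a task's priority is its own cost plus the heaviest downstream chain, so tasks on long critical paths are ordered first.

```python
# Illustrative upward-rank priority over a workflow DAG, in the spirit of a
# "task-priority ranker": a task's rank is its own cost plus the heaviest
# downstream chain, so long critical paths are scheduled first. This is a
# generic HEFT-style ranking, not the CSCR algorithm itself.
from functools import lru_cache
from typing import Dict, List

# Workflow DAG: task -> list of successor tasks, plus per-task costs.
successors: Dict[str, List[str]] = {
    "t1": ["t2", "t3"], "t2": ["t4"], "t3": ["t4"], "t4": [],
}
cost: Dict[str, float] = {"t1": 2.0, "t2": 5.0, "t3": 1.0, "t4": 3.0}


@lru_cache(maxsize=None)
def upward_rank(task: str) -> float:
    succ = successors[task]
    return cost[task] + (max(upward_rank(s) for s in succ) if succ else 0.0)


if __name__ == "__main__":
    ranked = sorted(successors, key=upward_rank, reverse=True)
    print([(t, upward_rank(t)) for t in ranked])
    # -> [('t1', 10.0), ('t2', 8.0), ('t3', 4.0), ('t4', 3.0)]
```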
Citations: 0
Securing and Sustaining IoT Edge-Computing Architectures Through Nanoservice Integration
IF 5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-14 | DOI: 10.1109/TCC.2025.3588681
Cinthya Celina Tamayo Gonzalez;Ijaz Ahmad;Simone Soderi;Erkki Harjula
The rapid proliferation of the Internet of Things (IoT) and edge computing devices calls for solutions that deliver low latency, energy efficiency, and robust security—often challenging goals to balance simultaneously. This paper introduces a novel nanoservice-based framework that dynamically adapts to changing demands while achieving sustainable and secure edge operations. By breaking down functionalities into specialized and narrowly scoped nanoservices that are requested only as needed and eliminated when idle, the approach significantly reduces latency and energy usage compared to conventional, more static methods. Moreover, integrating a Zero-Trust Architecture (ZTA) ensures that every component—computational or security-related—is continuously verified and restricted through strict access controls and micro-segmentation. This framework’s adaptability extends uniformly to all nanoservices, including those providing security features, thereby maintaining strong protective measures even as workloads and network conditions evolve. Experimental evaluations on IoT devices under varying workloads demonstrate that the proposed approach significantly reduces energy consumption and latency while maintaining security and scalability. These results underscore the potential for an integrated, flexible model that simultaneously addresses energy efficiency, performance, and security—an essential trifecta in future edge computing environments.
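A minimal sketch of the two ideas highlighted in the abstract: nanoservices are instantiated only when a request arrives and reclaimed when idle, and every call is re-checked against an explicit policy in zero-trust fashion. The policy table, caller check, and idle reaper below are illustrative assumptions, not the paper's framework.

```python
# Minimal sketch of on-demand nanoservices with per-request verification: a
# nanoservice is created only when a request arrives and removed when idle,
# and every call is checked against an explicit policy (zero-trust style).
# The policy model and runtime API are illustrative assumptions.
import time
from typing import Callable, Dict

POLICY = {"telemetry-read": {"sensor-gw"}}        # nanoservice -> callers allowed


class NanoserviceRuntime:
    def __init__(self, idle_timeout: float = 30.0):
        self.registry: Dict[str, Callable[[dict], dict]] = {}
        self.instances: Dict[str, float] = {}     # name -> last-used timestamp
        self.idle_timeout = idle_timeout

    def register(self, name: str, factory: Callable[[dict], dict]):
        self.registry[name] = factory

    def call(self, caller: str, name: str, payload: dict) -> dict:
        # Zero-trust: verify the caller on every request, never just at startup.
        if caller not in POLICY.get(name, set()):
            raise PermissionError(f"{caller} may not invoke {name}")
        if name not in self.instances:            # instantiate on demand
            print(f"spawning nanoservice {name}")
        self.instances[name] = time.monotonic()
        return self.registry[name](payload)

    def reap_idle(self):
        now = time.monotonic()
        for name, last in list(self.instances.items()):
            if now - last > self.idle_timeout:
                del self.instances[name]          # reclaim resources when idle


if __name__ == "__main__":
    rt = NanoserviceRuntime(idle_timeout=5.0)
    rt.register("telemetry-read", lambda p: {"value": 21.5, "unit": p["unit"]})
    print(rt.call("sensor-gw", "telemetry-read", {"unit": "C"}))
    rt.reap_idle()
```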
Citations: 0