
IEEE Transactions on Cloud Computing: Latest Publications

Edge-Cloud Collaborative UAV Object Detection: Edge-Embedded Lightweight Algorithm Design and Task Offloading Using Fuzzy Neural Network
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-02-05 · DOI: 10.1109/TCC.2024.3361858
Yazhou Yuan;Shicong Gao;Ziteng Zhang;Wenye Wang;Zhezhuang Xu;Zhixin Liu
With the rapid development of artificial intelligence and Unmanned Aerial Vehicle (UAV) technology, AI-based UAVs are increasingly utilized in various industrial and civilian applications. This paper presents a distributed Edge-Cloud collaborative framework for UAV object detection, aiming to achieve real-time and accurate detection of ground moving targets. The framework incorporates an Edge-Embedded Lightweight ($\text{E}^{2}\text{L}$) object detection algorithm with an attention mechanism, enabling real-time object detection on edge-side embedded devices while maintaining high accuracy. Additionally, a decision-making mechanism based on a fuzzy neural network facilitates adaptive task allocation between the edge side and the cloud side. Experimental results demonstrate the improved running rate of the proposed algorithm compared to YOLOv4 on the edge-side NVIDIA Jetson Xavier NX, and the superior performance of the distributed Edge-Cloud collaborative framework over traditional edge computing or cloud computing algorithms in terms of speed and accuracy.
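As a concrete illustration of the task-offloading idea (not the paper's actual fuzzy neural network: the membership functions, rule base, and threshold below are invented for the sketch), a fuzzy edge/cloud decision can be prototyped in a few lines:

```python
# Minimal fuzzy offloading sketch. The membership functions, rule base, and
# decision threshold are illustrative assumptions only.

def tri(x, a, b, c):
    """Triangular membership function over [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def offload_score(edge_load, task_complexity):
    """Fuzzy score in [0, 1]; higher favors offloading to the cloud.
    Both inputs are assumed normalized to [0, 1]."""
    load_low = tri(edge_load, -0.5, 0.0, 0.6)
    load_high = tri(edge_load, 0.4, 1.0, 1.5)
    cplx_low = tri(task_complexity, -0.5, 0.0, 0.6)
    cplx_high = tri(task_complexity, 0.4, 1.0, 1.5)

    # Rule base: (firing strength, consequent), e.g. "IF load is high AND
    # the task is complex THEN offload to the cloud".
    rules = [
        (min(load_high, cplx_high), 1.0),  # offload to cloud
        (min(load_high, cplx_low), 0.6),   # lean toward cloud
        (min(load_low, cplx_high), 0.4),   # lean toward edge
        (min(load_low, cplx_low), 0.0),    # keep on edge
    ]
    num = sum(w * s for w, s in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.5

# Offload when the defuzzified score crosses a tunable threshold.
print(offload_score(edge_load=0.8, task_complexity=0.9) > 0.5)  # True
```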
Citations: 0
Integrating Bayesian Optimization and Machine Learning for the Optimal Configuration of Cloud Systems
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-02-01 · DOI: 10.1109/TCC.2024.3361070
Bruno Guindani;Danilo Ardagna;Alessandra Guglielmi;Roberto Rocco;Gianluca Palermo
Bayesian Optimization (BO) is an efficient method for finding optimal cloud configurations for several types of applications. On the other hand, Machine Learning (ML) can provide helpful knowledge about the application at hand thanks to its predictive capabilities. This work proposes a general approach based on BO, which integrates elements from ML techniques in multiple ways, to find an optimal configuration of recurring jobs running in public and private cloud environments, possibly subject to black-box constraints, e.g., application execution time or accuracy. We test our approach by considering several use cases, including edge computing, scientific computing, and Big Data applications. Results show that our solution outperforms other state-of-the-art black-box techniques, including classical autotuning and BO- and ML-based algorithms, reducing the number of unfeasible executions and the corresponding costs by up to 2–4 times.
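A minimal sketch of the core loop such an approach builds on, assuming Gaussian-process surrogates with Expected Improvement weighted by the probability of satisfying a black-box constraint (the job model and every constant are synthetic stand-ins; the paper integrates ML in further ways on top):

```python
# Constrained Bayesian Optimization sketch with GP surrogates for cost and
# for a black-box execution-time constraint. Requires scikit-learn and scipy.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def run_job(n_vms):
    """Synthetic black box: (cost, execution time) of one recurring job."""
    time = 100.0 / n_vms + rng.normal(0.0, 0.5)  # more VMs -> faster
    cost = n_vms * 1.0 + 0.05 * time             # more VMs -> pricier
    return cost, time

TIME_LIMIT = 12.0                                     # black-box QoS constraint
cands = np.arange(2, 33, dtype=float).reshape(-1, 1)  # candidate VM counts

X = cands[rng.choice(len(cands), 4, replace=False)]   # random bootstrap
obs = [run_job(int(x)) for x in X.ravel()]
costs = np.array([c for c, _ in obs])
times = np.array([t for _, t in obs])

def gp():
    # alpha models observation noise and keeps the fit numerically stable.
    return GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-2,
                                    normalize_y=True)

for _ in range(15):
    mu_c, sd_c = gp().fit(X, costs).predict(cands, return_std=True)
    mu_t, sd_t = gp().fit(X, times).predict(cands, return_std=True)

    ok = times <= TIME_LIMIT
    best = costs[ok].min() if ok.any() else costs.min()
    z = (best - mu_c) / np.maximum(sd_c, 1e-9)
    ei = (best - mu_c) * norm.cdf(z) + sd_c * norm.pdf(z)  # Expected Improvement
    p_feas = norm.cdf((TIME_LIMIT - mu_t) / np.maximum(sd_t, 1e-9))
    x_next = float(cands[int(np.argmax(ei * p_feas)), 0])

    c, t = run_job(int(x_next))
    X = np.vstack([X, [[x_next]]])
    costs = np.append(costs, c)
    times = np.append(times, t)

ok = times <= TIME_LIMIT
print("best feasible config:", int(X[ok][np.argmin(costs[ok]), 0]), "VMs")
```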
Citations: 0
DRL-Based Contract Incentive for Wireless-Powered and UAV-Assisted Backscattering MEC System
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-01-31 · DOI: 10.1109/TCC.2024.3360443
Che Chen;Shimin Gong;Wenjie Zhang;Yifeng Zheng;Yeo Chai Kiat
Mobile edge computing (MEC) is viewed as a promising technology to address the challenges of intensive computing demands in hotspots (HSs). In this article, we consider an unmanned aerial vehicle (UAV)-assisted backscattering MEC system. The UAVs can fly from parking aprons to HSs, providing energy to HSs via RF beamforming and collecting data from wireless users in HSs through backscattering. We aim to maximize the long-term utility of all HSs, subject to the stability of the HSs’ energy queues. This problem is a joint optimization of the data offloading decision and contract design that should be adaptive to the users’ random task demands and the time-varying wireless channel conditions. A deep reinforcement learning based contract incentive (DRLCI) strategy is proposed to solve this problem in two steps. First, we use the deep Q-network (DQN) algorithm to update the HSs’ offloading decisions according to the changing network environment. Second, to motivate the UAVs to participate in resource sharing, a contract specific to each type of UAV is designed, utilizing the Lagrangian multiplier method to approach the optimal contract. Simulation results show the feasibility and efficiency of the proposed strategy, demonstrating better performance than the natural DQN and Double-DQN algorithms.
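The DQN step of such a strategy can be sketched as follows; the state features, action space, and reward here are placeholder assumptions rather than the paper's exact formulation (requires PyTorch):

```python
# Minimal DQN update for the offloading decision (sketch only).

import random
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 4, 2, 0.99  # action 0: compute locally, 1: offload

def make_net():
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, N_ACTIONS))

q_net, target_net = make_net(), make_net()
target_net.load_state_dict(q_net.state_dict())   # periodically re-synced
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def select_action(state, epsilon=0.1):
    """Epsilon-greedy action for the current network state."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def dqn_update(batch):
    """One TD update from a replay batch of (s, a, r, s_next, done) tensors."""
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * (1 - done) * target_net(s_next).max(1).values
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)

# Toy usage with random transitions:
s = torch.randn(32, STATE_DIM); a = torch.randint(0, N_ACTIONS, (32,))
r = torch.randn(32); s2 = torch.randn(32, STATE_DIM); d = torch.zeros(32)
print(select_action(torch.randn(STATE_DIM)), dqn_update((s, a, r, s2, d)))
```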
Citations: 0
Efficient Verifiable Cloud-Assisted PSI Cardinality for Privacy-Preserving Contact Tracing
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-01-30 · DOI: 10.1109/TCC.2024.3360098
Yafeng Chen;Axin Wu;Yuer Yang;Xiangjun Xin;Chang Song
Private set intersection cardinality (PSI-CA) allows two parties to learn the size of the intersection between two private sets without revealing any other information, a promising technique for addressing privacy concerns in contact tracing. Efficient PSI protocols typically use oblivious transfer, involving multiple rounds of interaction and leading to heavy local computation overheads and protocol delays, especially when interacting with many receivers. Cloud-assisted PSI-CA is a better solution as it relieves participants’ burdens of computation and communication. However, cloud servers may return incorrect or incomplete results for some reason, leading to a correctness issue. To our knowledge, existing cloud-assisted PSI-CA protocols cannot address such a concern. To address this, we propose two specific verifiable cloud-assisted PSI-CA protocols: one based on a two-server protocol and the other on a single-server protocol. Further, we employ Cuckoo hashing to optimize these two protocols, making the receiver's computational costs independent of the size of the sender's set. We also prove the security of the protocols and implement them. Finally, we analyze and discuss their performance, demonstrating that the single-server verifiable PSI-CA protocol does not introduce significant computation or communication costs while adding functionalities.
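For intuition, the basic (non-verifiable, non-cloud-assisted) PSI-CA idea can be sketched with commutative exponentiation in the DDH style; this illustrates only the underlying cardinality primitive, not the authors' construction, and the parameters are toy-sized:

```python
# Two-party PSI-CA sketch via commutative exponentiation (DDH style). The
# paper's cloud-assisted verifiable protocols and Cuckoo-hashing optimization
# are not reproduced here. Real deployments use a prime-order EC group.

import hashlib
import secrets

P = 2**127 - 1  # Mersenne prime, used as a toy modulus

def h(item: str) -> int:
    """Hash an item into the multiplicative group mod P."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

def blind(values, key):
    return {pow(v, key, P) for v in values}

alice_set = {"alice@a.com", "bob@b.com", "carol@c.com"}
bob_set = {"bob@b.com", "carol@c.com", "dave@d.com"}
ka = secrets.randbelow(P - 2) + 1  # Alice's secret exponent
kb = secrets.randbelow(P - 2) + 1  # Bob's secret exponent

# Each side blinds its own hashed items, then the peer blinds them again;
# the doubly blinded values H(x)^(ka*kb) coincide exactly on the intersection.
alice_once = blind({h(x) for x in alice_set}, ka)   # sent to Bob
bob_once = blind({h(x) for x in bob_set}, kb)       # sent to Alice
alice_twice = blind(alice_once, kb)                 # Bob re-blinds
bob_twice = blind(bob_once, ka)                     # Alice re-blinds

print(len(alice_twice & bob_twice))  # cardinality only: 2
```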
Citations: 0
Root Cause Analysis for Cloud-Native Applications
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-01-29 · DOI: 10.1109/TCC.2024.3358823
Bartosz Żurkowski;Krzysztof Zieliński
Root cause analysis (RCA) is a critical component in maintaining the reliability and performance of modern cloud applications. However, due to the inherent complexity of cloud environments, traditional RCA techniques become insufficient in supporting system administrators in daily incident response routines. This article presents an RCA solution specifically designed for cloud applications, capable of pinpointing failure root causes and recreating complete fault trajectories from the root cause to the effect. The novelty of our approach lies in approximating causal symptom dependencies by synergizing several symptom correlation methods that assess symptoms in terms of structural, semantic, and temporal aspects. The solution integrates statistical methods with system structure and behavior mining, offering a more comprehensive analysis than existing techniques. Based on these concepts, in this work, we provide definitions and construction algorithms for RCA model structures used in the inference, propose a symptom correlation framework encompassing essential elements of symptom data analysis, and provide a detailed description of the elaborated root cause identification process. Functional evaluation on a live microservice-based system demonstrates the effectiveness of our approach in identifying root causes of complex failures across multiple cloud layers.
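To make the symptom-correlation idea concrete, here is a minimal sketch that fuses structural and temporal scores into a causal candidate graph; the scoring functions, weights, and example data are illustrative assumptions, not the paper's correlation models:

```python
# Sketch: combine structural and temporal symptom-correlation scores into a
# causal candidate graph and extract root-cause candidates.

from itertools import permutations

symptoms = {
    # symptom -> (component it was observed on, first-seen timestamp)
    "pod_restart":   ("svc-a", 100.0),
    "http_5xx":      ("svc-b", 103.0),
    "latency_spike": ("svc-b", 104.5),
}
depends_on = {("svc-b", "svc-a")}  # structural knowledge: svc-b calls svc-a

def structural(s1, s2):
    """1.0 if a dependency path could carry the fault from s1 to s2."""
    c1, c2 = symptoms[s1][0], symptoms[s2][0]
    return 1.0 if c1 == c2 or (c2, c1) in depends_on else 0.0

def temporal(s1, s2, window=10.0):
    """Decaying score when s1 precedes s2 within the correlation window."""
    dt = symptoms[s2][1] - symptoms[s1][1]
    return max(0.0, 1.0 - dt / window) if dt > 0 else 0.0

edges = []
for s1, s2 in permutations(symptoms, 2):
    t = temporal(s1, s2)
    if t == 0.0:
        continue  # a cause must be observed before its effect
    score = 0.6 * structural(s1, s2) + 0.4 * t  # weighted synergy of methods
    if score > 0.5:
        edges.append((s1, s2, round(score, 2)))

# Root-cause candidates: symptoms with outgoing edges but none incoming.
causes = {s1 for s1, _, _ in edges} - {s2 for _, s2, _ in edges}
print(edges)
print("root-cause candidates:", causes)  # {'pod_restart'}
```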
Citations: 0
Alleviating Congestion via Switch Design for Fair Buffer Allocation in Datacenters
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-01-23 · DOI: 10.1109/TCC.2024.3357595
Ahmed M. Abdelmoniem;Brahim Bensaou
In data-centers, the composite origin and bursty nature of traffic, the small bandwidth-delay product and the tiny switch buffers lead to unusual congestion patterns that are not handled well by traditional end-to-end congestion control mechanisms such as those deployed in TCP. Existing works address the problem by modifying TCP to adapt it to the idiosyncrasies of data-centers. While this is feasible in private environments, it remains almost impossible to achieve practically in public multi-tenant clouds where a multitude of operating systems and thus congestion control protocols co-exist. In this work, we design a simple switch-based active queue management scheme to deal with such congestion issues adequately. Our approach requires no modification to TCP which enables seamless deployment in public data-centers via switch firmware updates. We present a simple analysis to show the stability and effectiveness of our approach, then discuss the real implementations in software and hardware on the NetFPGA platform. Numerical results from ns-2 simulation and experimental results from a small testbed cluster demonstrate the effectiveness of our approach in achieving high overall throughput, good fairness, smaller flow completion times (FCT) for short-lived flows, and a significant reduction in the tail of the FCT distribution.
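For intuition about per-flow fair buffer allocation at a switch, the sketch below implements a classic longest-queue-drop style pushout policy in Python; this is a stand-in illustration of the fairness goal, not the paper's AQM scheme, which is realized in switch firmware and on NetFPGA:

```python
# Longest-queue-drop style fair buffer sharing (classic pushout policy,
# used here only to illustrate per-flow fairness at a shared buffer).

class FairBufferSwitch:
    def __init__(self, capacity_pkts=10):
        self.capacity = capacity_pkts
        self.queue_len = 0
        self.per_flow = {}  # flow_id -> packets currently buffered

    def on_arrival(self, flow_id):
        """Return True if the packet is buffered, False if it is dropped."""
        if self.queue_len < self.capacity:
            self.per_flow[flow_id] = self.per_flow.get(flow_id, 0) + 1
            self.queue_len += 1
            return True
        # Buffer full: an arrival from an under-share flow pushes out a
        # packet of the flow holding the most buffer space.
        active = set(self.per_flow) | {flow_id}
        fair_share = self.capacity / len(active)
        biggest = max(self.per_flow, key=self.per_flow.get)
        if self.per_flow.get(flow_id, 0) < fair_share and biggest != flow_id:
            self.per_flow[biggest] -= 1          # push out (drop) one packet
            self.per_flow[flow_id] = self.per_flow.get(flow_id, 0) + 1
            return True
        return False

# A bursty elephant flow cannot starve a short mouse flow of buffer space.
switch = FairBufferSwitch(capacity_pkts=10)
for _ in range(12):
    switch.on_arrival("elephant")   # last two arrivals are dropped
print(switch.on_arrival("mouse"))   # True: pushout preserves the fair share
```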
Citations: 0
Fault Tolerance Oriented SFC Optimization in SDN/NFV-Enabled Cloud Environment Based on Deep Reinforcement Learning
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-01-23 · DOI: 10.1109/TCC.2024.3357061
Jing Chen;Jia Chen;Kuo Guo;Renkun Hu;Tao Zou;Jun Zhu;Hongke Zhang;Jingjing Liu
In a software defined network/network function virtualization (SDN/NFV)-enabled cloud environment, cloud services can be implemented as service function chains (SFCs), which consist of a series of ordered virtual network functions. However, due to fluctuations in cloud traffic and the lack of knowledge of the cloud computing network configuration, designing an SFC optimization approach that obtains flexible cloud services in a dynamic cloud environment is a pivotal challenge. In this paper, we propose a fault tolerance oriented SFC optimization approach based on deep reinforcement learning. We model the fault tolerance oriented SFC elastic optimization problem as a Markov decision process, in which the reward is modeled as a weighted function that minimizes energy consumption and migration cost while maximizing revenue benefit and load balancing. Then, taking a binary integer programming model as the constraints on cloud service quality, we design optimization approaches for a single-agent double deep Q-network (SADDQN) and a multi-agent DDQN (MADDQN). Among them, MADDQN decentralizes training tasks from the control plane to the data plane to reduce the probability of a single point of failure at the centralized controller. Experimental results show that the designed approaches have better performance. MADDQN can almost reach the upper bound of the theoretical solution obtained by assuming prior knowledge of the dynamics of cloud traffic.
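The weighted reward described above can be sketched as follows; the weights and the load-balance measure are illustrative assumptions, and the paper defines its own formulations for energy, migration cost, revenue benefit, and load balancing:

```python
# Sketch of a weighted reward for the SFC MDP (illustrative constants).

import numpy as np

W = {"energy": 0.3, "migration": 0.2, "revenue": 0.3, "balance": 0.2}

def reward(energy, migration_cost, revenue, node_loads):
    """Higher is better: penalize energy/migration, reward revenue/balance.
    Scalar inputs are assumed pre-normalized to [0, 1]."""
    balance = 1.0 - np.std(node_loads) / (np.mean(node_loads) + 1e-9)
    return (-W["energy"] * energy
            - W["migration"] * migration_cost
            + W["revenue"] * revenue
            + W["balance"] * balance)

print(reward(energy=0.4, migration_cost=0.1, revenue=0.8,
             node_loads=[0.50, 0.60, 0.55]))  # ~0.29
```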
Citations: 0
Reversible Data Hiding in Shared Images With Separate Cover Image Reconstruction and Secret Extraction
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-01-09 · DOI: 10.1109/TCC.2024.3351143
Lizhi Xiong;Xiao Han;Ching-Nung Yang;Yun-Qing Shi
Reversible data hiding is widely utilized for secure communication and copyright protection. Recently, to improve the embedding capacity and visual quality of stego-images, several Partial Reversible Data Hiding (PRDH) schemes have been proposed, but they operate in the plaintext domain. To protect the privacy of the cover image, Reversible Data Hiding in Encrypted Images (RDHEI) techniques are preferred. In addition, full separability of cover image reconstruction and data restoration is an important property that most RDHEI schemes cannot achieve. To solve these issues, this paper proposes a partial and a complete Reversible Data Hiding in Shared Images with Separate Cover Image Reconstruction and Secret Extraction (RDHSI-SRE). In the proposed schemes, the secret data is divided via Secret Sharing (SS). Then, the marked shared images are generated based on the proposed modify-and-recalculate strategy. The receiver can extract the embedded data and reconstruct the image separably using k-out-of-n marked shared images. In the embedding phase of partial RDHSI-SRE (PRDHSI-SRE), pixel values are modified according to the proposed Minimizing-Square-Errors Strategy to achieve high visual quality, while the complete RDHSI-SRE (CRDHSI-SRE) embeds data by modifying random coefficients to achieve reversibility. Experimental results and theoretical analyses demonstrate that the proposed schemes have high embedding performance. Most importantly, the proposed schemes are fault-tolerant and completely separable.
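The Secret Sharing primitive the scheme builds on can be sketched with a standard (k, n) Shamir construction over a prime field; the embedding and modify-and-recalculate strategy of the paper are not reproduced here:

```python
# (k, n) Shamir secret sharing over a prime field (toy field size).
# Needs Python 3.8+ for pow(x, -1, P) modular inverses.

import secrets

P = 2**61 - 1  # prime field modulus (toy choice)

def share(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):  # evaluate the degree-(k-1) polynomial at x
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = share(secret=123456789, k=3, n=5)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares suffice: 123456789
print(reconstruct(shares[1:4]))  # 123456789
```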
Citations: 0
Locality-Aware and Fault-Tolerant Batching for Machine Learning on Distributed Datasets
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-01-09 · DOI: 10.1109/TCC.2024.3351716
Liu Liu;Zhijun Ding;Dazhao Cheng;Xiaobo Zhou
The performance of distributed ML training is largely determined by the workers that generate gradients at the slowest pace, i.e., stragglers. State-of-the-art load balancing approaches assume that each worker stores a complete dataset locally and that the data fetching time can be ignored; they consider only the computation capacity of workers when equalizing the gradient computation time. However, we find that in scenarios of ML on distributed datasets, whether in edge computing or distributed data cache systems, the data fetching time is non-negligible and often becomes the primary cause of stragglers. In this paper, we present LOFT, an adaptive load balancing approach for ML upon distributed datasets at the edge. It aims to balance the time to generate gradients at each worker while ensuring the model accuracy. Specifically, LOFT features locality-aware batching. It builds performance and optimization models upon data fetching and gradient computation time. Leveraging the models, it develops an adaptive scheme based on grid search. Furthermore, it offers Byzantine gradient aggregation upon Ring All-Reduce, which makes it fault-tolerant to Byzantine gradients brought by small batch sizes. Experiments with twelve public DNN models and four open datasets show that LOFT reduces the training time by up to 46%, while reducing the training loss by up to 67% compared to LB-BSP.
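The batching idea admits a simple closed-form sketch if one assumes linear compute cost and static per-worker fetch times (LOFT's actual performance model and grid search are richer): choose batch sizes so that fetch time plus compute time is equal across workers.

```python
# Locality-aware batch sizing sketch: equalize per-worker iteration time
# (fetch + compute) under a fixed global batch.

def balanced_batches(total_batch, fetch_time, rate):
    """fetch_time[i]: seconds to fetch worker i's data; rate[i]: samples/s.

    Solving  T = fetch_i + b_i / rate_i  with  sum(b_i) = total_batch  gives
      b_i = rate_i * (T - fetch_i),  T = (B + sum(r_i * f_i)) / sum(r_i).
    """
    T = (total_batch + sum(r * f for r, f in zip(rate, fetch_time))) / sum(rate)
    # Rounding (and the floor of 1 sample) may shift the total by a few samples.
    return [max(1, round(r * (T - f))) for r, f in zip(rate, fetch_time)]

# Worker 0 computes fastest but fetches remotely; worker 2 has local data.
print(balanced_batches(total_batch=512,
                       fetch_time=[2.0, 1.0, 0.1],
                       rate=[200.0, 150.0, 100.0]))  # -> [76, 207, 228]
```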
Citations: 0
BatOpt: Optimizing GPU-Based Deep Learning Inference Using Dynamic Batch Processing
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-01-08 · DOI: 10.1109/TCC.2024.3350561
Deyu Zhang;Yunzhen Luo;Yaobo Wang;Xiaoyan Kui;Ju Ren
Deep learning (DL) has been applied in billions of mobile devices due to its astonishing performance in image, text, and audio processing. However, limited by the computing capability of mobile devices, a large number of DL inference tasks need to be offloaded to edge or cloud servers, which leaves even powerful GPU servers struggling to ensure quality of service (QoS). To better utilize the highly parallel computing architecture of GPUs to improve QoS, we propose BatOpt, a framework that uses dynamic batch processing to strike a good balance between service latency and GPU memory usage in DL inference services. Specifically, BatOpt innovatively models the DL inference service as an $M/G(a,b)/1/N$ queue, taking stochastic task arrivals into consideration, which enables it to predict the service latency accurately in different system states. Furthermore, we propose an optimization algorithm to trade off the service latency and GPU memory usage in different system states by analyzing the queueing model. We have implemented BatOpt on PyTorch and evaluated it on an RTX 2080 GPU using real DL models. BatOpt delivers up to 31x and 4.3x performance boosts in service latency compared to single-input and fixed-batch-size strategies, respectively, and its maximum GPU memory usage is only 0.3x that of a greedy dynamic-batch-size strategy at the same service latency.
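A rough sketch of the latency/memory trade-off behind dynamic batching: pick the smallest batch size that sustains the arrival rate and fits in GPU memory (the linear GPU-time model and all coefficients below are illustrative assumptions; BatOpt derives these quantities from its $M/G(a,b)/1/N$ queueing analysis instead):

```python
# Latency/memory trade-off sketch for dynamic batching (illustrative model).

def throughput(b, t_fixed=0.008, t_per=0.0001):
    """Requests/s a GPU sustains at batch size b (fixed cost amortized)."""
    return b / (t_fixed + t_per * b)

def expected_latency(b, rate):
    """Mean wait to fill the batch plus the batched service time."""
    return (b - 1) / (2.0 * rate) + 0.008 + 0.0001 * b

def pick_batch(rate, mem_per_req_mb=40.0, mem_budget_mb=2000.0):
    """Lowest-latency batch size that sustains `rate` and fits in memory."""
    mem_max = int(mem_budget_mb // mem_per_req_mb)
    feasible = [b for b in range(1, mem_max + 1) if throughput(b) >= rate]
    if not feasible:          # overloaded: best effort with the largest batch
        return mem_max
    return min(feasible, key=lambda b: expected_latency(b, rate))

for rate in (50, 2000, 5000):                             # requests/s
    print(rate, "req/s -> batch size", pick_batch(rate))  # 1, 20, 50
```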
Citations: 0