
Latest Publications: IEEE Transactions on Parallel and Distributed Systems

Rethinking Virtual Machines Live Migration for Memory Disaggregation
IF 6.0 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-08-18 | DOI: 10.1109/TPDS.2025.3597149
Xingzi Yu;Xingguo Jia;Jin Zhang;Yun Wang;Senhao Yu;Zhengwei Qi
Resource underutilization has troubled data centers for several decades. On the CPU front, live migration plays a crucial role in reallocating CPU resources. Nevertheless, contemporary Virtual Machine (VM) live migration methods are burdened by substantial resource consumption. In terms of memory management, disaggregated memory offers an effective solution to enhance memory utilization, but leaves a gap in addressing CPU underutilization. Our findings highlight a considerable opportunity to optimize live migration in the context of disaggregated memory systems. We introduce Anemoi, a resource management system that seamlessly integrates VM live migration with memory disaggregation to address the aforementioned gap. In the context of disaggregated memory, remote memory becomes accessible from destination nodes, effectively eliminating the need for extensive network transmission of memory pages, and thereby significantly reducing migration time. In addition, we propose using memory replicas as an optimization to the live migration system. To mitigate the overhead of potential excessive memory consumption, we develop a dedicated compression algorithm. Our evaluations demonstrate that Anemoi leads to a notable 69% reduction in network bandwidth utilization and an impressive 83% reduction in migration time compared to traditional VM live migration. Additionally, our compression algorithm achieves an outstanding space-saving rate of 83.6%.
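The key observation above is that with disaggregated memory, the destination node can reach the same remote memory pool, so migration moves only per-page ownership metadata instead of page contents. A back-of-the-envelope sketch of that cost difference, with all constants (page size, dirty rate, metadata size) being illustrative assumptions rather than figures from the paper:

```python
# Toy model contrasting pre-copy live migration (ship every page, then
# re-ship dirtied pages per round) with migration over disaggregated
# memory (ship only ownership metadata). All parameters are assumptions.

PAGE_SIZE = 4096  # bytes

def traditional_migration_bytes(num_pages: int, dirty_rounds: int,
                                dirty_fraction: float) -> int:
    """Pre-copy: send all pages once, then re-send dirtied pages each round."""
    total = num_pages * PAGE_SIZE
    dirty = int(num_pages * dirty_fraction)
    for _ in range(dirty_rounds):
        total += dirty * PAGE_SIZE
    return total

def disaggregated_migration_bytes(num_pages: int, metadata_per_page: int = 16) -> int:
    """Destination already reaches remote memory; only metadata moves."""
    return num_pages * metadata_per_page

if __name__ == "__main__":
    pages = 1 << 20  # a 4 GiB VM worth of 4 KiB pages
    trad = traditional_migration_bytes(pages, dirty_rounds=3, dirty_fraction=0.1)
    disagg = disaggregated_migration_bytes(pages)
    print(f"traditional: {trad / 2**30:.2f} GiB, disaggregated: {disagg / 2**20:.2f} MiB")
```

Under these toy numbers the metadata-only transfer is three orders of magnitude smaller, which is the mechanism behind the reported reductions in bandwidth and migration time.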
IEEE Transactions on Parallel and Distributed Systems, vol. 36, no. 11, pp. 2310–2324.
Citations: 0
RL-Based Hybrid CPU Scaling for Soft Deadline Constrained Tasks in Container Clouds
IF 6.0 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-08-08 | DOI: 10.1109/TPDS.2025.3597195
Yepeng Zhang;Haitao Zhang;Huadong Ma
Existing CPU scaling approaches have limitations that can lead to inefficient resource allocation and increased penalty costs for tasks with soft deadlines running in container clouds. First, quota allocation based approaches overlook the gap between the obtainable CPU time and allocated quota, causing inefficient CPU utilization and unexpected task behaviors. Second, core allocation based approaches ignore workload dynamics within decision intervals, potentially increasing contention for CPU time among tasks on the same core. Third, existing approaches lack strategies to allocate more resources to critical tasks that incur higher penalty costs when the node’s capacity is insufficient. This article proposes a reinforcement learning based hybrid CPU scaling approach that allocates quota and cores jointly, aiming to minimize penalty costs for timeouts. Based on the embedding generated from a fine-grained CPU demand series, we allocate CPU quotas and determine a dynamic workload-aware core sharing scheme using an attention mechanism that combines respective demands and global criticality regarding penalty costs. Additionally, we integrate the resource gap, CPU time contention, and penalty costs into the reward function to update our model online. The experimental results show the proposed approach achieves state-of-the-art performance.
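The abstract names three signals folded into the reward function: the quota/obtainable-time gap, CPU-time contention, and timeout penalty costs. A minimal sketch of such a scalar reward, assuming a simple weighted-linear form (the weights and linearity are illustrative, not the paper's actual formulation):

```python
# Sketch of a reward combining the three signals the article lists.
# The linear form and the weight values are assumptions for illustration.

def scaling_reward(allocated_quota: float, obtainable_time: float,
                   contention: float, penalty_cost: float,
                   w_gap: float = 1.0, w_cont: float = 0.5,
                   w_pen: float = 2.0) -> float:
    """Higher is better: penalize quota the task cannot actually use,
    CPU-time contention on shared cores, and soft-deadline penalty costs."""
    gap = max(0.0, allocated_quota - obtainable_time)  # wasted quota
    return -(w_gap * gap + w_cont * contention + w_pen * penalty_cost)
```

A perfectly sized quota with no contention and no timeouts yields the maximum reward of 0; over-allocation, contention, and penalties each drag it down, steering the agent toward tight, deadline-safe allocations.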
IEEE Transactions on Parallel and Distributed Systems, vol. 36, no. 10, pp. 2104–2118.
Citations: 0
Mariana: Exploring Native SkipList Index Design for Disaggregated Memory
IF 6.0 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-08-07 | DOI: 10.1109/TPDS.2025.3596988
Xing Wei;Ke Wang;Yinjun Han;Hao Jin;Yaofeng Tu;Huiqi Hu;Xuan Zhou;Minghao Zhao
Memory disaggregation has emerged as a promising architecture for improving resource efficiency by decoupling the computing and memory resources. But building efficient range indices in such an architecture faces three critical challenges: (1) coarse-grained concurrency control schemes for coordinating concurrent read/write operations with node splitting incur high contention under skewed and write-intensive workloads; (2) existing data layouts fail to balance consistency verification and hardware acceleration via SIMD (Single Instruction Multiple Data); and (3) naive caching schemes struggle to adapt to rapidly changing access patterns. To address these challenges, we propose Mariana, a memory-disaggregated skiplist index that integrates three key innovations. First, it uses a fine-grained (i.e., entry-level) latch mechanism combined with dynamic node resizing to minimize the contention and splitting frequency. Second, it employs a tailored data layout for leaf nodes, which separates keys and values to enable SIMD acceleration while maintaining consistency checks with minimal write overhead. Third, it implements an adaptive caching strategy that tracks node popularity in real time to optimize network bandwidth utilization during index traversal. Experimental results show that Mariana achieves 1.7× higher throughput under write-intensive workloads and reduces the P90 latency by 23% under read-intensive workloads, compared to the state-of-the-art indices on disaggregated memory.
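The separated key/value leaf layout can be illustrated with a small toy: keys live in one packed array (the part a real implementation would scan with SIMD), values in a parallel array touched only on a hit, and a version stamp gives a cheap consistency check. The field names and the even/odd version scheme are illustrative assumptions, not Mariana's actual format:

```python
# Toy leaf node with keys and values stored separately, plus a version
# stamp for consistency checks. Names and versioning are assumptions.

class LeafNode:
    def __init__(self):
        self.keys = []      # contiguous key array (SIMD-scannable in a real impl)
        self.values = []    # index-aligned with keys; touched only on a hit
        self.version = 0    # odd while a write is in progress, even when stable

    def insert(self, key, value):
        self.version += 1   # mark node dirty
        self.keys.append(key)
        self.values.append(value)
        self.keys, self.values = map(list, zip(*sorted(zip(self.keys, self.values))))
        self.version += 1   # stable again

    def lookup(self, key):
        v0 = self.version
        # This linear scan over the packed key array is what SIMD would
        # accelerate; the value array is only read on a match.
        for i, k in enumerate(self.keys):
            if k == key:
                val = self.values[i]
                if self.version == v0 and v0 % 2 == 0:  # consistency check
                    return val
                return self.lookup(key)  # retry after a concurrent write
        return None
```

Keeping keys contiguous is the design point: a vectorized comparison can test a whole cache line of keys per instruction, and the reader never pays for value bytes it does not need.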
IEEE Transactions on Parallel and Distributed Systems, vol. 36, no. 10, pp. 2137–2151.
Citations: 0
Online Container Caching for IoT Data Processing in Serverless Edge Computing
IF 6.0 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-08-05 | DOI: 10.1109/TPDS.2025.3595965
Guopeng Li;Haisheng Tan;Chi Zhang;Xuan Zhang;Zhenhua Han;Guoliang Chen
Serverless edge computing is an efficient way to execute event-driven, short-duration, and bursty IoT data processing tasks on resource-limited edge servers, using on-demand resource allocation and dynamic auto-scaling. In this paradigm, function requests are handled in virtualized environments, e.g., containers. When a function request arrives online, if there is no container in memory to execute it, the serverless platform initializes such a container with non-negligible latency, known as a cold start. Otherwise, the request gets a warm start, which previous studies treat as having no latency. However, our experiments reveal a notable third case, which we call Late-Warm: when a request arrives while its container is still initializing, its latency is lower than a cold start but not zero. In this paper, we study online container caching in serverless edge computing to minimize the total latency, taking Late-Warm and other practical issues into account. We propose OnCoLa, a novel $O(T_{c}K)$-competitive algorithm supporting request relaying on multiple edge servers. Here, $T_{c}$ and $K$ are the maximum container cold start latency and the memory size, respectively. Extensive simulations on two real-world traces demonstrate that OnCoLa consistently outperforms the state-of-the-art container caching algorithms and reduces the latency by 23.33%. Experiments on Raspberry Pi and Jetson Nano show that OnCoLa reduces latency by up to 21.38% compared with the representative lightweight policy.
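The cold / Late-Warm / warm distinction can be captured in a few lines: a request's startup latency depends on whether a container exists and, if it is still initializing, how much initialization time remains. The timestamps and the fixed cold-start latency $T_c$ here are illustrative assumptions:

```python
# Toy latency model for the three arrival cases the paper identifies:
# cold (no container), Late-Warm (container still initializing), and
# warm (container ready). Parameter values are assumptions.
from typing import Optional

def request_latency(arrival: float, init_started: Optional[float],
                    t_cold: float) -> float:
    """init_started is when container initialization began, or None if
    no container exists for this function; t_cold is the full cold-start
    latency T_c."""
    if init_started is None:
        return t_cold                # cold start: pay full initialization
    ready_at = init_started + t_cold
    if arrival >= ready_at:
        return 0.0                   # warm start: container already ready
    return ready_at - arrival        # Late-Warm: wait out the remainder
```

The Late-Warm branch is the paper's observation: its latency interpolates between the warm (0) and cold ($T_c$) extremes, so a caching policy that models only the two extremes misestimates the cost of requests arriving mid-initialization.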
IEEE Transactions on Parallel and Distributed Systems, vol. 36, no. 12, pp. 2524–2536.
Citations: 0
MUCVR: Edge Computing-Enabled High-Quality Multi-User Collaboration for Interactive MVR
IF 6.0 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-08-04 | DOI: 10.1109/TPDS.2025.3595801
Weimin Li;Qin Li;Weihong Tian;Jie Gao;Fan Wu;Jianxun Liu;Ju Ren
Mobile Virtual Reality (MVR), which aims to provide high-quality VR services to mobile devices of end users, has become the latest trend in virtual reality developments. The current MVR solution is to remotely render frame data from a cloud server, while the potential of edge computing in MVR is underexploited. In this paper, we propose a new approach named MUCVR to achieve high-quality interactive MVR collaboration for multiple users by exploiting edge computing. First, we design “vertical” edge–cloud collaboration for VR task rendering, in which foreground interaction is offloaded to an edge server for rendering, while the background environment is rendered by the cloud server. Correspondingly, the VR device of a user is only responsible for decoding and displaying. Second, we propose the “horizontal” multi-user collaboration based on edge–edge cooperation, which synchronizes the data among edge servers. Finally, we implement the proposed MUCVR on an MVR device and the Unity VR application engine. The results show that MUCVR can effectively reduce the MVR service latency, improve the rendering performance, reduce the computing load on the VR device, and, ultimately, improve users’ quality of experience.
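A rough way to see why the "vertical" split helps latency: the latency-sensitive foreground is produced at a nearby edge server while the background comes from the cloud, the two streams are generated in parallel, and the device only decodes and composites. This back-of-the-envelope model, with all timing values being illustrative assumptions rather than MUCVR measurements, captures that structure:

```python
# Toy per-frame latency model for the edge/cloud split rendering the
# paper describes. All timing inputs are illustrative assumptions.

def frame_latency(edge_render: float, edge_rtt: float,
                  cloud_render: float, cloud_rtt: float,
                  decode: float) -> float:
    """Foreground (edge) and background (cloud) streams are produced in
    parallel, so the slower path dominates; the device then decodes and
    composites both streams."""
    fg_path = edge_render + edge_rtt
    bg_path = cloud_render + cloud_rtt
    return max(fg_path, bg_path) + decode
```

In practice the background can often tolerate a stale frame, so the interactive latency a user perceives tracks the edge foreground path; this simplistic max-of-paths model is the pessimistic case where both streams must be fresh.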
IEEE Transactions on Parallel and Distributed Systems, vol. 36, no. 10, pp. 2058–2072.
Citations: 0
Decentralized QoS-Aware Model Inference Using Federated Split Learning for Cloud-Edge Medical Detection
IF 6.0 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-08-01 | DOI: 10.1109/TPDS.2025.3594694
Yishan Chen;Xiangwei Zeng;Huashuai Cai;Qing Xu;Zhiquan Liu
The application of federated learning (FL) has been widely extended to medical domains, including medical image analysis and health monitoring. With the increasing computation power demand on edge devices, split federated learning has emerged as a promising FL architecture. In this work, a home healthcare monitoring scenario is explored. Unlike existing split federated learning studies that primarily focus on model-level optimization, this study considers system-level optimization involving latency, packet error rate, and federated training time. Specifically, a k-means algorithm is presented to select inference nodes, participating training clients, and aggregation servers based on network conditions and data quality. Furthermore, a reinforcement learning method is utilized to allocate the computation and bandwidth resources during inference, training, and aggregation, thereby further improving the quality of service (QoS) and training efficiency. Simulation results demonstrate that the proposed architecture achieves the target accuracy while offering enhanced QoS and reducing FL training time.
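The k-means selection step can be sketched by clustering clients on network-condition features and keeping the best cluster as participants. The feature choice (normalized latency, packet error rate), the value of k, and the "best cluster" criterion below are illustrative assumptions, not the paper's exact design:

```python
# Sketch: cluster clients by (normalized latency, packet error rate)
# with plain k-means, then pick the best cluster as training
# participants. Features, k, and the selection rule are assumptions.
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(pts):
    return tuple(sum(xs) / len(pts) for xs in zip(*pts))

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # random initial centers
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                     # assign to nearest center
            clusters[min(range(k), key=lambda i: dist2(p, centers[i]))].append(p)
        centers = [centroid(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

# Each client: (normalized latency, packet error rate); lower is better.
clients = [(0.1, 0.01), (0.15, 0.02), (0.8, 0.3), (0.9, 0.25), (0.12, 0.015)]
clusters = kmeans(clients, k=2)
# Select the cluster whose centroid has the smallest latency + error sum.
best = min((c for c in clusters if c), key=lambda c: sum(centroid(c)))
```

With these toy features the low-latency, low-loss clients end up in one cluster and become the selected participants, while the poorly connected ones are excluded from that round.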
IEEE Transactions on Parallel and Distributed Systems, vol. 36, no. 10, pp. 2119–2136.
Citations: 0
Dynamic Multiresource Fair Allocation With Time Discount Utility
IF 6.0 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-08-01 | DOI: 10.1109/TPDS.2025.3594741
Bin Deng;Weidong Li
Multiresource allocation mechanisms have been studied in many scenarios. A new dynamic multiresource fair allocation model with time discount utility is proposed in this article, where users can arrive and depart at different time slots. We propose a new any price share time discount (APS-TD) mechanism for this model, which accounts for the users’ time discount utility while maintaining desirable properties. We prove that the APS-TD mechanism satisfies cumulative incentive sharing (CSI), i.e., that the cumulative utility of each user is not lower than the cumulative utility generated by evenly allocating the available resources in each time slot; cumulative strategyproofness (CSP), where users cannot increase their cumulative utility by falsely reporting their demands in any time slot; cumulative Pareto optimality (CPO), i.e., where no allocation can increase the cumulative utility of one user without reducing the cumulative utility of another user in any time slot; cumulative envy-freeness (CEF), where users who arrive later should not prefer allocations from other users who arrive first in any time slot; time discount share fairness (TDSF), where users with higher time discount values occupy larger resource shares in each time slot unless the utility levels of both users are generated by evenly allocating resources; and bottleneck fairness (BF), where the allocation should satisfy max-min fairness with respect to the bottleneck resources contained in each time slot. We run the APS-TD mechanism on Alibaba trace-driven data to demonstrate the performance enhancement achieved by our proposed mechanism over the existing mechanism extensions. The results show that the APS-TD mechanism is superior to hybrid multiresource fairness (H-MRF) and stateful dominant resource fairness (SDRF) in many ways.
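The properties listed above are all stated over a cumulative time-discounted utility. A minimal sketch of that quantity and of a CSI-style check against the even-split baseline, assuming a geometric discount and a linear per-slot utility (both assumptions for illustration, not the paper's exact model):

```python
# Sketch of cumulative discounted utility and a CSI-style comparison
# against evenly splitting each slot's resources. The geometric discount
# and linear utility are illustrative assumptions.

def cumulative_utility(per_slot_utilities, discount):
    """U = sum over active slots t of discount^t * u_t."""
    return sum(u * discount ** t for t, u in enumerate(per_slot_utilities))

def satisfies_csi(allocated, even_split, discount):
    """Cumulative incentive sharing: the mechanism's allocation must give
    the user at least the cumulative utility of the even split."""
    return cumulative_utility(allocated, discount) >= \
           cumulative_utility(even_split, discount)
```

For example, a user with discount 0.9 who receives per-slot utilities [4, 2, 1] accumulates 4 + 2·0.9 + 1·0.81 = 6.61, beating the even split [2, 2, 2] at 5.42, so CSI holds for this user. The discount is also what drives TDSF: a higher discount value makes later slots worth more, justifying a larger resource share.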
IEEE Transactions on Parallel and Distributed Systems, vol. 36, no. 10, pp. 2089–2103.
Citations: 0
FedEFsz: Fair Cross-Silo Federated Learning System With Error-Bounded Lossy Compression
IF 6.0 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-07-31 | DOI: 10.1109/TPDS.2025.3593896
Zhaorui Zhang;Sheng Di;Benben Liu;Zhuoran Ji;Guanpeng Li;Xiaoyi Lu;Amelie Chi Zhou;Khalid Ayed Alharthi;Jiannong Cao
Cross-Silo federated learning systems have been identified as an efficient approach to scaling DNN training across geographically-distributed data silos to preserve the privacy of the training data. Communication efficiency and fairness are two major issues that must both be satisfied when federated learning systems are deployed in practice. Simultaneously guaranteeing both, however, is exceptionally difficult because simply combining communication reduction and fairness optimization approaches often causes non-converged training or drastic accuracy degradation. To bridge this gap, we propose FedEFsz. On the one hand, it integrates the state-of-the-art error-bounded lossy compressor SZ3 into cross-silo federated learning systems to significantly reduce communication traffic during training. On the other hand, it achieves high fairness (i.e., rather consistent model accuracy and performance across different clients) through a carefully designed heuristic algorithm that can tune the error bound of SZ3 for different clients during training. Extensive experimental results based on a GPU cluster with 65 GPU cards show that FedEFsz improves fairness across different benchmarks by up to 60.88% while reducing communication traffic by up to 315×.
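The core guarantee of error-bounded lossy compression is that every reconstructed value stays within a user-set bound eb of the original. A stripped-down sketch of that idea, quantizing each value to a bin of width 2·eb; this omits SZ3's prediction and entropy-coding stages, and the per-client error-bound tuning heuristic is likewise not shown:

```python
# Minimal SZ-style error-bounded quantizer: each value maps to an integer
# bin index, guaranteeing |x - decompress(compress(x))| <= eb. This is a
# sketch of the principle only; SZ3's prediction and entropy coding are
# omitted, and the gradient values below are illustrative.

def compress(values, eb):
    """Quantize each value to a bin of width 2*eb (bin index as code)."""
    return [round(v / (2 * eb)) for v in values]

def decompress(codes, eb):
    """Reconstruct the bin centers; error is bounded by eb per value."""
    return [c * 2 * eb for c in codes]

grads = [0.013, -0.002, 0.041, 0.0005]   # toy gradient values
eb = 0.005                               # absolute error bound
restored = decompress(compress(grads, eb), eb)
assert all(abs(a - b) <= eb for a, b in zip(grads, restored))
```

The fairness lever in FedEFsz comes from this eb knob: a looser bound compresses harder (less traffic, more error) and a tighter bound preserves accuracy, so tuning it per client trades bandwidth against each client's model quality.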
IEEE Transactions on Parallel and Distributed Systems, vol. 36, no. 12, pp. 2482-2496, published online 2025-07-31.
Citations: 0
Parallelization of Network Dynamics Computations in Heterogeneous Distributed Environment
IF 6.0 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-07-28 DOI: 10.1109/TPDS.2025.3593154
Oleksandr Sudakov;Volodymyr Maistrenko
This paper addresses the problem of parallelizing computations to study nonlinear dynamics in large networks of non-locally coupled oscillators using heterogeneous computing resources. The proposed approach can be applied to a variety of nonlinear dynamics models with runtime specification of parameters and network topologies. Parallelizing the solution of equations for different network elements is performed transparently and, in contrast to available tools, does not require parallel programming from end-users. The runtime scheduler takes into account the performance of computing and communication resources to reduce downtime and to achieve a quasi-optimal parallelizing speed-up. The proposed approach was implemented, and its efficiency is proven by numerous applications for simulating large dynamical networks with 10³–10⁸ elements described by Hodgkin–Huxley, FitzHugh–Nagumo, and Kuramoto models, for investigating pathological synchronization during Parkinson's disease, analyzing multi-stability, for studying chimera and solitary states in 3D networks, etc. All the above computations may be performed using symmetrical multiprocessors, graphic processing units, and a network of workstations within the same run, and it was demonstrated that near-linear speed-up can be achieved for large networks. The proposed approach is promising for extension to new hardware like edge-computing devices.
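Of the models listed, the Kuramoto system is the simplest to sketch. The toy below is a serial, single-node NumPy simulation of N globally coupled oscillators with the standard order parameter as the synchrony measure; the paper's actual contribution — distributing the per-element updates across heterogeneous resources — is not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt):
    """One explicit-Euler step of the globally coupled Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
    return theta + dt * (omega + K * coupling)

def order_parameter(theta):
    """|r| = 1 means full phase synchrony; |r| near 0 means incoherence."""
    return float(np.abs(np.exp(1j * theta).mean()))

rng = np.random.default_rng(0)
n = 500
theta = rng.uniform(0.0, 2.0 * np.pi, n)   # random initial phases
omega = rng.normal(0.0, 0.05, n)           # narrow natural-frequency spread
for _ in range(2000):                      # integrate to t = 100
    theta = kuramoto_step(theta, omega, K=1.0, dt=0.05)
```

With coupling K far above the frequency spread the population locks, and the order parameter climbs from ~1/√N toward 1. The all-to-all coupling term is O(N²) per step, which is exactly the cost the paper's scheduler spreads over many processors.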
IEEE Transactions on Parallel and Distributed Systems, vol. 36, no. 10, pp. 2030-2044, published online 2025-07-28.
Citations: 0
ELICA: Efficient and Load Balanced I/O Cache Architecture for Hyperconverged Infrastructures
IF 6.0 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2025-07-24 DOI: 10.1109/TPDS.2025.3592275
Mostafa Kishani;Sina Ahmadi;Saba Ahmadian;Reza Salkhordeh;Zdenek Becvar;Onur Mutlu;André Brinkmann;Hossein Asadi
Hyperconverged Infrastructures (HCIs) combine processing and storage elements to meet the requirements of data-intensive applications in performance, scalability, and quality of service. As an emerging paradigm, HCI should couple with a variety of traditional performance improvement approaches such as I/O caching in virtualized platforms. Contemporary I/O caching schemes are optimized for traditional single-node storage architectures and suffer from two major shortcomings for multi-node architectures: a) imbalanced cache space requirement and b) imbalanced I/O traffic and load. This makes existing schemes inefficient in distributing cache resources over an array of separate physical nodes. In this paper, we propose an Efficient and Load Balanced I/O Cache Architecture (ELICA), managing the solid-state drive (SSD) cache resources across HCI nodes to enhance I/O performance. ELICA dynamically reconfigures and distributes the SSD cache resources throughout the array of HCI nodes and also balances the network traffic and I/O cache load by dynamic reallocation of cache resources. To maximize the performance, we further present an optimization problem defined by Integer Linear Programming to efficiently distribute cache resources and balance the network traffic and I/O cache relocations. Our experimental results on a real platform show that ELICA improves quality of service in terms of average and worst-case latency in HCIs by 3.1× and 23%, respectively, compared to the state-of-the-art.
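The paper formulates cache placement as an Integer Linear Program; as a rough stand-in, the sketch below uses a greedy largest-demand-first heuristic (not ELICA's ILP, and with hypothetical per-VM demands and per-node capacities) to spread SSD cache demands across nodes while keeping remaining capacity balanced.

```python
def place_cache(demands, capacities):
    """Greedy stand-in for ELICA's ILP: walk the cache demands from
    largest to smallest and put each one on the node with the most
    remaining SSD capacity, keeping per-node load balanced.
    Returns (placement, remaining): one node index per demand, plus
    the leftover capacity of each node."""
    free = list(capacities)
    placement = [None] * len(demands)
    for idx in sorted(range(len(demands)), key=lambda i: -demands[i]):
        node = max(range(len(free)), key=lambda n: free[n])
        if free[node] < demands[idx]:
            raise ValueError(f"demand {idx} does not fit on any node")
        free[node] -= demands[idx]
        placement[idx] = node
    return placement, free
```

For example, `place_cache([4, 3, 3, 2], [6, 6])` fills both nodes exactly. Unlike the ILP, this heuristic ignores network traffic and relocation cost, which is precisely what the paper's optimization problem adds.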
IEEE Transactions on Parallel and Distributed Systems, vol. 36, no. 10, pp. 2152-2168, published online 2025-07-24.
Citations: 0