
Journal of Parallel and Distributed Computing: Latest Articles

Topology-aware GPU job scheduling with deep reinforcement learning and heuristics
IF 3.4 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-10-01 | Epub Date: 2025-06-26 | DOI: 10.1016/j.jpdc.2025.105138
Hajer Ayadi , Aijun An , Yiming Shao , Hossein Pourmedheji , Junjie Deng , Jimmy X. Huang , Michael Feiman , Hao Zhou
Deep neural networks (DNNs) have gained popularity in many fields, such as computer vision and natural language processing. However, the increasing size of data and complexity of models have made training DNNs time-consuming. While distributed DNN training using multiple GPUs in parallel is a common solution, it introduces challenges in GPU resource management and scheduling. One key challenge is minimizing communication costs among the GPUs assigned to a DNN training job. High communication costs, arising from factors such as inter-rack or inter-machine data transfers, can lead to hardware bottlenecks and network delays, ultimately slowing down training. Reducing these costs facilitates more efficient data transfer and synchronization, directly accelerating the training process. Although deep reinforcement learning (DRL) has shown promise in GPU resource scheduling, existing methods often lack consideration of hardware topology. Moreover, most proposed GPU schedulers ignore the possibility of combining heuristic and DRL policies. In response to these challenges, we introduce TopDRL, an innovative hybrid scheduler that integrates DRL and heuristic methods to enhance GPU job scheduling. TopDRL uses a multi-branch convolutional neural network (CNN) model for job selection and a heuristic method for GPU allocation. At each time step, the CNN model selects a job, and a heuristic method then selects available GPUs closest to each other from the cluster. Reinforcement learning (RL) is used to train the CNN model to select the job that maximizes throughput-based rewards. Extensive evaluation on datasets with real jobs shows that TopDRL significantly outperforms six baseline schedulers that use heuristics or other DRL models for job selection and resource allocation.
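The GPU-allocation step, picking available GPUs that are closest to one another, can be sketched with a toy distance-minimizing heuristic. The topology encoding, distance values, and function names below are illustrative assumptions, not the paper's actual method:

```python
from itertools import combinations

# Illustrative topology distances: GPUs on the same machine communicate
# most cheaply, then same rack, then across racks. Each free GPU is a
# tuple (id, rack, machine); all values are made-up examples.
def distance(gpu_a, gpu_b):
    _, rack_a, machine_a = gpu_a
    _, rack_b, machine_b = gpu_b
    if rack_a == rack_b and machine_a == machine_b:
        return 0            # same machine
    if rack_a == rack_b:
        return 1            # same rack, different machine
    return 2                # different racks

def pick_gpus(free_gpus, k):
    """Brute-force the k free GPUs with minimal total pairwise distance
    (fine for toy clusters; a real scheduler would use a faster search)."""
    return list(min(
        combinations(free_gpus, k),
        key=lambda grp: sum(distance(a, b) for a, b in combinations(grp, 2)),
    ))

free = [(0, "r0", "m0"), (1, "r0", "m0"), (2, "r0", "m1"), (3, "r1", "m2")]
print(pick_gpus(free, 2))   # GPUs 0 and 1: they share a machine
```

A DRL-selected job would supply `k`; the heuristic then runs independently of the learned policy.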
Journal of Parallel and Distributed Computing, Volume 204, Article 105138.
Citations: 0
Thermal modeling and optimal allocation of avionics safety-critical tasks on heterogeneous MPSoCs
IF 3.4 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-09-01 | Epub Date: 2025-05-20 | DOI: 10.1016/j.jpdc.2025.105107
Zdeněk Hanzálek , Ondřej Benedikt , Přemysl Šůcha , Pavel Zaykov , Michal Sojka
Multi-Processor Systems-on-Chip (MPSoCs) can deliver the high performance needed in many industrial domains, including aerospace. However, their high power consumption, combined with avionics safety standards, brings new thermal management challenges. This paper investigates techniques for offline thermal-aware allocation of periodic tasks on heterogeneous MPSoCs running at a fixed clock frequency, as required in avionics. The goal is to find an assignment of tasks to (i) cores and (ii) temporal isolation windows, as required by the ARINC 653 standard, while minimizing the MPSoC temperature. To achieve that, we formulate a new optimization problem, prove its NP-hardness, and identify a subproblem solvable in polynomial time. Furthermore, we propose and analyze three power models and integrate them within several novel optimization approaches based on heuristics, a black-box optimizer, and Integer Linear Programming (ILP). We perform an experimental evaluation on three popular MPSoC platforms (NXP i.MX8QM MEK, NXP i.MX8QM Ixora, NVIDIA TX2) and observe a difference of up to 5.5 °C among the tested methods (corresponding to a 22% reduction relative to the ambient temperature). We also show that our method integrating the empirical power model with the ILP outperforms the other methods on all tested platforms.
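The flavor of the allocation problem, spreading task power across cores to keep the chip cool, can be illustrated with a brute-force search over task-to-core mappings. The task power values, and the use of peak per-core power as a temperature proxy, are made-up simplifications of the paper's thermal models:

```python
from itertools import product

# Toy proxy: chip temperature is driven by the hottest core, so spread
# the tasks' power draw to minimize the peak per-core load. Real
# thermal-aware allocation (the paper's ILP) also assigns temporal
# isolation windows; this sketch handles cores only.
def best_assignment(task_powers, n_cores):
    """Exhaustively try every task->core mapping (feasible for toy sizes)
    and return the mapping that minimizes the maximum per-core power."""
    best_map, best_peak = None, float("inf")
    for assign in product(range(n_cores), repeat=len(task_powers)):
        loads = [0.0] * n_cores
        for task, core in enumerate(assign):
            loads[core] += task_powers[task]
        peak = max(loads)
        if peak < best_peak:
            best_map, best_peak = assign, peak
    return best_map, best_peak

mapping, peak = best_assignment([3.0, 2.0, 2.0, 1.0], 2)
print(mapping, peak)  # peak 4.0: tasks split as {3.0, 1.0} and {2.0, 2.0}
```

An ILP formulation replaces this exponential sweep with solver-driven search, which is what makes realistic task counts tractable.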
Journal of Parallel and Distributed Computing, Volume 203, Article 105107.
Citations: 0
Flotilla: A scalable, modular and resilient federated learning framework for heterogeneous resources
IF 3.4 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-09-01 | Epub Date: 2025-05-14 | DOI: 10.1016/j.jpdc.2025.105103
Roopkatha Banerjee , Prince Modi , Jinal Vyas , Chunduru Sri Abhijit , Tejus Chandrashekar , Harsha Varun Marisetty , Manik Gupta , Yogesh Simmhan
With recent improvements in mobile and edge computing and rising concerns over data privacy, Federated Learning (FL) has rapidly gained popularity as a privacy-preserving, distributed machine learning methodology. Several FL frameworks have been built for testing novel FL strategies. However, most focus on validating the learning aspects of FL through pseudo-distributed simulation rather than on distributed deployment on real edge hardware, which is needed to meaningfully evaluate the federated aspects from a systems perspective. Current frameworks are also not designed to support asynchronous aggregation, which is gaining popularity, and have limited resilience to client and server failures. We introduce Flotilla, a scalable and lightweight FL framework. It adopts a “user-first” modular design to help rapidly compose various synchronous and asynchronous FL strategies while remaining agnostic to the DNN architecture. It uses stateless clients and a server design that separates out the session state, which is periodically or incrementally checkpointed. We demonstrate the modularity of Flotilla by evaluating five different FL strategies for training five DNN models. We also evaluate client- and server-side fault tolerance on 200+ clients and showcase the framework's ability to fail over within seconds. Finally, we show that Flotilla's resource usage on Raspberry Pis and Nvidia Jetson edge accelerators is comparable to or better than that of three state-of-the-art FL frameworks: Flower, OpenFL, and FedML. It also scales significantly better than Flower to 1000+ clients. This positions Flotilla as a competitive candidate for building novel FL strategies, comparing them uniformly, rapidly deploying them, and performing systems research and optimizations.
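Asynchronous aggregation, which the abstract highlights as poorly supported elsewhere, can be sketched with a staleness-weighted blending rule in the style of FedAsync. The weight schedule, parameter values, and function names are illustrative assumptions, not Flotilla's actual API:

```python
# Sketch of staleness-weighted asynchronous aggregation: each client
# update is blended into the global model with a weight that decays
# with how stale the client's base model version was, so slow or
# intermittently-connected edge devices still contribute safely.
def async_aggregate(global_model, client_model, staleness, base_lr=0.5):
    """Blend one late-arriving client update into the global model
    (models are plain lists of parameters for illustration)."""
    alpha = base_lr / (1.0 + staleness)   # older updates count less
    return [
        (1.0 - alpha) * g + alpha * c
        for g, c in zip(global_model, client_model)
    ]

g = [0.0, 0.0]
g = async_aggregate(g, [1.0, 1.0], staleness=0)   # fresh update: alpha = 0.5
g = async_aggregate(g, [1.0, 1.0], staleness=4)   # stale update: alpha = 0.1
print(g)  # roughly [0.55, 0.55]
```

Because the server never blocks waiting for a full round, client failures only slow convergence rather than stalling training.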
Journal of Parallel and Distributed Computing, Volume 203, Article 105103.
Citations: 0
Front Matter 1 - Full Title Page (regular issues)/Special Issue Title page (special issues)
IF 3.4 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-09-01 | Epub Date: 2025-06-05 | DOI: 10.1016/S0743-7315(25)00089-9
Journal of Parallel and Distributed Computing, Volume 203, Article 105122.
Citations: 0
ConCeal: A Winograd convolution code template for optimising GCU in parallel
IF 3.4 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-09-01 | Epub Date: 2025-05-21 | DOI: 10.1016/j.jpdc.2025.105108
Tian Chen , Yu-an Tan , Thar Baker , Haokai Wu , Qiuyu Zhang , Yuanzhang Li
By minimising arithmetic operations, Winograd convolution substantially reduces the computational complexity of convolution, a pivotal operation in the training and inference stages of Convolutional Neural Networks (CNNs). This study leverages the hardware architecture and capabilities of Shanghai Enflame Technology's AI accelerator, the General Computing Unit (GCU). We develop a code template named ConCeal for Winograd convolution with 3 × 3 kernels, employing a set of interrelated optimisations including task partitioning, memory layout design, and parallelism. These optimisations fully exploit the GCU's computing resources by optimising dataflow and parallelising the execution of tasks on GCU cores, thereby enhancing Winograd convolution. Moreover, the integrated optimisations in the template are efficiently applicable to other operators, such as max pooling. Using this template, we implement and assess the performance of four Winograd convolution operators on the GCU. The experimental results show that ConCeal operators achieve a maximum speedup of 2.04× and an average speedup of 1.49× over the fastest GEMM-based convolution implementations on the GCU. Additionally, the ConCeal operators demonstrate competitive or superior computing-resource utilisation in certain ResNet and VGG convolution layers when compared to cuDNN on an RTX 2080.
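The arithmetic saving behind Winograd convolution is easiest to see in the 1-D F(2,3) case, which produces two outputs of a 3-tap filter using four multiplications instead of six; the 3 × 3 kernels the paper targets nest this same transform in two dimensions. A minimal scalar sketch, checked against plain sliding-window correlation:

```python
# 1-D Winograd F(2,3): two outputs of a 3-tap correlation from four
# inputs with 4 multiplications instead of 6. The 2-D 3x3 case nests
# this transform; this sketch only shows the arithmetic identity.
def winograd_f23(d, g):
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2
    m4 = (d1 - d3) * g2
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct(d, g):
    # Reference: plain sliding-window correlation.
    return [sum(d[i + j] * g[j] for j in range(3)) for i in range(2)]

d, g = [1.0, 2.0, 3.0, 4.0], [0.5, 1.0, -1.0]
print(winograd_f23(d, g), direct(d, g))  # both [-0.5, 0.0]
```

In practice the filter transform (the `m2`/`m3` factors involving only `g`) is precomputed once per kernel, so the per-tile cost is just the four products plus cheap additions.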
Journal of Parallel and Distributed Computing, Volume 203, Article 105108.
Citations: 0
Throughput of Byzantine Broadcast
IF 3.4 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-09-01 | Epub Date: 2025-05-15 | DOI: 10.1016/j.jpdc.2025.105104
Ruomu Hou, Haifeng Yu, Prateek Saxena
Byzantine broadcast is a classic problem in distributed computing with a wide variety of target applications. This work is motivated by the emerging application of Byzantine broadcast in blockchains, which has prompted us to consider the throughput of Byzantine broadcast protocols. To our knowledge, this work is the first to investigate the throughput of Byzantine broadcast. We first show that the throughput of existing Byzantine broadcast protocols is far from ideal. We then obtain a simple upper bound on the throughput of Byzantine broadcast protocols, showing that no protocol can do better than this bound. As the central contribution of this work, we propose a novel Byzantine broadcast protocol called OverlayBB. OverlayBB achieves the optimal throughput and is the very first protocol to do so. Our protocol does not sacrifice other aspects of performance.
Journal of Parallel and Distributed Computing, Volume 203, Article 105104.
Citations: 0
Lock-free simulation algorithm to enhance the performance of sequential and parallel DEVS simulators in shared-memory architectures
IF 3.4 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-09-01 | Epub Date: 2025-05-15 | DOI: 10.1016/j.jpdc.2025.105105
Román Cárdenas , Patricia Arroba , José L. Risco-Martín
This paper presents a new algorithm for the Discrete EVent System Specification (DEVS) formalism that improves the performance of simulating complex systems by reducing the number of iterations through the model components in each simulation step. It also minimizes unnecessary visits to model components by propagating simulation routines only when necessary. Additionally, we provide two parallel versions of this new simulation algorithm that use work-stealing scheduling and avoid locking mechanisms without compromising the validity of the execution in shared-memory architectures. We implemented the proposed algorithms in the xDEVS simulator and evaluated their performance using the DEVStone synthetic benchmark. The results show that the proposed algorithms outperform state-of-the-art alternatives. For computationally intensive models, parallel implementations achieve high parallelism efficiency. Furthermore, they are more resilient to model complexity than the sequential algorithm, showing better performance for complex models even without computational overhead in state transition functions.
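The idea of visiting model components only when necessary can be sketched with a time-ordered event heap that touches only the components whose next internal event is imminent, rather than sweeping every component on every step. The toy components below fire at fixed intervals; this illustrates the scheduling idea, not the paper's actual xDEVS algorithm:

```python
import heapq

# Minimal event-driven loop: keep components in a heap ordered by the
# time of their next internal event and pop only the imminent one, so
# components with distant events are never visited in between.
def simulate(intervals, t_end):
    """intervals: component id -> fixed time between its internal events.
    Returns how many times each component was actually visited."""
    heap = [(dt, cid) for cid, dt in intervals.items()]
    heapq.heapify(heap)
    visits = {cid: 0 for cid in intervals}
    while heap and heap[0][0] <= t_end:
        t, cid = heapq.heappop(heap)            # next imminent component
        visits[cid] += 1                        # stand-in for a transition
        heapq.heappush(heap, (t + intervals[cid], cid))
    return visits

print(simulate({"a": 1.0, "b": 5.0}, t_end=10.0))  # {'a': 10, 'b': 2}
```

Component "b" is visited only twice in ten time units, whereas a naive sweep would have inspected it at every one of the twelve simulation steps.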
Journal of Parallel and Distributed Computing, Volume 203, Article 105105.
Citations: 0
Cognitive behavioural characteristics identification for remote user authentication for cybersecurity
IF 3.4 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-08-01 | Epub Date: 2025-05-01 | DOI: 10.1016/j.jpdc.2025.105102
Ahmet Orun , Emre Orun , Fatih Kurugollu
Cyber-attacks continue to threaten global networks and information infrastructures. The threat grows more destructive and harder to counter by the day, as global networks expand exponentially while security counter-measures remain limited. This situation urgently demands more sophisticated methods and techniques, such as multi-factor authentication and soft biometrics, to respond to evolving threats. This paper is concerned with behavioural soft biometrics and proposes a multidisciplinary remote cognitive observation technique to meet today's cybersecurity needs. The proposed method introduces a non-traditional approach based on cognitive psychology and artificial intelligence. According to contemporary cognitive psychology research, human cognitive processes can be affected by many different personal factors and emotional states that are specific to an individual. These factors mainly include personal perception, memory, decision-making, reasoning, and learning. In this study, we focus on visual (graphical) perception with the support of graphical stimuli environments and investigate how such personal cognitive factors can be exploited within the cybersecurity area for remote user authentication. This technique enables remote access to the cognitive behavioural parameters of an intruder/hacker over an online connection, without any physical contact and regardless of the threat's distance. The results show that cognitive stimuli provide crucial information for a behavioural user authentication system to classify the user as “authentic” or “intruder”. The ultimate goal of this work is to develop a supplementary cognitive cybersecurity tool for “next generation” secure online banking, finance, or trade systems.
{"title":"Cognitive behavioural characteristics identification for remote user authentication for cybersecurity","authors":"Ahmet Orun ,&nbsp;Emre Orun ,&nbsp;Fatih Kurugollu","doi":"10.1016/j.jpdc.2025.105102","DOIUrl":"10.1016/j.jpdc.2025.105102","url":null,"abstract":"<div><div>Nowadays cyber-attacks keep threatening global networks and information infrastructures. Day-by-day, the threat is gradually getting more destructive and harder to counter, as the global networks continue to enlarge exponentially with limited security counter-measures. This occurrence urgently demands more sophisticated methods and techniques, such as multi-factor authentication and soft biometrics to respond to evolving threats. This paper is concerned with behavioural soft biometrics and proposes a multidisciplinary remote cognitive observation technique to meet today’s cybersecurity needs. The proposed method introduces a non-traditional “cognitive psychology” and “artificial intelligence” based approach. According to contemporary cognitive psychology research, human cognitive processes can be affected by many different personal factors and emotional states which are specific to an individual. Those factors mainly include personal perception, memory, decision-making, reasoning, learning, etc. In this study we focus on visual (graphical) perception with the support of graphical stimuli environments and investigate how such personal cognitive factors can be exploited within the cybersecurity area for remote user authentication. This technique enables remote access to the cognitive behavioural parameters of an intruder/hacker without any physical contact via online connection, disregarding the distance of the threat. The results show that cognitive stimuli provide crucial information for a behavioural user authentication system to classify the user as “authentic” or “intruder”. 
The ultimate goal of this work is to develop a supplementary cognitive cyber security tool for “next generation” secure online banking, finance or trade systems.</div></div>","PeriodicalId":54775,"journal":{"name":"Journal of Parallel and Distributed Computing","volume":"202 ","pages":"Article 105102"},"PeriodicalIF":3.4,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143923475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
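The abstract describes classifying a user session as "authentic" or "intruder" from cognitive behavioural parameters collected under graphical stimuli. A minimal illustrative sketch of that idea follows; it is not the authors' system, and the feature vectors (per-stimulus reaction times), the enrolment scheme, and the distance threshold are all hypothetical.

```python
# Minimal sketch (not the paper's method): compare a session's cognitive-response
# feature vector against an enrolled user profile. Feature values and the
# threshold below are invented for illustration.
import math

def enroll(sessions):
    """Average several enrolment sessions into a user profile (mean vector)."""
    n = len(sessions)
    return [sum(s[i] for s in sessions) / n for i in range(len(sessions[0]))]

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def authenticate(profile, session, threshold=0.5):
    """Accept the session if it lies within `threshold` of the enrolled profile."""
    return "authentic" if euclidean(profile, session) <= threshold else "intruder"

# Hypothetical reaction-time vectors (seconds) for three graphical stimuli.
profile = enroll([[0.42, 0.61, 0.55], [0.40, 0.59, 0.57]])
print(authenticate(profile, [0.41, 0.60, 0.56]))   # close to the profile
print(authenticate(profile, [0.95, 1.20, 0.30]))   # far from the profile
```

A deployed system would of course use richer features and a trained classifier; the point here is only the enrol-then-compare structure.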
2-edge-Hamilton-connected dragonfly network
IF 3.4 3区 计算机科学 Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2025-08-01 Epub Date: 2025-04-23 DOI: 10.1016/j.jpdc.2025.105095
Huimei Guo , Rong-Xia Hao , Jie Wu
Dragonfly networks are used in today's supercomputers, so their topological properties are of interest. Let G=(V(G),E(G)) be a graph, and let X be a subset of {uv : u,v∈V(G) and u≠v} such that every component induced by X on V(G) is a path. If |X|≤k and, after adding all edges of X to G, the resulting graph contains a Hamiltonian cycle that includes every edge of X, then G is called k-edge-Hamilton-connected. This property can be used to design and optimize routing and forwarding algorithms: by finding such a Hamiltonian cycle containing specific edges of the network, every node can act as an intermediate node forwarding packets through a specific channel, enabling efficient data transmission and routing. For k=2, deciding whether a graph is k-edge-Hamilton-connected is challenging, as the problem is known to be NP-complete; 2-edge-Hamilton-connectedness is an extension of Hamilton-connectedness. In this paper, we prove that the relative arrangement dragonfly network, a type of dragonfly network whose global connections are constructed from relative arrangements, is 2-edge-Hamilton-connected, which shows that dragonfly networks have strong reliability. In addition, we determine that D(n,h,g) is 1-Hamilton-connected and paired 2-disjoint-path coverable for n≥4 and h≥2.
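The core subroutine behind the routing idea above is finding a Hamiltonian cycle forced to traverse a given set of edges. A brute-force sketch on a toy graph makes this concrete; it is exponential in the number of vertices and only meant to illustrate the definition, not the paper's constructive proof.

```python
# Brute-force illustration of "Hamiltonian cycle containing specific edges":
# enumerate vertex orderings and keep the first cycle that uses every required
# edge. Exponential; for small toy graphs only.
from itertools import permutations

def hamiltonian_cycle_with_edges(n, edges, required):
    """Return a Hamiltonian cycle (as a vertex list) of the graph on vertices
    0..n-1 that uses every edge in `required`, or None if none exists."""
    edge_set = {frozenset(e) for e in edges}
    req = {frozenset(e) for e in required}
    for perm in permutations(range(1, n)):        # fix vertex 0 to kill rotations
        cycle = (0,) + perm
        used = {frozenset((cycle[i], cycle[(i + 1) % n])) for i in range(n)}
        if used <= edge_set and req <= used:
            return list(cycle)
    return None

# Complete graph K5: any set of at most 2 vertex-disjoint required edges can be
# forced into a Hamiltonian cycle, matching the k=2 case of the definition.
k5_edges = [(i, j) for i in range(5) for j in range(i + 1, 5)]
print(hamiltonian_cycle_with_edges(5, k5_edges, [(0, 1), (2, 3)]))
```

For a k-edge-Hamilton-connectedness check one would repeat this for every admissible edge set X with |X|≤k, which is exactly why the decision problem is hard in general.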
Schedule multi-instance microservices to minimize response time under budget constraint in cloud HPC systems
IF 3.4 3区 计算机科学 Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2025-08-01 Epub Date: 2025-04-08 DOI: 10.1016/j.jpdc.2025.105086
Dong Wang , Hong Shen , Hui Tian , Yuanhao Yang
In the emerging microservice-based architecture of cloud HPC systems, a problem of critical importance for system service capability is how to schedule microservices so as to minimize the end-to-end response time of user requests while keeping cost within a specified budget. We address this problem for multi-instance microservices requested by a single application, for which, to our knowledge, no existing result is known. We propose an effective two-stage solution that first allocates budget (resources) to microservices within the budget constraint and then deploys microservice instances on servers to minimize system operational overhead. For budget allocation, we formulate the task as the NP-hard Discrete Time Cost Tradeoff (DTCT) problem, present a linear-programming (LP) based algorithm, and rigorously prove that its worst-case solution is within a factor of 4 of the optimum. For microservice deployment, we establish a mathematical model showing that the problem is harder than NP-hard one-dimensional bin packing, and propose a heuristic algorithm, Least First Mapping, that greedily places microservice instances on the fewest possible servers to minimize system operation cost. Extensive simulations on DAG-based applications of different sizes demonstrate the superior performance of our algorithm compared with existing approaches.
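The deployment stage packs instances onto as few servers as possible, a bin-packing-style task. The sketch below captures that greedy idea with a first-fit-decreasing flavour; the uniform capacities, scalar demands, and tie-breaking are assumptions for illustration, not the paper's exact Least First Mapping algorithm.

```python
# Greedy packing sketch (assumed first-fit-decreasing variant, not the paper's
# exact algorithm): place instance demands onto servers of equal capacity,
# opening a new server only when no open server can fit the instance.
def least_first_mapping(instances, capacity):
    """Return a list of servers, each a list of the demands placed on it."""
    servers = []                                     # open servers
    for demand in sorted(instances, reverse=True):   # big instances first
        for srv in servers:
            if sum(srv) + demand <= capacity:        # first open server that fits
                srv.append(demand)
                break
        else:
            servers.append([demand])                 # open a new server
    return servers

# Seven hypothetical instance demands packed into capacity-10 servers.
placement = least_first_mapping([5, 4, 3, 3, 2, 2, 1], capacity=10)
print(len(placement), placement)
```

Using fewer servers keeps communicating instances close together, which is the operational-overhead argument the abstract makes for placing instances on the fewest possible servers.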