
Journal of Parallel and Distributed Computing: Latest Publications

Multi-ARCL: Multimodal adaptive relay-based distributed continual learning for encrypted traffic classification
IF 3.4 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-04-03 | DOI: 10.1016/j.jpdc.2025.105083
Zeyi Li, Minyao Liu, Pan Wang, Wangyu Su, Tianshui Chang, Xuejiao Chen, Xiaokang Zhou
Encrypted Traffic Classification (ETC) using Deep Learning (DL) faces two bottlenecks: homogeneous network traffic representation and ineffective model updates. Currently, multimodal DL combined with Continual Learning (CL) approaches mitigates these problems but overlooks silent applications, whose traffic disappears once guideline violations lead developers to cease operation and maintenance. Specifically, silent applications accelerate the decay of model stability, while new and active applications challenge model plasticity. This paper presents Multi-ARCL, a multimodal adaptive replay-based distributed CL framework for ETC. The framework prioritizes crypto-semantic information from flow payloads, together with flow-level statistical features, to represent network traffic. Additionally, the framework proposes an adaptive replay-based continual learning method that effectively eliminates silent neurons and retrains on new samples plus a limited subset of old ones. Exemplars of silent applications are selectively removed during new-task training. To enhance training efficiency, the framework uses distributed learning to quickly address the stability–plasticity dilemma and to reduce the cost of storing silent applications. Experiments show that Multi-ARCL outperforms state-of-the-art methods, with an accuracy improvement of over 8.64% on the NJUPT2023 dataset.
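The replay mechanism the abstract describes can be pictured with a short sketch. The Python fragment below is a minimal illustration, not the authors' implementation: `ReplayBuffer`, `train_step`, the per-class budget, and the silent-application filtering are all assumed names and parameters.

```python
import random
from collections import defaultdict

class ReplayBuffer:
    """Bounded per-class exemplar memory for replay-style continual learning."""

    def __init__(self, per_class_budget=50):
        self.per_class_budget = per_class_budget
        self.exemplars = defaultdict(list)  # class label -> stored samples

    def add(self, samples, labels):
        for x, y in zip(samples, labels):
            bucket = self.exemplars[y]
            if len(bucket) < self.per_class_budget:
                bucket.append(x)
            else:
                # Replace a random exemplar to keep memory bounded.
                bucket[random.randrange(self.per_class_budget)] = x

    def drop_silent(self, silent_apps):
        # Silent applications no longer generate traffic, so rehearsing their
        # exemplars only wastes storage and hurts stability; discard them.
        for app in silent_apps:
            self.exemplars.pop(app, None)

    def sample(self, k):
        pool = [(x, y) for y, xs in self.exemplars.items() for x in xs]
        return random.sample(pool, min(k, len(pool)))

def continual_update(model, new_data, buffer, silent_apps, train_step):
    """One task update: prune silent classes, then retrain on new + replayed data."""
    buffer.drop_silent(silent_apps)
    replay = buffer.sample(len(new_data))   # a limited subset of old samples
    train_step(model, list(new_data) + replay)
    buffer.add([x for x, _ in new_data], [y for _, y in new_data])

# Tiny usage demo with placeholder "flows":
buf = ReplayBuffer(per_class_budget=2)
buf.add(["flowA1", "flowA2", "flowB1"], ["appA", "appA", "appB"])
buf.drop_silent(["appA"])
print(buf.sample(5))  # only appB exemplars remain
```

Pruning silent classes before sampling is what keeps rehearsal focused on applications that still generate traffic, which matches the storage-cost argument made above.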
Citations: 0
The European master for HPC curriculum
IF 3.4 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-04-03 | DOI: 10.1016/j.jpdc.2025.105081
Pascal Bouvry, Mats Brorsson, Ramon Canal, Aryan Eftekhari, Siegfried Höfinger, Didier Smets, Harald Köstler, Tomáš Kozubek, Ezhilmathi Krishnasamy, Josep Llosa, Alexandra Lukas-Rother, Xavier Martorell, Dirk Pleiter, Ana Proykova, Maria-Ribera Sancho, Olaf Schenk, Cristina Silvano
The use of High-Performance Computing (HPC) is crucial for addressing various grand challenges. While significant investments are made in digital infrastructures that comprise HPC resources, the realisation, operation, and, in particular, the use of such infrastructures critically depend on suitably trained experts. In this paper, we present the results of an effort to design and implement a pan-European reference curriculum for a master's degree in HPC.
Citations: 0
STVAI: Exploring spatio-temporal similarity for scalable and efficient intelligent video inference
IF 3.4 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-04-03 | DOI: 10.1016/j.jpdc.2025.105079
Chuang Li, Heshi Wang, Yanhua Wen, Qingyu Shi, Qinyu Wang, Chunhua Hu, Dongchen Wu
The integration of video data computation and inference is a cornerstone for the evolution of multimodal artificial intelligence (MAI). The extensive adoption and optimization of CNN-based frameworks have significantly improved the accuracy of video inference, yet they pose substantial challenges for real-time and large-scale computational demands. Existing research primarily utilizes the temporal similarity between video frames to reduce redundant computations, but most studies overlook the spatial similarity within the frames themselves. Hence, we propose STVAI, a scalable and efficient method that leverages both spatial and temporal similarities to accelerate video inference. This approach uses a parallel region-merging strategy, which maintains inference accuracy while enhancing the sparsity of the computation matrix. Moreover, we optimize sparse convolutions by utilizing Tensor Cores, which accelerate dense convolution computations according to the sparsity of the tiles. Experimental results demonstrate that STVAI achieves a stable 1.25× speedup over cuDNN implementations, with only a 5% decrease in prediction accuracy, and reaches speedups of up to 1.53×, surpassing existing methods. Our method can be directly applied to various CNN architectures for video inference tasks without retraining the model.
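As a rough illustration of how temporal similarity translates into tile-level sparsity, the sketch below marks only the tiles that changed between consecutive frames for recomputation. The tile size, threshold, and frame shapes are assumptions for illustration; the paper's actual region-merging strategy and Tensor Core scheduling are not reproduced here.

```python
import numpy as np

def changed_tile_mask(prev_frame, cur_frame, tile=16, threshold=4.0):
    """Boolean mask over tiles: True means the tile must be recomputed."""
    h, w = cur_frame.shape[:2]
    th, tw = h // tile, w // tile
    diff = np.abs(cur_frame - prev_frame)
    # Mean absolute difference per (tile x tile) block, averaged over channels.
    per_tile = diff[:th * tile, :tw * tile].reshape(th, tile, tw, tile, -1)
    return per_tile.mean(axis=(1, 3, 4)) > threshold

prev = (np.random.rand(224, 224, 3) * 255).astype(np.float32)
cur = prev.copy()
cur[:32, :32] += 20.0  # only the top-left corner changes
mask = changed_tile_mask(prev, cur)
print(f"recompute {mask.sum()} of {mask.size} tiles")  # 4 of 196
```

Tiles whose mask entry is False can reuse cached outputs, so the denser the unchanged regions, the sparser the convolution workload becomes.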
Citations: 0
MMBypass: Towards efficient multi-modal AI computing with adaptive bypass network
IF 3.4 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-04-03 | DOI: 10.1016/j.jpdc.2025.105078
Yifei Pu, Xinfeng Xia, Xiaofeng Hou, Chi Wang, Cheng Xu, Jiacheng Liu, Jing Wang, Minyi Guo, Jingling Yuan, Chao Li
Multi-modal artificial intelligence systems demonstrate superior performance through cross-modal information fusion and processing mechanisms, surpassing conventional uni-modal architectures. However, the added computational complexity required for processing heterogeneous data streams in multi-modal frameworks results in elevated inference latency compared to uni-modal architectures. This limitation significantly constrains deployment feasibility for real-time and large-scale applications. To address this challenge, we present MMBypass, an adaptive and efficient architecture for multi-modal AI acceleration. Our solution implements intelligent layer-skipping mechanisms through adaptive computational-complexity analysis of multi-modal tasks, reducing latency while maintaining predictive accuracy and mitigating model overfitting in specialized scenarios. The architecture's innovation lies in two aspects: 1) we design bypasses for each uni-modal network in a multi-modal network to perform adaptive computing; 2) we design a guider to dynamically choose the optimal bypasses. Distinct from existing methods, MMBypass maintains broad applicability without requiring domain-specific prerequisites, and it performs significantly better on data samples of varying difficulty. Empirical evaluations demonstrate that our architecture achieves a 44.5% average latency reduction while matching or exceeding baseline accuracy across diverse multi-modal benchmarks.
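A minimal sketch of the bypass idea, assuming a toy per-modality branch: a small "guider" scores input difficulty and decides how many backbone layers to run. The layer counts, the guider design, and all names are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class BypassBranch(nn.Module):
    """One uni-modal branch whose effective depth is chosen adaptively."""

    def __init__(self, dim=128, n_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(n_layers)
        )
        self.guider = nn.Linear(dim, 1)  # scores how "hard" the input is

    def forward(self, x):
        # Batch-mean hardness picks a depth; easy batches exit early.
        # (Per-sample routing would be the natural refinement.)
        hardness = torch.sigmoid(self.guider(x)).mean()
        depth = max(1, int(hardness.item() * len(self.layers)))
        for layer in self.layers[:depth]:
            x = layer(x)
        return x, depth

branch = BypassBranch()
feats, used = branch(torch.randn(4, 128))
print(f"ran {used} of {len(branch.layers)} layers")
```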
Citations: 0
Design of energy-aware sensor networks for climate and pollution monitoring
IF 3.4 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-04-02 | DOI: 10.1016/j.jpdc.2025.105084
Meeniga Vijaya Lakshmi, M. Sri Raghavendra, Maddala Vijaya Lakshmi
The growing concern over climate change and pollution has driven the development of energy-efficient sensor networks for environmental monitoring. This research proposes an energy-aware sensor network using Spanning Tree-Reinforcement Learning (ST-RL) to optimize data accuracy, minimize energy consumption, and extend the network's lifetime. The proposed method achieves significant performance improvements compared to existing approaches. Experimental results demonstrate that ST-RL enhances network lifetime by 28.57%, reduces energy consumption by 41.24%, improves packet delivery ratio by 3.7%, and reduces transmission delay by 10% over traditional methods such as EDAL, FT-EEC, and EAEDAR. The data is collected from multiple environmental sensors, processed using spanning-tree algorithms for optimized connectivity, and refined with reinforcement learning to suppress unnecessary transmissions. The results confirm that the proposed ST-RL technique significantly enhances energy efficiency and network reliability, making it a promising solution for large-scale climate and pollution monitoring applications.
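The two ingredients the abstract combines can be sketched separately: a minimum spanning tree over Euclidean link costs for low-energy routing, and a simple rule standing in for the learned policy that suppresses redundant transmissions. The coordinates, tolerance value, and function names below are assumptions, not the paper's design.

```python
import heapq
import math

def mst_edges(coords):
    """Prim's algorithm over Euclidean link costs; returns the tree's edges."""
    n = len(coords)
    visited, edges = {0}, []
    heap = [(math.dist(coords[0], coords[j]), 0, j) for j in range(1, n)]
    heapq.heapify(heap)
    while len(visited) < n:
        cost, i, j = heapq.heappop(heap)
        if j in visited:
            continue
        visited.add(j)
        edges.append((i, j, round(cost, 3)))
        for k in range(n):
            if k not in visited:
                heapq.heappush(heap, (math.dist(coords[j], coords[k]), j, k))
    return edges

def should_transmit(reading, last_sent, tolerance=0.5):
    # Stand-in for the learned policy: report only when the new reading
    # differs enough from the last transmitted value to matter downstream.
    return abs(reading - last_sent) > tolerance

print(mst_edges([(0, 0), (1, 0), (0, 2), (3, 1)]))
print(should_transmit(21.7, last_sent=21.5))  # False: change within tolerance
```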
Citations: 0
Latency and cost-aware consumer group autoscaling in message broker systems
IF 3.4 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-03-28 | DOI: 10.1016/j.jpdc.2025.105071
Diogo Landau, Nishant Saurabh, Xavier Andrade, Jorge G. Barbosa
Message brokers often facilitate communication between data producers and consumers by adding variable-sized messages to ordered distributed queues. Our goal is to determine the number of consumers and the consumer–partition assignments needed to ensure that the data consumption rate matches the data production rate. We model this problem as a variable item size bin packing problem. As the production rate varies, new consumer–partition assignments are computed, potentially requiring the reallocation of partitions from one consumer to another. During reallocation, data in the queue are not read, leading to increased latency costs. To address this problem, we focus on the multiobjective optimization cost of minimizing the number of consumers and reducing latency. We introduce several heuristic algorithms and compare them to state-of-the-art heuristics. In our experimental setup, the proposed modified worst fit (MWF) heuristic achieves a 48% reduction over best fit decreasing (BFD) while using a similar number of consumers. In addition, MWF achieves a 99th percentile latency of 2.24 seconds, compared with 364.66 seconds for Kafka's approach using the same number of consumers. Alternatively, to obtain a lower 99th percentile latency than our approach does, Kafka requires at least 60% more consumers.
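A worst-fit-style partition assignment of the kind the abstract evaluates can be sketched in a few lines. This generic variant, with invented rates and capacity, is not the paper's exact MWF heuristic: each partition goes to the consumer with the most spare capacity, and a new consumer is opened only when none fits.

```python
import heapq

def assign_partitions(rates, capacity):
    """rates: msg/s per partition; capacity: max msg/s one consumer can read."""
    heap = []  # (-spare_capacity, consumer_id): heap[0] has the most spare
    assignment, n_consumers = {}, 0
    # Worst fit decreasing: place the heaviest partitions first.
    for pid, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
        if heap and -heap[0][0] >= rate:
            spare, cid = heapq.heappop(heap)
            heapq.heappush(heap, (spare + rate, cid))  # spare is negative
        else:
            cid, n_consumers = n_consumers, n_consumers + 1
            heapq.heappush(heap, (-(capacity - rate), cid))
        assignment[pid] = cid
    return assignment, n_consumers

rates = {"p0": 120.0, "p1": 80.0, "p2": 60.0, "p3": 30.0}
print(assign_partitions(rates, capacity=150.0))  # two consumers suffice
```

Spreading load onto the emptiest consumer (rather than the fullest, as best fit does) leaves headroom on every consumer, which helps absorb production-rate spikes without immediate reallocation.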
Citations: 0
Optimizing the layout of embedding BCube into grid architectures
IF 3.4 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-03-27 | DOI: 10.1016/j.jpdc.2025.105070
Paul Immanuel, A. Berin Greeni
The storage, processing, and distribution of enormous volumes of data are made possible by data centers, which are vital components of the contemporary computing infrastructure. The BCube network is a significant type of data center network, developed for modular data centers based on shipping containers. Embedding data center networks into certain topologies offers several benefits, including improved scalability, reduced power consumption, enhanced reliability, and improved overall network performance. Embedding a guest graph into a suitable host graph has significant applications, such as virtualizing Network-on-Chip layouts, porting algorithms, and simulating parallel architectures. A crucial factor that influences the quality of an embedding is the layout. So far, there have been few results on embedding graphs into data center networks, and those results fix data center networks as host graphs with linear arrays and cycles as guest graphs. In this work, we investigate the edge-isoperimetric features of BCube and embed it into linear arrays and grid structures by treating it as the guest graph. To the best of our knowledge, this is the first study on embedding data center networks for minimum layout.
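The quantity such embeddings optimize, the layout (total wirelength) of placing a guest graph on a linear array, can be computed directly. In the sketch below, BCube(n, k) servers are modeled as (k+1)-digit base-n addresses, adjacent when they differ in exactly one digit; this server-centric simplification is an assumption for illustration, not necessarily the paper's exact model.

```python
from itertools import product

def bcube_edges(n, k):
    """Server-level view of BCube(n, k): addresses differing in one digit."""
    nodes = list(product(range(n), repeat=k + 1))
    edges = [(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:]
             if sum(a != b for a, b in zip(u, v)) == 1]
    return nodes, edges

def layout_wirelength(edges, order):
    """Total edge stretch when vertices occupy slots of a linear array."""
    pos = {v: i for i, v in enumerate(order)}
    return sum(abs(pos[u] - pos[v]) for u, v in edges)

nodes, edges = bcube_edges(n=2, k=1)            # 4 servers, forming a 4-cycle
print(layout_wirelength(edges, nodes))          # lexicographic order: 6
```

Minimizing this sum over all orderings is exactly the minimum linear arrangement flavor of the layout problem; edge-isoperimetric bounds give lower bounds on the best achievable value.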
Citations: 0
The (t,k)-diagnosability of Cayley graph generated by 2-tree
IF 3.4 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-03-21 | DOI: 10.1016/j.jpdc.2025.105068
Lulu Yang, Shuming Zhou, Eddie Cheng
Multiprocessor systems, which typically use interconnection networks (or graphs) as underlying topologies, are widely utilized for big data analysis in scientific computing due to the advancements in technologies such as cloud computing, IoT, and social networks. With the dramatic expansion in the scale of multiprocessor systems, the pursuit and optimization of strategies for identifying faulty processors have become crucial to ensuring the normal operation of high-performance computing systems. System-level diagnosis is a process designed to distinguish between faulty processors and fault-free processors in multiprocessor systems. The (t,k)-diagnosis, a generalization of sequential diagnosis, proceeds to identify at least k faulty processors and repair them in each iteration, under the assumption that there are at most t faulty processors, whenever t ≥ k. We show that the Cayley graph generated by a 2-tree is (2^{n−3}, 2n−4)-diagnosable under the PMC model for n ≥ 5, while it is (2^{n−3}(2n−6)/(2n−4), 2n−4)-diagnosable under the MM* model for n ≥ 4. As an empirical case study, the (t,k)-diagnosabilities of the alternating group graph AG_n under the PMC model and the MM* model have been determined.
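To make the two bounds concrete, the snippet below simply evaluates them for a few values of n; the formulas are transcribed from the abstract, and the evaluation is illustrative only.

```python
# Evaluate the stated (t, k)-diagnosability bounds for small n.
for n in range(5, 10):
    pmc_t = 2 ** (n - 3)                              # PMC model, n >= 5
    mm_t = 2 ** (n - 3) * (2 * n - 6) / (2 * n - 4)   # MM* model, n >= 4
    print(f"n={n}: PMC t={pmc_t}, MM* t={mm_t:.2f}, k={2 * n - 4}")
```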
Citations: 0
A knowledge-driven approach to multi-objective IoT task graph scheduling in fog-cloud computing
IF 3.4 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-03-18 | DOI: 10.1016/j.jpdc.2025.105069
Hadi Gholami, Hongyang Sun
Despite the significant growth of the Internet of Things (IoT), this emerging technology has prominent limitations, such as limited processing power and storage. Along with the expansion of IoT networks, the fog-cloud computing paradigm has been developed to optimize the provision of services to IoT users by offloading computations to more powerful processing resources. In this paper, with the aim of jointly optimizing makespan, energy consumption, and cost, we develop a novel automatic three-module algorithm to schedule multiple task graphs offloaded from IoT devices to the fog-cloud environment. Our algorithm combines the Genetic Algorithm (GA) and the Random Forest (RF) classifier, and we call it Hybrid GA-RF (HGARF). Each of the three modules has a distinct responsibility, and they are executed sequentially and repeatedly to extract knowledge from the solution space in the form of IF-THEN rules. The first module generates solutions for the training set using a GA. Here, we introduce a chromosome encoding method and a crossover operator to create diversity across multiple task graphs. By formalizing a bottleneck concept and two associated conditions, we also develop a mutation operator to identify and reduce the workload of certain processing centers. The second module generates rules from the solutions of the training set, employing an RF classifier. Here, in addition to proposing features for constructing decision trees, we develop a format for extracting and recording IF-THEN rules. The third module checks the quality of the generated rules and refines them by predicting the processing resources and removing less important rules from the rule set. Finally, the developed HGARF algorithm automatically determines its termination condition based on the quality of the provided solutions. Experimental results demonstrate that our method improves the objective functions on large task graphs by up to 13.24% compared to state-of-the-art methods.
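The rule-extraction step in the second module can be illustrated with a tree-based classifier: fit it on features of past scheduling decisions and read IF-THEN rules off the root-to-leaf paths. The features and labels below are synthetic stand-ins, and this is a single decision tree rather than the paper's RF pipeline; only the scikit-learn tree traversal is real API.

```python
from sklearn.tree import DecisionTreeClassifier, _tree

X = [[0.2, 3], [0.9, 7], [0.4, 2], [0.8, 6]]   # e.g. [task load, graph depth]
y = ["fog", "cloud", "fog", "cloud"]           # where the task was scheduled
clf = DecisionTreeClassifier(max_depth=2).fit(X, y)

def extract_rules(tree, feature_names, node=0, conds=()):
    """Walk the fitted tree and print one IF-THEN rule per leaf."""
    t = tree.tree_
    if t.feature[node] == _tree.TREE_UNDEFINED:          # leaf: emit a rule
        label = tree.classes_[t.value[node].argmax()]
        print("IF " + " AND ".join(conds or ("TRUE",)) + f" THEN {label}")
        return
    name, thr = feature_names[t.feature[node]], t.threshold[node]
    extract_rules(tree, feature_names, t.children_left[node],
                  conds + (f"{name} <= {thr:.2f}",))
    extract_rules(tree, feature_names, t.children_right[node],
                  conds + (f"{name} > {thr:.2f}",))

extract_rules(clf, ["load", "depth"])
```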
Citations: 0
Data quality management in big data: Strategies, tools, and educational implications
IF 3.4 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-03-13 | DOI: 10.1016/j.jpdc.2025.105067
Thu Nguyen, Hong-Tri Nguyen, Tu-Anh Nguyen-Hoang
This study addresses the critical need for effective Big Data Quality Management (BDQM) in education, a field where data quality has profound implications but remains underexplored. The work systematically progresses from requirement analysis and standard development to the deployment of tools for monitoring and enhancing data quality in big data workflows. The study's contributions are substantiated through five research questions that explore the impact of data quality on analytics, the establishment of evaluation standards, centralized management strategies, improvement techniques, and education-specific BDQM adaptations. By addressing these questions, the research advances both theoretical and practical frameworks, equipping stakeholders with the tools to enhance the reliability and efficiency of data-driven educational initiatives. Integrating Artificial Intelligence (AI) and distributed computing, this research introduces a novel multi-stage BDQM framework that emphasizes data quality assessment, centralized governance, and AI-enhanced improvement techniques. This work underscores the transformative potential of robust BDQM systems in supporting informed decision-making and achieving sustainable outcomes in educational projects. The survey findings highlight the potential for automated data management within big data architectures, suggesting that data quality frameworks can be significantly enhanced by leveraging AI and distributed computing. Additionally, the survey emphasizes emerging trends in big data quality management, specifically (i) automated data cleaning and cleansing and (ii) data enrichment and augmentation.
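As a small illustration of the automated data-quality checks such a BDQM pipeline would run, the pandas sketch below scores completeness, validity, and uniqueness for an invented education dataset; the column names and value ranges are assumptions.

```python
import pandas as pd

df = pd.DataFrame({
    "student_id": [1, 2, 2, 4],
    "grade": [88, None, 95, 130],   # one missing value, one out of range
})

report = {
    "completeness": df.notna().mean().to_dict(),           # share of non-nulls
    "validity_grade": df["grade"].between(0, 100).mean(),  # share in [0, 100]
    "unique_student_id": df["student_id"].is_unique,
}
print(report)
```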
Citations: 0