
Latest publications in Future Generation Computer Systems-The International Journal of Escience

Acceleration offloading for differential privacy protection based on federated learning in edge intelligent controllers
IF 6.2, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-09-12. DOI: 10.1016/j.future.2024.107526
When implementing Federated Learning (FL) on Edge Intelligence Controllers (EICs) in the Industrial Internet of Things (IIoT), it is important to consider the limitations of the EICs' computational capabilities and to address potential privacy concerns. For the efficient and secure implementation of FL on EICs, three key issues require attention: (i) efficient deployment on EICs with limited computational capacity, (ii) avoiding privacy issues that arise from the offloading strategies when offloading is used for acceleration, and (iii) mitigating privacy leaks that may result from disclosed parameters. To address these concerns, this paper proposes a task offloading model called FedOffloading. Employing Deep Reinforcement Learning (DRL) techniques, FedOffloading accelerates EIC training by offloading model training tasks to edge servers (ESs). It uses the Laplace distribution to safeguard the privacy of the offloading strategies. Meanwhile, to prevent privacy breaches caused by disclosed parameters, FedOffloading allows EICs to inject different levels of artificial noise before transmitting training data. Experimental studies conducted on a test platform reveal that, compared to classical FL, FedOffloading can reduce training time by 54.70%, and by up to 78.06% when training larger models. The Security Module effectively protects the offloading strategies, meeting privacy requirements while also minimizing training time. In addition, to prevent privacy leakage caused by EICs, we introduce noise in the parameters disclosed during training, and show that the intermediate activation data is more susceptible to noise.
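The abstract does not give FedOffloading's exact mechanism, but Laplace-based protection of an offloading decision can be sketched with the standard Laplace mechanism from differential privacy (the function names and the clamping step below are illustrative assumptions, not the paper's implementation):

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def privatize_offload_ratio(ratio: float, sensitivity: float, epsilon: float,
                            rng: random.Random) -> float:
    """Mask an offloading ratio with Laplace noise of scale sensitivity/epsilon,
    then clamp the result back into the valid range [0, 1]."""
    noisy = ratio + laplace_noise(sensitivity / epsilon, rng)
    return min(1.0, max(0.0, noisy))

# Smaller epsilon means a larger noise scale and stronger privacy.
print(privatize_offload_ratio(0.7, sensitivity=1.0, epsilon=2.0,
                              rng=random.Random(42)))
```

The trade-off the paper measures (privacy versus training time) shows up here as the choice of epsilon: tighter privacy budgets perturb the offloading strategy more.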
Citations: 0
UAV-IRS-assisted energy harvesting for edge computing based on deep reinforcement learning
IF 6.2, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-09-12. DOI: 10.1016/j.future.2024.107527

In the internet of everything (IoE) era, the proliferation of internet of things (IoT) devices is accelerating rapidly. In particular, smaller devices are increasingly constrained by hardware limitations that affect their computational capacity, communication bandwidth, and battery longevity. Our research explores a multi-device, multi-access edge computing (MEC) environment within small cells to address the challenges posed by the hardware limitations of IoT devices in this environment. We employ wireless power transfer (WPT) to ensure these IoT devices have sufficient energy for task processing. We propose a system architecture in which an intelligent reflective surface (IRS) is carried by an unmanned aerial vehicle (UAV) to enhance communication conditions. For sustainable energy harvesting (EH), we integrate a normal distribution into the objective function. We utilize the softmax deep double deterministic policy gradients (SD3) algorithm, based on deep reinforcement learning (DRL), to optimize the computational and communication capabilities of IoT devices. Simulation experiments demonstrate that our SD3-based EH edge computing (EHEC-SD3) algorithm surpasses existing DRL algorithms in the explored environments, achieving more than 90% in overall optimization and EH performance.
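SD3's defining ingredient is replacing the hard max in the Bellman target with a softmax over action values. A minimal sketch of that operator is below (discrete action values are used purely for illustration; SD3 itself applies the softmax over sampled continuous actions):

```python
import math

def softmax_value(q_values, beta: float) -> float:
    """Softmax value operator: V = sum_i softmax(beta * Q)_i * Q_i.

    beta -> infinity recovers the hard max; beta -> 0 recovers the mean,
    so beta interpolates between greedy and uniform value estimates."""
    m = max(q_values)  # subtract the max to keep the exponentials stable
    weights = [math.exp(beta * (q - m)) for q in q_values]
    z = sum(weights)
    return sum((w / z) * q for w, q in zip(weights, q_values))

q = [1.0, 2.0, 4.0]
# The softmax value lies strictly between the mean (7/3) and the max (4.0).
print(softmax_value(q, beta=1.0))
```

The smoothing reduces the value overestimation that a hard max causes in deterministic policy gradient methods.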

Citations: 0
Split ways: Using GAN watermarking for digital image protection with privacy-preserving split model training
IF 6.2, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-09-12. DOI: 10.1016/j.future.2024.107523

In recent years, the importance of digital data in the Industrial Internet of Things (IIoT) has been receiving increasing attention, accompanied by growing copyright-violation challenges in the transmission and storage of sensitive data. To address this issue, we propose a generative adversarial network (GAN)-based image watermarking scheme with privacy-preserving split model training. In the first stage, we train the model in a split fashion, without the client sharing raw data, to reduce any privacy leakage. In the second stage, we design a GAN-based watermark embedding and extraction network to imperceptibly embed sensitive information while enhancing robustness. Moreover, the sensitive mark is jointly encrypted and compressed before being sent to the server, protecting user confidentiality while reducing bandwidth and storage demands. We tested the proposed scheme on multiple standard datasets such as div2k, CelebA, and Flickr. The results on the div2k dataset show that the proposed method surpasses several state-of-the-art methods, with average PSNR and NC increasing by 47.75% and 26.72%, respectively. Our joint encryption and compression method also outperforms other methods, with average NPCR and UACI increasing by 18.25% and 16.87%, respectively. To the best of our knowledge, this is the first work to explore GAN-based watermarking for digital images in a split learning setting.
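For reference, the two watermarking metrics quoted above, peak signal-to-noise ratio (PSNR, imperceptibility) and normalized correlation (NC, extraction fidelity), can be computed as follows (flat pixel lists are used for brevity):

```python
import math

def psnr(orig, marked, peak: float = 255.0) -> float:
    """PSNR between two equal-length pixel sequences; higher means the
    watermarked image is closer to the original."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, marked)) / len(orig)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

def nc(w_ref, w_ext) -> float:
    """Normalized correlation between reference and extracted watermarks;
    1.0 means a perfect extraction."""
    num = sum(a * b for a, b in zip(w_ref, w_ext))
    den = (math.sqrt(sum(a * a for a in w_ref))
           * math.sqrt(sum(b * b for b in w_ext)))
    return num / den

img = [10, 200, 30, 40]
print(psnr(img, [10, 201, 30, 40]))    # small distortion -> high PSNR
print(nc([1, 0, 1, 1], [1, 0, 1, 1]))  # identical watermark -> NC of ~1
```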

Citations: 0
Blockchain based computing power sharing in urban rail transit: System design and performance improvement
IF 6.2, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-09-12. DOI: 10.1016/j.future.2024.06.021

With the development of urban rail transit (URT), many latency-sensitive and computationally intensive tasks arise. Edge computing can provide low-latency computing services in URT systems. Due to limited computing power resources, edge servers cannot always process all incoming computing tasks in a timely manner when operating independently; they need to collaborate frequently through peer-to-peer offloading. However, it is challenging for a server to select the appropriate computing power resources and corresponding network connections to fulfill its performance and cost requirements. More importantly, edge servers are deployed and managed by different computing departments, putting the task offloading process at risk. We propose a blockchain-based computing power sharing system to achieve secure and efficient computing power sharing in URT systems. The blockchain provides auditing and checking functions to guarantee the security of computing power resource sharing. We further propose a method to optimize the computing power sharing strategy and node selection strategy in the computing power sharing workflow.

The numerical findings reveal that the proposed scheme provides significant improvements in both departmental utility and business processing capability.
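The auditing and checking functions a blockchain provides can be illustrated with a minimal hash chain over sharing records, where altering any record invalidates every later hash (record field names here are hypothetical, not the paper's schema):

```python
import hashlib
import json

def record_hash(prev_hash: str, record: dict) -> str:
    """Hash a sharing record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True).encode() + prev_hash.encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(records):
    """Chain records so each block commits to everything before it."""
    chain, prev = [], "0" * 64  # all-zero genesis hash
    for rec in records:
        h = record_hash(prev, rec)
        chain.append({"record": rec, "hash": h})
        prev = h
    return chain

def verify_chain(chain) -> bool:
    """Recompute every hash; any tampered record breaks verification."""
    prev = "0" * 64
    for block in chain:
        if record_hash(prev, block["record"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain([{"from": "edge-1", "to": "edge-2", "cpu_ms": 120}])
print(verify_chain(chain))  # True
```

An edge server offloading a task would append a record; a department auditing the workflow replays the chain and rejects it on any mismatch.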
Citations: 0
CG-Kit: Code Generation Toolkit for performant and maintainable variants of source code applied to Flash-X hydrodynamics simulations
IF 6.2, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-09-12. DOI: 10.1016/j.future.2024.107511

CG-Kit is a new Code Generation tool-Kit that we have developed as a part of the solution for portability and maintainability for multiphysics computing applications. The development of CG-Kit is rooted in the urgent need created by the shifting landscape of high-performance computing platforms and the algorithmic complexities of a particular large-scale multiphysics application: Flash-X. To efficiently use computing resources on a heterogeneous node, an application must have a map of computation to resources and a mechanism to move the data and computation to the resources according to the map. Most existing performance portability solutions are focused on abstracting the expression of computations so that a unified source code can be specialized to run on different resources. However, such an approach is insufficient for a code like Flash-X, which has a multitude of code components that can be assembled in various permutations and combinations to form different instances of applications. Similar challenges apply to any code that has composability, where a single specified way of apportioning work among devices may not be optimal. Additionally, use cases arise where the optimal control flow of computation may differ for different devices while the underlying numerics remain identical. This combination leads to unique challenges, including handling an existing large code base in Fortran and/or C/C++, subdivision of code into a great variety of units supporting a wide range of physics and numerical methods, different parallelization techniques for distributed and shared memory systems and accelerator devices, and heterogeneity of computing platforms requiring coexisting variants of parallel algorithms. All of these challenges demand that scientific software developers apply existing knowledge about domain applications, algorithms, and computing platforms to determine custom abstractions and granularity for code generation. There is a critical lack of tools to tackle these problems. CG-Kit is designed to fill this gap by providing users with the ability to express their desired control flow and computation-to-resource map in the form of a pseudocode-like recipe. It consists of standalone tools that can be combined into highly specific and, we argue, highly effective portability and maintainability toolchains. Here we present the design of our new tools: parametrized source trees, control flow graphs, and recipes. The tools are implemented in Python and are agnostic to the programming language of the source code targeted for code generation. We demonstrate the capabilities of the toolkit with two examples: first, multithreaded variants of the basic AXPY operation, and second, variants of parallel algorithms within a hydrodynamics solver, called Spark, from Flash-X that operates on block-structured adaptive meshes.
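A toy illustration of generating AXPY (y = a*x + y) variants from a parametrized template, loosely in the spirit of CG-Kit's recipe-driven generation (the template and variant names below are invented for illustration and are not CG-Kit's actual API):

```python
from string import Template

# One kernel template plus a "recipe" choosing among body variants.
AXPY_TEMPLATE = Template("""\
def axpy_${variant}(a, x, y):
    ${body}
""")

VARIANTS = {
    # Allocating variant: returns a new list.
    "serial": "return [a * xi + yi for xi, yi in zip(x, y)]",
    # In-place variant: same numerics, different control flow.
    "inplace": "\n".join([
        "for i in range(len(x)):",
        "        y[i] += a * x[i]",
        "    return y",
    ]),
}

namespace = {}
for name, body in VARIANTS.items():
    source = AXPY_TEMPLATE.substitute(variant=name, body=body)
    exec(source, namespace)  # materialize the generated variant

print(namespace["axpy_serial"](2.0, [1.0, 2.0], [10.0, 20.0]))  # [12.0, 24.0]
```

This captures the key idea stated above: the numerics are identical across variants, while the control flow (and, in CG-Kit's real use, the target device and parallelization) is selected by the recipe.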

Citations: 0
DDPG-AdaptConfig: A deep reinforcement learning framework for adaptive device selection and training configuration in heterogeneity federated learning
IF 6.2, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-09-12. DOI: 10.1016/j.future.2024.107528
Federated Learning (FL) is a distributed machine learning approach that protects user privacy by collaboratively training shared models across devices without sharing their raw personal data. Despite its advantages, FL faces increased convergence time and decreased accuracy due to the heterogeneity of data and systems across devices. Existing methods that address these issues with reinforcement learning often ignore the adaptive configuration of local training hyperparameters to suit varying data characteristics and system resources. Moreover, they frequently overlook the heterogeneous information contained within local model parameters. To address these problems, we propose the DDPG-AdaptConfig framework, based on Deep Deterministic Policy Gradient (DDPG), for adaptive device selection and local training hyperparameter configuration in FL, to speed up convergence and ensure high model accuracy. Additionally, we develop a new actor network that integrates the transformer mechanism to extract heterogeneous information from model parameters, which assists in device selection and hyperparameter configuration. Furthermore, we introduce a clustering-based aggregation strategy to accommodate heterogeneity and prevent performance declines.
Experimental results show that our DDPG-AdaptConfig achieves significant improvements over existing baselines.
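One standard DDPG building block behind frameworks like the one above is the target-network soft update, target <- tau*source + (1 - tau)*target (Polyak averaging), which stabilizes training by moving the target parameters slowly. A minimal sketch over flat parameter lists:

```python
def soft_update(target, source, tau: float):
    """Polyak-average the source parameters into the target copy.

    Small tau (e.g. 0.005) keeps the bootstrapping targets slowly moving,
    which is what makes DDPG-style critics stable."""
    return [tau * s + (1.0 - tau) * t for t, s in zip(target, source)]

target_params = [0.0, 1.0]
online_params = [1.0, 0.0]
print(soft_update(target_params, online_params, tau=0.1))  # [0.1, 0.9]
```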
Citations: 0
ZeroVCS: An efficient authentication protocol without trusted authority for zero-trust vehicular communication systems
IF 6.2, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-09-11. DOI: 10.1016/j.future.2024.107520

Vehicular communication systems provide two types of communication: Vehicle-to-Infrastructure (V2I) and Vehicle-to-Vehicle (V2V). In both cases, however, there is zero trust between the communicating entities, which may allow unauthorized vehicles to join the network. Hence, a strong authentication protocol is required to ensure proper access control and communication security. In traditional protocols, such tasks are typically accomplished via a central Trusted Authority (TA). However, communication with the TA may increase the overall authentication delay. Such delay may be incompatible with future-generation vehicular communication systems, where dense deployment of small cells is required to ensure higher system capacity and seamless mobility (e.g., 5G onward). Further, the TA may suffer from denial-of-service when the number of access requests becomes excessively large, because each request must be forwarded to the TA for authentication and access control. In this article, we put forward an efficient authentication protocol without a trusted authority for zero-trust vehicular communication systems, called ZeroVCS. It does not involve a TA for authentication and access control, thus improving the authentication delay, reducing the chance of denial-of-service, and ensuring compatibility with future-generation vehicular communication systems. ZeroVCS can also provide communication security under various passive and active attacks.

ZeroVCS: An efficient authentication protocol without trusted authority for zero-trust vehicular communication systems — Future Generation Computer Systems, DOI: 10.1016/j.future.2024.107520
Citations: 0
Bus travel feature inference with small samples based on multi-clustering topic model over Internet of Things
IF 6.2 | Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-09-11 | DOI: 10.1016/j.future.2024.107525

With the widespread application of Internet of Things (IoT) technology, there has been a shift from a broad-brush to a more refined approach in traffic optimization. An increasing amount of IoT data is being utilized in trajectory mining and inference, offering more precise characteristic information for optimizing public transportation. Services that optimize public transit based on inferred travel characteristics can enhance the appeal of public transport, increase its likelihood as a travel choice, alleviate traffic congestion, and reduce carbon emissions. However, the inherent complexities of disorganized and unstructured public transportation data pose significant challenges to extracting travel features. This study explores the enhancement of bus travel by integrating advanced technologies like positioning systems, IoT, and AI to infer features in public transportation data. It introduces the MK-LDA (MeanShift Kmeans Latent Dirichlet Allocation), a novel thematic modeling technique for deducing characteristics of public transit travel using limited travel trajectory data. The model employs a segmented inference methodology, initially leveraging the Mean-shift clustering algorithm to create POI seeds, followed by the P-K-means algorithm for discerning patterns in user travel behavior and extracting travel modalities. Additionally, a P-LDA (POI-Latent Dirichlet Allocation) inference algorithm is proposed to examine the interplay between travel characteristics and behaviors, specifically targeting attributes significantly correlated with public transit usage, including age, occupation, gender, activity levels, cost, safety, and personality traits. Empirical validation highlights the efficacy of this thematic modeling-based inference technique in identifying and predicting travel characteristics and patterns, boasting enhanced interpretability and outperforming conventional benchmarks.
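The two-stage clustering the abstract describes — Mean-shift to propose POI seeds without fixing the number of clusters in advance, then a seeded k-means to partition trips into travel patterns — can be sketched as follows. This is an illustrative reconstruction using scikit-learn, not the paper's MK-LDA implementation; the synthetic coordinates and the bandwidth value are stand-in assumptions.

```python
import numpy as np
from sklearn.cluster import MeanShift, KMeans

rng = np.random.default_rng(0)
# Fake boarding coordinates concentrated around three hot spots
# (e.g. transit hubs), 100 trip records per hot spot.
stops = np.vstack([
    rng.normal(loc=c, scale=0.05, size=(100, 2))
    for c in [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
])

# Stage 1: Mean-shift discovers POI seeds; k need not be chosen up front.
seeds = MeanShift(bandwidth=0.3).fit(stops).cluster_centers_

# Stage 2: k-means initialized from the discovered seeds assigns each
# trip record to a travel-pattern cluster.
km = KMeans(n_clusters=len(seeds), init=seeds, n_init=1).fit(stops)
print(len(seeds), km.labels_.shape)
```

Seeding k-means from Mean-shift centers is what makes the pipeline usable with small samples: the expensive model-selection step (choosing k) is replaced by a density estimate over whatever trajectory data is available.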

Citations: 0
Expanding SafeSU capabilities by leveraging security frameworks for contention monitoring in complex SoCs
IF 6.2 | Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-09-11 | DOI: 10.1016/j.future.2024.107518

The increased performance requirements of applications running on safety-critical systems have led to the use of complex platforms with several CPUs, GPUs, and AI accelerators. However, higher platform and system complexity challenge performance verification and validation, since timing interference across tasks occurs in unobvious ways. This defeats attempts to make informed application-consolidation decisions during design phases and to validate during test phases that mutual interference across tasks stays within bounds.

In that respect, the SafeSU has been proposed to extend inter-task interference monitoring capabilities in simple systems. However, modern mixed-criticality systems are complex, with multilayered interconnects, shared caches, and hardware accelerators. To that end, this paper proposes a non-intrusive add-on approach for monitoring interference across tasks in multilayer heterogeneous systems implemented by leveraging existing security frameworks and the SafeSU infrastructure.

The feasibility of the proposed approach has been validated in an RTL RISC-V-based multicore SoC with support for AI hardware acceleration. Our results show that our approach can safely track contention and properly break down contention cycles across the different sources of interference, hence guiding optimization and validation processes.
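The bookkeeping such a monitor performs — charging each observed stall cycle to the contender that held the shared resource at that time, so that contention can be "broken down across the different sources of interference" — can be modeled in a few lines. This is a behavioral illustration only, not the SafeSU RTL; the trace format is an assumption.

```python
from collections import Counter

def attribute_contention(trace):
    """trace: one (stalled_core, owning_requestor) pair per stalled cycle.

    Returns a per-(victim, interferer) cycle count, ignoring cycles where
    a core was only waiting on its own outstanding request.
    """
    breakdown = Counter()
    for victim, owner in trace:
        if victim != owner:  # only cross-task interference counts
            breakdown[(victim, owner)] += 1
    return breakdown

# Core 0 stalled 3 cycles behind the GPU and 1 behind the AI accelerator;
# one cycle where core 1 waited on itself is not interference.
trace = [(0, "gpu"), (0, "gpu"), (0, "gpu"), (0, "accel"), (1, 1)]
print(attribute_contention(trace))
```

A per-source breakdown like this is what lets a validation flow check that each interferer's contribution stays within its budget, rather than only observing an aggregate slowdown.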

Citations: 0
DetectVul: A statement-level code vulnerability detection for Python
IF 6.2 | Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-09-10 | DOI: 10.1016/j.future.2024.107504

Detecting vulnerabilities in source code using graph neural networks (GNNs) has gained significant attention in recent years. However, the detection performance of these approaches relies heavily on the graph structure, and constructing meaningful graphs is expensive. Moreover, they often operate at a coarse level of granularity (such as the function level), which limits their applicability to scripting languages like Python and their effectiveness in identifying vulnerabilities. To address these limitations, we propose DetectVul, a new approach that accurately detects vulnerable patterns in Python source code at the statement level. DetectVul applies self-attention to directly learn patterns and interactions between statements in a raw Python function; it thus eliminates the complicated graph extraction process without sacrificing model performance. In addition, information about each statement type is leveraged to enhance the model’s detection accuracy. In our experiments, we used two datasets, CVEFixes and Vudenc, with 211,317 Python statements in 21,571 functions from real-world projects on GitHub, covering seven vulnerability types. Our experiments show that DetectVul outperforms GNN-based models using control flow graphs, achieving the best F1 score of 74.47%, which is 25.45% and 18.05% higher than the best GCN and GAT models, respectively.
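The core mechanism — scaled dot-product self-attention applied directly over per-statement embeddings of a function, so each statement's representation becomes a weighted mix of every other statement with no control-flow graph extracted first — can be sketched in plain numpy. The random embeddings are stand-ins for real statement encodings, and this is an illustration of the attention operation, not the DetectVul architecture.

```python
import numpy as np

def self_attention(X):
    """X: (n_statements, d) statement embeddings -> contextualized (n, d)."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                 # pairwise statement affinity
    scores -= scores.max(axis=1, keepdims=True)   # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True) # rows sum to 1
    return weights @ X

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 8))   # a 6-statement function, 8-dim embeddings
H = self_attention(X)
print(H.shape)  # (6, 8)
```

Because attention connects every statement pair directly, interactions that a graph-based model only sees along explicit control-flow or data-flow edges are learned from the data itself, which is why no graph construction step is needed.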

Citations: 0