
Latest publications: Future Generation Computer Systems-The International Journal of Escience

Ensuring the federation correctness: Formal verification of Federated Learning in industrial cyber-physical systems
IF 6.2 CAS Tier 2 Computer Science Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2024-12-19 DOI: 10.1016/j.future.2024.107675
Badra Souhila Guendouzi, Samir Ouchani, Hiba Al Assaad, Madeleine El Zaher
In Industry 4.0, Industrial Cyber–Physical Systems (ICPS) integrate industrial machines with computer control and data analysis. Federated Learning (FL) improves this by enabling collaborative machine learning while maintaining data privacy, thereby improving the security and intelligence of industrial processes. FL-based frameworks proposed in the literature do not rigorously validate collaborators' behaviors, especially with regard to reliability and operational correctness. In contrast, non-FL-based cyber–physical systems have already been verified in the literature using formal methods, so there is a significant gap in the application of these verification techniques to FL-based systems. To fill this gap, we explore introducing formal verification into FL-based cyber–physical systems, starting with our previously published FedGA-Meta framework. Our research thus focuses on expanding the FedGA-Meta framework in the context of Industry 4.0: this paper delves into a comprehensive validation of the framework's operational reliability and correctness within FL-based ICPS. To achieve this, we employ Timed Computation Tree Logic (TCTL) for the precise specification of system requirements, coupled with Labeled Transition Systems (LTS) to construct the ICPS semantics in detail. Using Uppaal for both simulation and model checking, we rigorously test the framework under a variety of operational scenarios. This approach allows us to confirm the system's reliability and correctness, ensuring that the FedGA-Meta framework operates effectively and as intended within the demanding environments of Industry 4.0.
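The properties checked with Uppaal reduce, at their core, to exhaustive exploration of an LTS state space. As a minimal, untimed illustration of that idea (the states, labels, and property below are hypothetical, not taken from the FedGA-Meta models), an "AG p" style safety invariant can be checked by breadth-first search over the reachable states:

```python
from collections import deque

# Hypothetical toy model: a coordinator's aggregation round as a labeled
# transition system (LTS). State and label names are illustrative only.
TRANSITIONS = {
    "idle":        [("recv_update", "collecting")],
    "collecting":  [("recv_update", "collecting"), ("quorum", "aggregating")],
    "aggregating": [("broadcast", "idle")],
}

def reachable(start):
    """Breadth-first exploration of the LTS state space."""
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        for _label, nxt in TRANSITIONS.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def holds_invariantly(start, predicate):
    """Check an 'AG p' safety property: p holds in every reachable state
    (an untimed analogue of the TCTL invariants Uppaal verifies)."""
    return all(predicate(s) for s in reachable(start))

# Safety sketch: every reachable state has at least one outgoing transition,
# i.e. the round protocol never deadlocks.
deadlock_free = holds_invariantly("idle", lambda s: bool(TRANSITIONS.get(s)))
```

Real TCTL adds clock constraints on top of this (e.g. bounded-time liveness), which is what makes a timed model checker such as Uppaal necessary for the actual verification.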
Citations: 0
Preface of Special Issue on Highlights from the Joint-Laboratory on Extreme Scale Computing
IF 6.2 CAS Tier 2 Computer Science Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2024-12-18 DOI: 10.1016/j.future.2024.107688
Franck Cappello, Ruth Partzsch, Daniel S. Katz
Citations: 0
Enhancing E-business in industry 4.0: Integrating fog/edge computing with Data LakeHouse for IIoT
IF 6.2 CAS Tier 2 Computer Science Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2024-12-14 DOI: 10.1016/j.future.2024.107653
Hayat Routaib, Soukaina Seddik, Abdelali Elmounadi, Anass El Haddadi
E-business is evolving towards the creation of a global network of interconnected smart devices, aimed at enhancing a wide array of applications through their ability to sense, connect, and analyze data. At the heart of this evolution, the Industrial Internet of Things (IIoT) emerges as a pivotal element in the era of ‘Industry 4.0.’ This paper proposes a novel framework that integrates fog/edge computing architecture with a Data LakeHouse model for the IIoT ecosystem, incorporating unified meta-metadata for superior data processing and governance. This innovative approach addresses key challenges such as data management, latency, and system efficiency, essential for optimizing operations and reinforcing decision-making. It represents a substantial leap forward in leveraging IIoT capabilities within e-business environments, ensuring data integrity, enabling real-time analytics, and enhancing operational efficiency.
Citations: 0
KDRSFL: A knowledge distillation resistance transfer framework for defending model inversion attacks in split federated learning
IF 6.2 CAS Tier 2 Computer Science Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2024-12-14 DOI: 10.1016/j.future.2024.107637
Renlong Chen, Hui Xia, Kai Wang, Shuo Xu, Rui Zhang
Split Federated Learning (SFL) enables organizations, such as healthcare providers, to collaborate on improving model performance without sharing private data. However, SFL is currently susceptible to model inversion (MI) attacks, which pose a serious risk of private data leakage and accuracy loss. This paper therefore proposes an innovative framework called Knowledge Distillation Resistance Transfer for Split Federated Learning (KDRSFL). The KDRSFL framework combines one-shot distillation techniques with adjustment strategies optimized against attackers, aiming to achieve knowledge-distillation-based resistance transfer. KDRSFL enhances the classification accuracy of feature extractors and strengthens their resistance to adversarial attacks. First, a teacher model with strong resistance to MI attacks is constructed; this capability is then transferred to the client models through knowledge distillation. Second, the defense of the client models is further strengthened through attacker-aware training. Finally, the client models achieve effective defense against MI through local training. Detailed experimental validation shows that KDRSFL performs well against MI attacks on the CIFAR100 dataset: it achieves a reconstruction mean squared error (MSE) of 0.058 while maintaining a model accuracy of 67.4% for the VGG11 model, a 16% improvement in MI attack error rate over ResSFL with only 0.1% accuracy loss.
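The teacher-to-client capability transfer described above rests on the standard knowledge-distillation objective: the student matches the teacher's temperature-softened output distribution. A minimal sketch of that loss (this is generic knowledge distillation, not KDRSFL's exact resistance-transfer objective; all names are illustrative):

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher T softens the distribution,
    # exposing more of the teacher's "dark knowledge" between classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL(teacher || student) on temperature-softened outputs, scaled by
    T^2 as in standard knowledge distillation, so gradients stay comparable
    across temperatures."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl
```

The loss is zero when student and teacher agree exactly and grows with their divergence; in an SFL setting it would be applied to the client-side feature extractor's outputs during local training.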
Citations: 0
Straggler mitigation via hierarchical scheduling in elastic stream computing systems
IF 6.2 CAS Tier 2 Computer Science Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2024-12-14 DOI: 10.1016/j.future.2024.107673
Minghui Wu, Dawei Sun, Shang Gao, Rajkumar Buyya
Skewed data distribution leads to certain tasks or nodes handling much more data than others, slowing their execution and classifying them as stragglers. Existing solutions attempt to establish a well-balanced workload to mitigate stragglers using either data stream grouping or task scheduling. This "one size fits all" approach considers only single-level requirements and fails to address the system's diverse needs across multiple levels, ultimately limiting performance. To address these issues and mitigate stragglers effectively, we propose a hierarchical collaborative strategy called Ms-Stream. It aims to balance data stream workloads among tasks and keep the load difference among compute nodes within an acceptable range. This paper discusses the strategy from the following aspects: (1) Ms-Stream constructs models for topology, grouping, and resources, and formalizes the problems of data stream grouping, task subgraph partitioning, and task deployment. (2) Ms-Stream employs a lightweight two-level grouping method to support dynamic workload assignment for stateful tasks, selectively offloading resources from straggler tasks to others. (3) Ms-Stream allocates communication-intensive tasks to the same group via the directed acyclic graph representations of streaming applications, while ensuring the equitable distribution of computation-intensive tasks across groups. (4) Ms-Stream deploys task groups to compute nodes with varying resource capacities following the descending maximum padding priority rule for a balanced workload. Performance metrics such as system throughput and latency are evaluated with real-world streaming applications. Experimental results demonstrate significant improvements: Ms-Stream reduces maximum system latency by 61% and increases maximum throughput by more than 2x compared to existing state-of-the-art works.
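The balanced deployment in step (4) can be illustrated with a generic greedy heuristic: place the heaviest task groups first, each on the node with the most remaining headroom. This is a simplified stand-in for Ms-Stream's descending maximum padding priority rule, with hypothetical task and node names:

```python
import heapq

def assign_balanced(task_loads, node_capacities):
    """Greedy balancing sketch: sort tasks by load (descending) and place
    each on the node with the most remaining headroom (capacity minus load
    already assigned). A simplification, not Ms-Stream's exact rule."""
    # Max-heap keyed on remaining headroom (negated for heapq's min-heap).
    heap = [(-cap, node) for node, cap in node_capacities.items()]
    heapq.heapify(heap)
    assignment = {node: [] for node in node_capacities}
    for task, load in sorted(task_loads.items(), key=lambda kv: -kv[1]):
        neg_headroom, node = heapq.heappop(heap)
        assignment[node].append(task)
        # Consuming `load` units shrinks this node's headroom.
        heapq.heappush(heap, (neg_headroom + load, node))
    return assignment

# Hypothetical skewed workload: two equal nodes, four uneven task groups.
plan = assign_balanced({"a": 5, "b": 4, "c": 3, "d": 2},
                       {"n1": 10, "n2": 10})
```

Placing large items first is what keeps the final loads close: the small tasks act as "padding" that fills whichever node fell behind.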
Citations: 0
Optimizing mobile blockchain networks: A game theoretical approach to cooperative multi-terminal computation
IF 6.2 CAS Tier 2 Computer Science Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2024-12-13 DOI: 10.1016/j.future.2024.107669
Lin Pan, Fengrui Chen, Yan Ding, Yunan Zhai, Liyuan Zhang, Jia Zhao
Facing the computational challenges of mobile devices in blockchain networks, particularly the scarcity and underutilization of computational resources, this paper introduces the CAGE Framework: a novel architecture based on cooperative game theory within consortium blockchains. Designed to optimize computational resource allocation across multiple mobile terminals, the CAGE Framework leverages a tri-layer structure, comprising the Blockchain Network Layer, User Network Layer, and Distributed Collaborative Computing Layer, to facilitate efficient resource sharing and task scheduling. Through smart contracts, the framework automatically aggregates user demands, utilizing the InterPlanetary File System (IPFS) for data storage, thereby enhancing privacy protection and blockchain data throughput. Validated on the Hyperledger Fabric platform and benchmarked against state-of-the-art approaches, CAGE demonstrates superior transaction throughput, reduced latency, and enhanced resource efficiency. The core strategy, also dubbed CAGE, is predicated on cooperative game theory, aiming to maximize user satisfaction by multi-objectively balancing energy consumption, computational load, and resource allocation. Experiments reveal a notable improvement in system load balancing (by 51%) and a significant reduction in energy consumption (by 62%), affirming the framework's efficacy in addressing computational resource deficiencies both within and outside the consortium under low-energy and balanced-load conditions. The CAGE Framework not only charts a new path for computational resource optimization in mobile blockchain networks but also lays a theoretical and practical foundation for the further application and optimization of blockchain technology.
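A cooperative game of the kind CAGE builds on assigns each terminal a payoff reflecting its contribution to the coalition's total value. The textbook fairness criterion for this is the Shapley value: each player's marginal contribution averaged over all join orders. The sketch below illustrates that criterion on a hypothetical two-terminal offloading game; it shows the general idea only, not CAGE's specific mechanism:

```python
from itertools import permutations

def shapley_values(players, value):
    """Shapley value of a coalitional game: each player's average marginal
    contribution over all orders in which the coalition can form.
    `value` maps a frozenset of players to the coalition's worth."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    n = len(orders)
    return {p: t / n for p, t in totals.items()}

# Hypothetical offloading game: each terminal alone completes 1 unit of
# work; cooperating, the pair completes 3 (shared results avoid rework).
def v(coalition):
    return {0: 0, 1: 1, 2: 3}[len(coalition)]

payoffs = shapley_values(["t1", "t2"], v)
```

Because the game is symmetric, both terminals receive 1.5, more than either earns alone, which is exactly the incentive-compatibility argument for cooperation.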
Citations: 0
Devising an actor-based middleware support to federated learning experiments and systems
IF 6.2 CAS Tier 2 Computer Science Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2024-12-11 DOI: 10.1016/j.future.2024.107646
Alessio Bechini, José Luis Corcuera Bárcena
Federated Learning (FL) recently emerged as a practical privacy-preserving paradigm for exploiting data distributed over separate repositories for Machine Learning purposes, with no need to migrate data. FL algorithms entail concerted activities by multiple distributed players: a dedicated supporting system aims to relieve programmers from the intricate implementation details of the communication and synchronization required during distributed model learning, and of the necessary information exchange during operation. Such support plays a crucial role in experimentation with FL algorithms and their eventual field operation, so its architecture must be carefully designed. In this work, we propose a novel architecture whose pivotal role is assigned to an actor-based runtime system working at the middleware level. The distinctive points of this approach are portability across diverse platforms, location transparency for the involved nodes, and the freedom to choose different languages for implementing the core parts of custom software systems. Moreover, with the proposed solution, scalability requirements can easily be met. The implementation of FL algorithms is made easier by APIs that programmatically access the middleware functionalities. Another benefit is that the same code can be used in both simulated and real deployments. Fed-lang, the reference implementation of the proposed architecture, has been used to quantitatively compare the characteristics of our approach with other existing FL frameworks, showing its ability to address the challenges posed by various operating conditions and settings. The described architecture has proven adequate to deliver the functionalities required for the effective development of FL systems.
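The actor model underlying such a middleware is simple to state: each actor owns a private mailbox and processes one message at a time, so senders never block and no shared state needs locking. A minimal sketch of that pattern (a generic actor, not Fed-lang's API; the aggregator example is hypothetical):

```python
import queue
import threading

class Actor:
    """Minimal actor: a private mailbox drained by a dedicated thread.
    Messages are handled one at a time, in arrival order."""
    def __init__(self, handler):
        self._mailbox = queue.Queue()
        self._handler = handler
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, message):
        self._mailbox.put(message)   # asynchronous; never blocks the sender

    def stop(self):
        self._mailbox.put(None)      # sentinel shuts the loop down
        self._thread.join()          # joining guarantees all prior messages ran

    def _run(self):
        while True:
            message = self._mailbox.get()
            if message is None:
                break
            self._handler(message)

# Hypothetical aggregator actor: collects client updates as they arrive.
received = []
aggregator = Actor(lambda update: received.append(update))
for update in [0.1, 0.2, 0.3]:
    aggregator.send(update)
aggregator.stop()
```

Because all coordination is by message passing, the same actor code runs unchanged whether the peers are simulated in-process or remote, which is the property the abstract's simulation-to-deployment claim relies on.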
Citations: 0
VLCQ: Post-training quantization for deep neural networks using variable length coding
IF 6.2 CAS Tier 2 Computer Science Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2024-12-11 DOI: 10.1016/j.future.2024.107654
Reem Abdel-Salam, Ahmed H. Abdel-Gawad, Amr G. Wassal
Quantization plays a crucial role in efficiently deploying deep learning models on resource-constrained devices. Post-training quantization requires neither access to the original dataset nor retraining of the full model. Current methods that achieve high performance (near-baseline results) require INT8 fixed-point integers; achieving higher model compression through lower bit-widths without significant performance degradation remains the challenge. In this paper, we propose VLCQ, which relaxes the fixed-point encoding constraint that prevents quantization techniques from quantizing the weights more effectively. This work instead utilizes variable-length encoding, which allows exploration of the whole space of quantization techniques, achieving much better results (close to or even better than the baseline) at lower bit-widths without the need to access any training data or fine-tune the model. Extensive experiments were carried out on various deep-learning models for image classification, segmentation, and object detection tasks. Compared to state-of-the-art post-training quantization approaches, experimental results reveal that our suggested method offers improved performance with better model compression (lower bit rate). For per-channel quantization, our method surpassed the FP32 accuracy and the Piece-Wise Linear Quantization (PWLQ) method in most models while achieving up to a 6X model compression ratio compared to FP32 and up to 1.7X compared to PWLQ. If model compression is the main concern, with little effect on performance, our method achieves up to a 12.25X compression ratio compared to FP32 within a 4% performance loss. For per-tensor quantization, our method is competitive with the Data-Free Quantization scheme (DFQ) in achieving the best performance, and is more flexible than DFQ in reaching lower bit rates across the different tasks and models.
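The gain from dropping fixed-point encoding comes from the skewed distribution of quantized weight levels: frequent levels can be given short codes and rare ones long codes. A minimal illustration using generic Huffman coding (this shows the variable-length-coding principle only, not VLCQ's actual codec; the weight levels are made up):

```python
import heapq
from collections import Counter

def huffman_code_lengths(symbols):
    """Per-symbol code length (in bits) under a Huffman code built from the
    symbols' empirical frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    # Heap entries: (subtree weight, tiebreak id, {symbol: depth so far}).
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)   # merge the two lightest subtrees;
        w2, _, d2 = heapq.heappop(heap)   # every symbol in them gains 1 bit
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, next_id, merged))
        next_id += 1
    return heap[0][2]

# Hypothetical skewed quantized weights: level 0 dominates.
levels = [0] * 8 + [1] * 4 + [2] * 2 + [3] * 2
lengths = huffman_code_lengths(levels)
bits_vlc = sum(lengths[s] for s in levels)   # variable-length total
bits_fixed = 2 * len(levels)                 # 2-bit fixed-point code
```

For this distribution the variable-length code spends 28 bits against 32 for the fixed 2-bit code, and the advantage widens as the weight distribution gets more skewed, which is the effect VLCQ exploits to push below INT8 bit-widths.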
Citations: 0
Flow timeout matters: Investigating the impact of active and idle timeouts on the performance of machine learning models in detecting security threats
IF 6.2 Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2024-12-11 DOI: 10.1016/j.future.2024.107641
Meryem Janati Idrissi , Hamza Alami , Abdelkader El Mahdaouy , Abdelhak Bouayad , Zakaria Yartaoui , Ismail Berrada
In the era of high-speed networks and massive data, several network security technologies are shifting focus from payload-based to flow-based methods. This has led to the incorporation of Machine Learning (ML) models in network security systems, where high-quality network flow features are of paramount importance. However, limited attention has been dedicated to studying the impact of the flow-metering hyperparameters, specifically the idle and active timeouts, on ML models' performance. This paper therefore aims to address this gap by designing a series of experiments related to flow features and learning models in the context of Network Intrusion Detection Systems (NIDS). Our experiments investigate the impact idle and active timeouts have on the quality of the features extracted from network data and, in turn, on the performance of ML models. To this end, we consider three flow exporters for feature extraction (NFStream, Zeek, and Argus), three ML models, and different feature sets. We conducted extensive experiments with public datasets, including USTC-TFC2016, CICIDS2017, UNSW-NB15, and CUPID. The results show that the difference between the best and worst timeout combinations can reach up to 8.77% in terms of macro F1-score. They also unveil varying sensitivity to timeout changes among different models and feature sets. Finally, we propose a distributed learning approach based on federated learning, which shows potential in handling multiple NIDS with different timeout configurations. The code is available at https://github.com/meryemJanatiIdrissi/Flow-timeout-matters.
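Why timeouts change the features an ML model sees can be shown with a toy flow meter. The sketch below is not how NFStream, Zeek, or Argus are implemented internally; it only mimics the shared convention that a flow is cut when it has been idle longer than the idle timeout or has lasted longer than the active timeout, so the same traffic yields different flow records (and thus different durations and packet counts) under different settings:

```python
from dataclasses import dataclass

@dataclass
class Flow:
    start: float
    last: float
    packets: int = 1
    bytes: int = 0

def to_flows(packets, active_timeout, idle_timeout):
    """Group (timestamp, 5-tuple key, size) packets into flow records.

    A flow expires when the gap since its last packet exceeds idle_timeout
    or its total duration exceeds active_timeout (timeouts in seconds)."""
    open_flows, done = {}, []
    for ts, key, size in sorted(packets):
        f = open_flows.get(key)
        if f and (ts - f.last > idle_timeout or ts - f.start > active_timeout):
            done.append(open_flows.pop(key))   # expire and start a new flow
            f = None
        if f is None:
            open_flows[key] = Flow(start=ts, last=ts, bytes=size)
        else:
            f.last, f.packets, f.bytes = ts, f.packets + 1, f.bytes + size
    return done + list(open_flows.values())

# One long-lived connection, a packet every 10 s for 300 s: a short active
# timeout splits it into several flows, changing every per-flow feature.
pkts = [(t, ("10.0.0.1", "10.0.0.2", 80), 100) for t in range(0, 300, 10)]
print(len(to_flows(pkts, active_timeout=1800, idle_timeout=15)))  # 1 flow
print(len(to_flows(pkts, active_timeout=60, idle_timeout=15)))    # 5 flows
```

The same 30 packets become one flow or five depending only on the active timeout — which is exactly the kind of shift in the feature distribution that the paper measures as an up-to-8.77% macro-F1 gap.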
Citations: 0
A protocol generation model for protocol-unknown IoT devices
IF 6.2 Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2024-12-10 DOI: 10.1016/j.future.2024.107638
Zheng Gao , Danfeng Sun , Kai Wang , Jia Wu , Huifeng Wu
The rapid growth of Internet of Things (IoT) applications depends on the deployment of numerous heterogeneous devices, which must be accessed through a variety of communication protocols. Matching the correct protocol for an accessed device, particularly one whose protocol is unknown, is a complex and challenging task due to the diversity of device types, the growing number of protocols, and the reliance on domain-specific knowledge. To address these challenges, we propose a Device Clustering and Deep Reinforcement Learning-based Protocol Generation Model (DCDPM). The DCDPM generates the best-matched protocol for protocol-unknown IoT devices using only device basic information (DBI). It employs a two-stage device clustering mechanism based on DBI similarity density to generate device clusters, and extracts protocol features from these clusters. Furthermore, a Weight Twin Delay-DDPG (WTD-DDPG), an enhanced deep reinforcement learning (DRL) method, is developed to determine the optimal weights for identifying the optimal device cluster; WTD-DDPG addresses issues related to the continuous action space and Q-value overestimation. Lastly, a feature-original fusion mechanism is designed to further enhance protocol matching by fusing the extracted protocol features with the original protocols within the optimal device cluster. The DCDPM is experimentally validated in two distinct scenarios: a communication base station and a copper smelting production line. A device library containing 1296 devices was created and 130 devices were tested. Experimental results demonstrate that DCDPM outperforms existing methods in terms of protocol matching rate, hit rate, and network traffic consumption.
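The "clustering by similarity density" step can be pictured with a small sketch. This is not the paper's two-stage algorithm — the DBI encoding, thresholds, and helper names below are all illustrative — but it shows the general pattern: rank devices by how many near-duplicates they have (their similarity density), promote dense devices to cluster seeds, then attach every device to its most similar seed:

```python
import numpy as np

def cluster_by_similarity_density(dbi, sim_threshold=0.9, density_min=2):
    """Sketch of density-based device grouping over DBI feature vectors.

    Stage 1: similarity density = number of other devices whose cosine
    similarity exceeds sim_threshold; dense, mutually dissimilar devices
    become seeds.  Stage 2: assign each device to its most similar seed."""
    x = dbi / np.linalg.norm(dbi, axis=1, keepdims=True)
    sim = x @ x.T                                  # cosine similarity matrix
    density = (sim > sim_threshold).sum(axis=1) - 1  # exclude self
    seeds = []
    for i in np.argsort(-density):                 # densest devices first
        if density[i] < density_min:
            break
        if all(sim[i, s] <= sim_threshold for s in seeds):
            seeds.append(int(i))                   # new, dissimilar seed
    labels = np.argmax(sim[:, seeds], axis=1)
    return seeds, labels

rng = np.random.default_rng(1)
# Two synthetic "device families" in a 4-dimensional DBI feature space.
dbi = np.vstack([rng.normal(loc, 0.05, (5, 4))
                 for loc in ([1, 0, 0, 1], [0, 1, 1, 0])])
seeds, labels = cluster_by_similarity_density(dbi)
print(len(seeds), labels)  # two seeds; devices 0-4 and 5-9 split cleanly
```

Once devices are grouped this way, per-cluster protocol features can be extracted and fused with the cluster's original protocols — the role the feature-original fusion mechanism plays in the full DCDPM pipeline.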
Citations: 0