Latest publications in Future Generation Computer Systems: The International Journal of eScience

TMx-TORU: Transfer learning enhanced location-aware multi-hop task offloading protocol for connected vehicle networks
IF 6.2, CAS Zone 2 (Computer Science), Q1 (COMPUTER SCIENCE, THEORY & METHODS). Pub Date: 2025-12-12. DOI: 10.1016/j.future.2025.108292
Oğuzhan Akyıldız
Task offloading in Connected Vehicle Networks (CVNs), a key part of the Internet of Vehicles (IoV), requires adaptive decision-making to handle computational heterogeneity and communication volatility. Although traditional fog architectures provide a foundational framework, they frequently exhibit limitations in accommodating dynamic topologies and spatiotemporal variability in resource demand. In this study, we present TMx-TORU, a Transfer Learning (TL)-assisted multi-hop task offloading protocol that operationalizes past experience to reduce selection overhead and enhance offloading precision across dynamic vehicular fog networks. TMx-TORU integrates evolutionary optimization algorithms, namely the Genetic Algorithm (GA), the Non-dominated Sorting Genetic Algorithm-II (NSGA-II), and our lightweight Resource-Efficient Task Offloading (RELiOff) strategy, with a TL module that learns from prior task routes and service outcomes to bypass redundant computation. Simulation results under varying CPU capacities and transmission ranges show that TL-enhanced variants consistently outperform their baselines, with gains of up to 40.3% in the number of successfully offloaded tasks and noticeable improvements in effective resource utilization. While the TL-augmented GA and NSGA-II variants showed superior adaptability in balancing throughput and efficiency, RELiOff maintained high offloading volume even when efficiency fluctuated, underscoring its strength in low-latency responsiveness. The experimental results indicate that TMx-TORU effectively integrates mobility patterns, resource awareness, and experiential inference.
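For orientation, here is a minimal Python sketch of the kind of transfer step the abstract describes: prior offloading outcomes are cached by a coarse task/network context so that similar tasks can skip a fresh evolutionary search. The context features, the bucketing rule, and the run_evolutionary_search hook are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (not the paper's code): reuse prior offloading outcomes
# so that a task with a similar context skips the evolutionary search.
from dataclasses import dataclass, field

@dataclass
class TransferCache:
    # maps a coarse task/network context to the fog node chosen last time
    memory: dict = field(default_factory=dict)

    def _key(self, cpu_demand, deadline_s, tx_range_m):
        # bucketize so that similar situations map to the same entry
        return (round(cpu_demand, -2), round(deadline_s, 1), round(tx_range_m, -1))

    def lookup(self, cpu_demand, deadline_s, tx_range_m):
        return self.memory.get(self._key(cpu_demand, deadline_s, tx_range_m))

    def store(self, cpu_demand, deadline_s, tx_range_m, node):
        self.memory[self._key(cpu_demand, deadline_s, tx_range_m)] = node

def offload(task, cache, run_evolutionary_search):
    """Reuse a transferred decision when available, else fall back to the search."""
    hit = cache.lookup(task["cpu"], task["deadline"], task["range"])
    if hit is not None:
        return hit                               # transferred decision, no selection overhead
    node = run_evolutionary_search(task)         # GA / NSGA-II / RELiOff style search
    cache.store(task["cpu"], task["deadline"], task["range"], node)
    return node

def fake_search(task):
    return "fog-node-7"                          # stand-in for the expensive optimizer

cache = TransferCache()
print(offload({"cpu": 910, "deadline": 0.24, "range": 310}, cache, fake_search))
print(offload({"cpu": 930, "deadline": 0.21, "range": 312}, cache, fake_search))  # cache hit
```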
Citations: 0
Timed-release and partially private access control for decentralized IoT collaboration systems
IF 6.2, CAS Zone 2 (Computer Science), Q1 (COMPUTER SCIENCE, THEORY & METHODS). Pub Date: 2025-12-10. DOI: 10.1016/j.future.2025.108300
Chi Zhang, Peng Jiang, Qi Liu, Liehuang Zhu
Decentralized Internet of Things (IoT) collaborative systems necessitate robust access control mechanisms to coordinate collaborative computing and access management in decentralized environments. While functional encryption, as one of the access control technologies, demonstrates promising potential for collaborative ecosystems, existing centralized architectures suffer from single-point-of-failure vulnerabilities and an absence of time-based access control, both critical limitations in decentralized IoT collaboration frameworks. This paper introduces a novel access control paradigm that addresses these challenges through two key innovations: a decentralized key generation framework that requires no trusted authority, and time-constrained decryption policies that keep computational outputs in encrypted form until a predetermined disclosure time. Specifically, our technology enables individual clients to autonomously generate local cryptographic key pairs, encrypt data, and negotiate time parameters for result publication. The decryption phase aggregates encrypted data and partial decryption keys from multiple clients to ultimately enable data accessibility. We present a concrete implementation and evaluate it under both idealized and resource-constrained simulation environments, confirming the system's practicality even with 100 clients in simulated IoT setups.
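The control flow described above (per-client key material, a negotiated release time, and aggregation of partial decryption keys) can be pictured with the toy, non-cryptographic Python sketch below; the XOR n-of-n shares and the wall-clock gate are stand-ins for the paper's actual construction.

```python
# Toy control-flow sketch only (no real cryptography): n-of-n XOR "shares" of a
# symmetric key are contributed by clients, and reconstruction is gated on an
# agreed release time. Names and structure are illustrative assumptions.
import os
import time
import functools

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int):
    """Split key into n XOR shares; all n are needed to reconstruct."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = functools.reduce(_xor, shares, key)
    return shares + [last]

def combine(shares):
    return functools.reduce(_xor, shares)

def timed_release(shares, release_time: float) -> bytes:
    """Refuse to reconstruct before the negotiated disclosure time."""
    if time.time() < release_time:
        raise PermissionError("result is still time-locked")
    return combine(shares)

key = os.urandom(16)
shares = split_key(key, n=3)
assert combine(shares) == key                    # all shares together recover the key
try:
    timed_release(shares, release_time=time.time() + 3600)
except PermissionError as err:
    print("blocked:", err)                       # too early: output stays encrypted
```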
Citations: 0
Accelerated co-movement patterns mining: A heterogeneous framework based on GPU clusters
IF 6.2, CAS Zone 2 (Computer Science), Q1 (COMPUTER SCIENCE, THEORY & METHODS). Pub Date: 2025-12-10. DOI: 10.1016/j.future.2025.108302
Chaowei Wu, Wen Xiong, Sasa Duan, Yang Wang
In modern urban public transportation systems, tens of thousands of buses traverse open road networks, serving millions of residents and generating massive GPS trajectory data. Effectively mining this data is critical for improving safety and efficiency. Co-movement pattern mining is a representative compute-intensive technique commonly used for bus bunching detection, but when executed on CPU-based systems it faces scalability and latency challenges. To address this, we present an accelerated co-movement pattern mining framework based on GPU clusters. It integrates the workflow management of PySpark with the high-performance computing capabilities of GPUs, and employs a pipeline that performs spatial projection, hybrid indexing, filter-verification, and memory management. We implement our approach on a three-node Spark cluster (equipped with six NVIDIA A40 GPUs) and evaluate it on a large-scale dataset comprising 12,788 vehicles and over 3.22 billion GPS records collected over 31 days. The experimental results show that, compared to CPU-based approaches, our solution achieves a maximum speedup of 15.69×. These results demonstrate that our solution can effectively support large-scale GPS trajectory analysis in bus transportation systems.
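A minimal CPU-side sketch of the filter-verification idea mentioned above is shown below: vehicles are hashed into coarse grid cells (filter) and exact distances are checked only within neighboring cells (verification). The cell size, distance threshold, and single-timestamp scope are assumptions; the paper's pipeline parallelizes this on GPUs via PySpark.

```python
# Minimal CPU reference of filter-verification for one timestamp.
from collections import defaultdict
from itertools import combinations
import math

CELL = 200.0      # grid cell size in meters (assumed)
DIST = 100.0      # co-movement distance threshold in meters (assumed)

def filter_verify(points):
    """points: list of (vehicle_id, x, y) at one timestamp -> set of co-moving pairs."""
    # Filter: hash vehicles into coarse grid cells so only neighbors are compared.
    grid = defaultdict(list)
    for vid, x, y in points:
        grid[(int(x // CELL), int(y // CELL))].append((vid, x, y))

    pairs = set()
    for (cx, cy), bucket in grid.items():
        # gather candidates from this cell and its 8 neighbors
        cand = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                cand.extend(grid.get((cx + dx, cy + dy), []))
        # Verification: exact distance check on the reduced candidate set.
        for (a, ax, ay), (b, bx, by) in combinations(cand, 2):
            if math.hypot(ax - bx, ay - by) <= DIST:
                pairs.add((a, b) if a < b else (b, a))   # normalize pair order
    return pairs

print(filter_verify([("bus1", 10, 10), ("bus2", 60, 40), ("bus3", 900, 900)]))
```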
Citations: 0
Spatial-temporal dual interactive graph convolutional networks for traffic flow forecasting
IF 6.2, CAS Zone 2 (Computer Science), Q1 (COMPUTER SCIENCE, THEORY & METHODS). Pub Date: 2025-12-09. DOI: 10.1016/j.future.2025.108298
Wensheng Zhang, Hao Cai, Hongli Shi, Zhenzhen Han
Traffic flow forecasting is central to intelligent transportation systems but remains challenging due to tightly coupled spatial-temporal dependencies and high-order interactions. Existing deep models often assume a static or single-view spatial structure, emphasize only pairwise relations, and struggle to represent dynamic spatial-temporal interactions, leading to a persistent accuracy-efficiency trade-off. To overcome this challenge, we propose a Spatial-Temporal Dual Interactive Graph Convolutional Network (STDIGCN) built around three coordinated components: (i) an adaptive traffic graph learner with macro-micro branches that infer long- and short-term topologies; (ii) a dynamic hypergraph obtained via dual transformations and embedding-based association learning to capture high-order group interactions; and (iii) a spatial-temporal dual-graph interactive convolution module that exchanges information between the graph and hypergraph streams, aligning pairwise node dependencies with high-order edge patterns while preserving multiscale temporal structure. Extensive experiments across six benchmark traffic datasets and multiple horizons demonstrate that STDIGCN outperforms strong baselines while maintaining computational efficiency.
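As a rough illustration of the adaptive-graph idea in component (i), the sketch below builds an adjacency matrix from learnable node embeddings and uses it for one convolution step; the dimensions, normalization, and layer layout are assumptions rather than the STDIGCN architecture.

```python
# Minimal sketch of one adaptive graph-convolution step: the adjacency is
# inferred from learnable node embeddings, as in many adaptive-graph traffic models.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveGraphConv(nn.Module):
    def __init__(self, num_nodes: int, in_dim: int, out_dim: int, emb_dim: int = 16):
        super().__init__()
        self.node_emb = nn.Parameter(torch.randn(num_nodes, emb_dim))
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x):                        # x: (batch, num_nodes, in_dim)
        # learned adjacency: similarity of node embeddings, row-normalized
        adj = F.softmax(F.relu(self.node_emb @ self.node_emb.t()), dim=-1)
        x = torch.einsum("nm,bmf->bnf", adj, x)  # aggregate neighbor features
        return self.proj(x)

layer = AdaptiveGraphConv(num_nodes=5, in_dim=3, out_dim=8)
out = layer(torch.randn(2, 5, 3))
print(out.shape)                                 # torch.Size([2, 5, 8])
```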
Citations: 0
Structure-aware thread throttling for energy-efficient graph processing on shared-memory systems
IF 6.2, CAS Zone 2 (Computer Science), Q1 (COMPUTER SCIENCE, THEORY & METHODS). Pub Date: 2025-12-09. DOI: 10.1016/j.future.2025.108297
Yili Chen, Le Luo, Tao Jiang, Lu Fang, Sheng Xu
Graph processing on shared-memory systems fully utilizes memory bandwidth and avoids communication overhead, yet it is not as energy-efficient as expected. Since memory bandwidth becomes a major bottleneck, using more cores does not always lead to better performance. To address this limitation, we propose two predictive thread-throttling models that infer the optimal number of threads from graph characteristics such as sparsity and skewness, aiming to reduce energy consumption with minimal performance loss. The weighted model is implemented on four representative frameworks, including GreGraphMat, GrePolymer, GreGrazelle, and GreLigra, and evaluated on two CPU architectures, Intel Xeon Gold 6230R and Loongson 3A6000. Experimental results show improvements of more than 30% in Energy-Delay Product (EDP) on Intel and a consistent 15.8% reduction with a 1.16× speedup on Loongson. These results confirm that the proposed models achieve robust energy efficiency, strong scalability, and cross-architecture generality in shared-memory graph processing.
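The sketch below illustrates the general shape of such a predictor: graph statistics are mapped to a thread count and configurations are compared by Energy-Delay Product. The coefficients and the linear form are placeholder assumptions, not the paper's fitted weighted model.

```python
# Placeholder structure-aware thread predictor plus the EDP metric used for comparison.
def predict_threads(sparsity: float, skewness: float, max_threads: int) -> int:
    # Denser or more skewed graphs saturate memory bandwidth sooner, so fewer
    # threads are requested (hypothetical linear weighting, not fitted weights).
    score = 1.0 - 0.6 * (1.0 - sparsity) - 0.3 * min(skewness / 10.0, 1.0)
    return max(1, min(max_threads, round(score * max_threads)))

def edp(energy_joules: float, runtime_seconds: float) -> float:
    """Energy-Delay Product: lower is better."""
    return energy_joules * runtime_seconds

threads = predict_threads(sparsity=0.999, skewness=4.2, max_threads=32)
print(threads, edp(energy_joules=120.0, runtime_seconds=2.4))
```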
Citations: 0
Improving adversarial resilience for anomaly detection in the heterogeneous internet of things through ensemble models
IF 6.2, CAS Zone 2 (Computer Science), Q1 (COMPUTER SCIENCE, THEORY & METHODS). Pub Date: 2025-12-09. DOI: 10.1016/j.future.2025.108299
U.E. Abiha, A. Rehman, A. Abbas, M.A. Haider, F.A.M. Al-Yarimi, M.U. Gul, S.R. Hassan
With the continuous expansion and increasing complexity of the Internet of Things (IoT), anomaly detection systems have become prime targets for sophisticated adversarial attacks. These attacks often exploit weaknesses in existing detection frameworks, particularly under conditions of class imbalance and dynamic, heterogeneous data streams. To address this challenge, we propose a robust and scalable ensemble deep learning framework that integrates Conditional Generative Adversarial Networks (cGANs), Denoising Autoencoders (DAEs), and Long Short-Term Memory (LSTM) networks for anomaly detection in IoT environments. Specifically, the framework leverages cGANs to synthesize minority-class samples and alleviate data imbalance, employs DAEs for robust and noise-resilient feature extraction, and utilizes LSTM networks to capture the temporal dependencies inherent in sequential IoT data. To further enhance resilience against evasion attacks, we incorporate a tailored multi-layer adversarial training strategy that uses both clean and dynamically generated adversarial samples along with partial gradient masking. In addition, we introduce a lightweight knowledge distillation framework, enabling a compressed student model to achieve comparable accuracy with reduced inference delay, thereby improving deployment feasibility on edge devices. Our contributions are fivefold: (i) we develop a novel ensemble architecture designed for robust and resilient anomaly detection in heterogeneous IoT systems; (ii) we introduce a customized adversarial training approach optimized for real-time constraints in IoT settings; (iii) we implement a lightweight feature selection and distillation pipeline for complexity reduction; (iv) we conduct comprehensive evaluations using the Distributed Smart Space Orchestration System (DS2OS) and Bot-IoT datasets, achieving strong performance across domains (F1: 96.26% on DS2OS, 95.94% on Bot-IoT); and (v) we demonstrate that the proposed framework consistently outperforms state-of-the-art standalone and hybrid methods across a range of attack scenarios. Overall, the proposed system offers a practical and scalable defense mechanism against emerging threats in future IoT infrastructures.
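Of the components listed above, the knowledge-distillation step is the easiest to sketch; the snippet below shows the standard temperature-scaled soft-target loss (Hinton-style) that a compressed student could be trained with. Treat it as an assumed, generic formulation rather than the paper's exact pipeline.

```python
# Generic knowledge-distillation loss: the student matches the teacher's
# temperature-softened outputs while still fitting the hard labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                  # rescale so gradients keep magnitude
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

student = torch.randn(8, 2, requires_grad=True)  # 8 samples, 2 classes (normal/anomaly)
teacher = torch.randn(8, 2)                      # frozen ensemble outputs
labels = torch.randint(0, 2, (8,))
loss = distillation_loss(student, teacher, labels)
loss.backward()
print(float(loss))
```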
Citations: 0
A joint time and energy-efficient federated learning-based computation offloading method for mobile edge computing
IF 6.2, CAS Zone 2 (Computer Science), Q1 (COMPUTER SCIENCE, THEORY & METHODS). Pub Date: 2025-12-09. DOI: 10.1016/j.future.2025.108295
Anwesha Mukherjee, Rajkumar Buyya
Computation offloading with lower latency and lower energy consumption is crucial for resource-constrained mobile devices. This paper proposes an offloading decision-making model based on federated learning. Based on the device configuration, task type, and input, the proposed model predicts whether the task is computationally intensive. If the predicted result is computationally intensive, the model then uses the network parameters to predict whether to offload the task or execute it locally. The experimental results show that the proposed method achieves above 90% prediction accuracy in offloading decision-making and reduces the response time and energy consumption of the user device by approximately 11-31%. A secure partial computation offloading method for federated learning is also proposed to deal with the straggler effect of federated learning. The results show that the proposed partial computation offloading method achieves a prediction accuracy above 98% for the global model.
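To make the offload-or-execute-locally decision concrete, the toy sketch below compares a weighted time/energy cost of local execution against offloading; the cost constants and the weighting stand in for the trained federated models described in the abstract.

```python
# Toy latency/energy comparison standing in for the trained offloading models.
def local_cost(task, device):
    """Return (seconds, joules) to execute the task on the device itself."""
    t = task["cycles"] / device["cpu_hz"]
    return t, device["active_power_w"] * t

def offload_cost(task, device, network, server_hz=3.0e9):
    """Return (seconds, joules from the device's point of view) when offloading."""
    tx = task["input_kb"] * 8e3 / (network["bandwidth_mbps"] * 1e6)   # upload time
    return tx + task["cycles"] / server_hz, device["tx_power_w"] * tx

def should_offload(task, device, network, w_time=0.5, w_energy=0.5):
    """Weighted time/energy comparison; weights and constants are assumed values."""
    lt, le = local_cost(task, device)
    ot, oe = offload_cost(task, device, network)
    return w_time * ot + w_energy * oe < w_time * lt + w_energy * le

task = {"cycles": 4e9, "input_kb": 800}
device = {"cpu_hz": 1.2e9, "active_power_w": 2.5, "tx_power_w": 1.1}
network = {"bandwidth_mbps": 20.0}
print(should_offload(task, device, network))     # True for this slow device / fast link
```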
Citations: 0
A volunteer-supported fog computing environment for DVFS based workflow scheduling
IF 6.2, CAS Zone 2 (Computer Science), Q1 (COMPUTER SCIENCE, THEORY & METHODS). Pub Date: 2025-12-09. DOI: 10.1016/j.future.2025.108301
Anahita Dehshid, Reihaneh Khorsand, Keyvan Mohebbi
Fog computing offers a decentralized paradigm designed to process time-critical Internet of Things (IoT) tasks with minimal latency. However, due to the resource-constrained nature of fog nodes, offloading tasks to the cloud is often necessary, resulting in increased delays. To mitigate this limitation, the integration of volunteer computing with fog environments is proposed, utilizing idle computational resources from nearby devices to support latency-sensitive workloads. Furthermore, energy efficiency is a critical concern in fog computing, as it influences both operational expenditure and environmental impact. This study introduces a twofold contribution to enhance workflow scheduling. First, a volunteer selection algorithm is developed to optimally match urgent workflow tasks with suitable volunteer devices. Second, a hybrid scheduling algorithm, Sobol-FDO-SC, combines the Sobol sequence for population initialization with the Fitness Dependent Optimizer (FDO) and the Sine Cosine Algorithm (SCA). The Sobol sequence improves global search capability by avoiding premature convergence, while SCA enhances convergence speed and balances exploration-exploitation dynamics. Additionally, Dynamic Voltage and Frequency Scaling (DVFS) is applied to optimize energy consumption. Experimental evaluations demonstrate that the proposed method outperforms existing techniques in terms of makespan, energy efficiency, cost, and SLA violations.
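The Sobol initialization step can be sketched with SciPy's quasi-Monte Carlo module, paired with a toy DVFS energy relation; the population size, bounds, and f-squared energy model below are illustrative assumptions.

```python
# Sobol-based population initialization (assuming SciPy's qmc module) plus a
# toy DVFS energy model; none of the constants come from the paper.
import numpy as np
from scipy.stats import qmc

def init_population(dim: int, low: float, high: float, m: int = 5) -> np.ndarray:
    """Return 2**m candidate schedules spread evenly over the search space."""
    sampler = qmc.Sobol(d=dim, scramble=True)
    unit = sampler.random_base2(m=m)             # low-discrepancy points in [0, 1)^dim
    return qmc.scale(unit, [low] * dim, [high] * dim)

def dvfs_energy(cycles: float, freq_hz: float, k: float = 1e-27) -> float:
    """Toy dynamic-power model: energy grows roughly with f^2 for a fixed workload."""
    return k * cycles * freq_hz ** 2

pop = init_population(dim=10, low=0.0, high=1.0)
print(pop.shape, dvfs_energy(cycles=4e9, freq_hz=1.6e9))
```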
Citations: 0
A two-layer asynchronous federated learning for heterogeneous IoT devices
IF 6.2, CAS Zone 2 (Computer Science), Q1 (COMPUTER SCIENCE, THEORY & METHODS). Pub Date: 2025-12-09. DOI: 10.1016/j.future.2025.108255
Jian Xu, Bing Guo, Yan Shen, Fei Chen
With the rapid development of Internet of Things (IoT) technology, edge devices such as smartphones and sensors generate large volumes of data. Although traditional synchronous federated learning frameworks can perform distributed model training while ensuring data privacy, they often face training bottlenecks and delays in IoT environments due to device heterogeneity and computational capacity differences. These issues significantly affect training efficiency and model performance. To address these challenges, we propose a two-layer asynchronous federated learning algorithm. The algorithm uses singular value decomposition for quantization and feature extraction of edge node data, and constructs a two-layer training architecture consisting of a central server, cluster leader nodes, and regular nodes through clustering methods. We design a two-stage asynchronous training process, where model parameters are first asynchronously submitted and aggregated within the cluster, and then the aggregation of the global model is improved by distinguishing the local convergence states of the nodes, thereby reducing communication overhead and mitigating model drift. Moreover, the algorithm implements inter-cluster synchronous training by quantifying the similarity of data features across clusters, improving the model’s generalization ability and accuracy. The experimental results on the Fashion-MNIST, CIFAR-10, Sentiment140, and Blue Gene/L datasets validate the effectiveness of our method. Compared with existing approaches, our algorithm demonstrates significant improvements in prediction accuracy while considerably reducing communication requirements.
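The SVD-based compression idea can be illustrated as follows: a node keeps only the top-r singular components of a local matrix before sending it to its cluster leader. The rank choice and the matrix being compressed are assumptions for illustration.

```python
# Sketch of SVD-based compression of what an edge node would upload.
import numpy as np

def svd_compress(mat: np.ndarray, rank: int):
    """Return the factors of a rank-r approximation (what a node would upload)."""
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    return u[:, :rank], s[:rank], vt[:rank, :]

def svd_restore(u, s, vt) -> np.ndarray:
    """What the cluster leader reconstructs before aggregation."""
    return (u * s) @ vt

rng = np.random.default_rng(0)
local = rng.standard_normal((256, 64))           # stand-in for a local update/feature matrix
u, s, vt = svd_compress(local, rank=8)
approx = svd_restore(u, s, vt)
sent = u.size + s.size + vt.size
print(f"compression ratio: {local.size / sent:.1f}x, "
      f"relative error: {np.linalg.norm(local - approx) / np.linalg.norm(local):.2f}")
```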
Citations: 0
Federated clustering: An unsupervised cluster-wise training for decentralized data distributions
IF 6.2, CAS Zone 2 (Computer Science), Q1 (COMPUTER SCIENCE, THEORY & METHODS). Pub Date: 2025-12-07. DOI: 10.1016/j.future.2025.108294
Mirko Nardi, Lorenzo Valerio, Andrea Passarella
Federated Learning (FL) enables decentralized machine learning while preserving data privacy, making it ideal for sensitive applications where data cannot be shared. While FL has been widely studied in supervised contexts, its application to unsupervised learning remains underdeveloped. This work introduces FedCRef, a novel unsupervised federated learning method designed to uncover all underlying data distributions across decentralized clients without requiring labels. This task, known as Federated Clustering, presents challenges due to heterogeneous, non-uniform data distributions and the lack of centralized coordination. Unlike previous methods that assume a one-cluster-per-client setup or require prior knowledge of the number of clusters, FedCRef generalizes to multi-cluster-per-client scenarios. Clients iteratively refine their data partitions while discovering all distinct distributions in the system. The process combines local clustering, model exchange and evaluation via reconstruction error analysis, and collaborative refinement within federated groups of similar distributions to enhance clustering accuracy. Extensive evaluations on four public datasets (EMNIST, KMNIST, Fashion-MNIST and KMNIST49) show that FedCRef successfully identifies true global data distributions, achieving an average local accuracy of up to 95 %. The method is also robust to noisy conditions, scalable, and lightweight, making it suitable for resource-constrained edge devices.
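The model-evaluation step described above can be sketched as follows: a client clusters its own data, then scores a received cluster model by the reconstruction (quantization) error on local data to decide which federated group fits best. The clustering backend and the comparison rule are illustrative assumptions.

```python
# Sketch of scoring a received cluster model by local reconstruction error.
import numpy as np
from sklearn.cluster import KMeans

def local_clusters(data: np.ndarray, k: int) -> np.ndarray:
    """Centroids the client would share with its federation."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(data).cluster_centers_

def reconstruction_error(data: np.ndarray, centroids: np.ndarray) -> float:
    """Mean squared distance from each point to its nearest received centroid."""
    d2 = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return float(d2.min(axis=1).mean())

rng = np.random.default_rng(1)
mine = rng.normal(0.0, 1.0, size=(200, 4))       # this client's unlabeled data
own = local_clusters(mine, k=2)
foreign = rng.normal(5.0, 1.0, size=(2, 4))      # centroids from a dissimilar client
print(reconstruction_error(mine, own) < reconstruction_error(mine, foreign))  # True
```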
Citations: 0