
Latest Publications: Future Generation Computer Systems-The International Journal of Escience

A hybrid metaheuristics-Bayesian optimization framework with safe transfer learning for continuous spark tuning
IF 6.2 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-12-16 | DOI: 10.1016/j.future.2025.108325
Mariano Garralda-Barrio, Carlos Eiras-Franco, Verónica Bolón-Canedo
Tuning configuration parameters in distributed Big Data engines such as Apache Spark is a high-dimensional, workload-dependent problem with significant impact on performance and operational cost. We address this challenge with a hybrid optimization framework that integrates Iterated Local Search, Tabu Search, and locally embedded Bayesian Optimization guided by STL-PARN (safe transfer learning with pattern-adaptive robust neighborhoods). Historical executions are partitioned into a Nucleus of reliable neighbors and a Corona of exploratory configurations, ensuring relevance while mitigating negative transfer. The surrogate within the embedded Bayesian Optimization stage decouples performance prediction from uncertainty modeling, enabling parameter-free acquisition functions that self-adapt to diverse workloads. Experiments on a modernized HiBench suite across multiple input scales show consistent gains over state-of-the-art baselines in execution time, convergence, and cost efficiency. Overall, the results demonstrate the robustness and practical value of embedding Bayesian Optimization within a global metaheuristic loop for adaptive, cost-aware Spark tuning. All source code and datasets are publicly available, supporting reproducibility and operational efficiency in large-scale data processing.
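The core loop the abstract describes, a surrogate model embedded inside an iterated local search, can be sketched in a few lines. This is a toy illustration under stated assumptions, not the authors' STL-PARN framework: the objective function, neighborhood, and nearest-neighbor "surrogate" below are invented stand-ins for a measured Spark runtime and a Bayesian model.

```python
import random

# Toy stand-in for a measured Spark job runtime over two config knobs
# (executor memory, cores); the real framework measures actual executions.
def runtime(cfg):
    mem, cores = cfg
    return (mem - 8) ** 2 + (cores - 5) ** 2 + random.uniform(0, 0.5)

def neighbors(cfg):
    mem, cores = cfg
    cand = [(mem + dm, cores + dc)
            for dm in (-1, 0, 1) for dc in (-1, 0, 1) if (dm, dc) != (0, 0)]
    return [(max(1, m), max(1, c)) for m, c in cand]

# Crude surrogate: score a candidate by the mean cost of its three nearest
# previously observed configs (the paper uses a Bayesian model instead).
def surrogate_pick(cands, history):
    def predict(c):
        near = sorted(history,
                      key=lambda h: abs(h[0][0] - c[0]) + abs(h[0][1] - c[1]))[:3]
        return sum(cost for _, cost in near) / len(near)
    return min(cands, key=predict)

def iterated_local_search(start, iters=40):
    best = cur = start
    best_cost = runtime(start)
    history = [(best, best_cost)]
    for _ in range(iters):
        cand = surrogate_pick(neighbors(cur), history)  # model-guided move
        cost = runtime(cand)                            # real evaluation
        history.append((cand, cost))
        if cost < best_cost:
            best, best_cost, cur = cand, cost, cand
        else:
            cur = random.choice(neighbors(best))        # ILS perturbation step
    return best, best_cost

random.seed(0)
print(iterated_local_search((1, 1)))
```

The key design point mirrored here is that the model only *proposes* moves; every accepted configuration is still evaluated for real, so the surrogate's mistakes cannot corrupt the incumbent best.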
{"title":"A hybrid metaheuristics-Bayesian optimization framework with safe transfer learning for continuous spark tuning","authors":"Mariano Garralda-Barrio,&nbsp;Carlos Eiras-Franco,&nbsp;Verónica Bolón-Canedo","doi":"10.1016/j.future.2025.108325","DOIUrl":"10.1016/j.future.2025.108325","url":null,"abstract":"<div><div>Tuning configuration parameters in distributed Big Data engines such as Apache Spark is a high-dimensional, workload-dependent problem with significant impact on performance and operational cost. We address this challenge with a hybrid optimization framework that integrates Iterated Local Search, Tabu Search, and locally embedded Bayesian Optimization guided by STL-PARN (safe transfer learning with pattern-adaptive robust neighborhoods). Historical executions are partitioned into a Nucleus of reliable neighbors and a Corona of exploratory configurations, ensuring relevance while mitigating negative transfer. The surrogate within the embedded Bayesian Optimization stage decouples performance prediction from uncertainty modeling, enabling parameter-free acquisition functions that self-adapt to diverse workloads. Experiments on a modernized HiBench suite across multiple input scales show consistent gains over state-of-the-art baselines in execution time, convergence, and cost efficiency. Overall, the results demonstrate the robustness and practical value of embedding Bayesian Optimization within a global metaheuristic loop for adaptive, cost-aware Spark tuning. 
All source code and datasets are publicly available, supporting reproducibility and operational efficiency in large-scale data processing.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"178 ","pages":"Article 108325"},"PeriodicalIF":6.2,"publicationDate":"2025-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145785044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
DART: A state-aware online co-scheduling runtime for data-parallel training
IF 6.2 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-12-16 | DOI: 10.1016/j.future.2025.108303
Teh-Jen Sun , A-Young Son , Eui-Nam Huh
Data-parallel training at scale is often run with static settings, which waste time when compute, input, and communication bottlenecks shift. Dynamic control can shorten wall clock time but, without a state-aware estimator, it tends to chase high-variance per-node measurements and treats resource importance as time-invariant despite changes in the training and cluster state. We present DART, a framework-agnostic online co-scheduling runtime that infers state-conditioned resource-importance weights (an attribution over compute, memory, input, communication, and thermal headroom inferred from CPU/GPU temperatures) via a cubature Kalman filter and jointly updates per-node dataset shard fraction, batch size, data-loader workers, and learning rate scale using accuracy-tracking, rate-limited steps at epoch boundaries (overhead  < 2 %). Across 12 model-dataset configurations on 2–12 nodes, DART shortens wall clock time by up to 63.44 % (median 31.95 %) while keeping final Top-1 within 0.93 percentage points of static DDP. Trace and correlation analyses indicate fewer synchronizations and reduced compute skew rather than changes to the optimization trajectory.
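Why a state-aware estimator beats chasing raw measurements can be seen with a minimal scalar Kalman filter. DART uses a cubature Kalman filter over several resources jointly; this 1-D linear version, with an invented drifting "importance" signal, only illustrates the variance-reduction argument.

```python
import random

# Minimal scalar Kalman filter tracking one slowly drifting
# "resource importance" value from noisy per-epoch measurements.
def kalman_track(measurements, q=0.01, r=1.0):
    x, p = measurements[0], 1.0          # state estimate and its variance
    estimates = []
    for z in measurements:
        p += q                           # predict: importance drifts slowly
        k = p / (p + r)                  # Kalman gain
        x += k * (z - x)                 # correct with the new measurement
        p *= (1 - k)
        estimates.append(x)
    return estimates

random.seed(0)
true_w = [0.5 + 0.002 * t for t in range(200)]       # slow drift (assumed)
noisy = [w + random.gauss(0, 0.3) for w in true_w]   # high-variance readings
est = kalman_track(noisy)

raw_err = sum(abs(n - w) for n, w in zip(noisy, true_w)) / 200
kf_err = sum(abs(e - w) for e, w in zip(est, true_w)) / 200
print(round(raw_err, 3), round(kf_err, 3))
```

A controller acting on the filtered estimate moves smoothly with the underlying state instead of reacting to each noisy sample, which is the behavior the rate-limited epoch-boundary updates rely on.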
{"title":"DART: A state-aware online co-scheduling runtime for data-parallel training","authors":"Teh-Jen Sun ,&nbsp;A-Young Son ,&nbsp;Eui-Nam Huh","doi":"10.1016/j.future.2025.108303","DOIUrl":"10.1016/j.future.2025.108303","url":null,"abstract":"<div><div>Data-parallel training at scale is often run with static settings, which waste time when compute, input, and communication bottlenecks shift. Dynamic control can shorten wall clock time but, without a state-aware estimator, it tends to chase high-variance per-node measurements and treats resource importance as time-invariant despite changes in the training and cluster state. We present DART, a framework-agnostic online co-scheduling runtime that infers state-conditioned resource-importance weights (an attribution over compute, memory, input, communication, and thermal headroom inferred from CPU/GPU temperatures) via a cubature Kalman filter and jointly updates per-node dataset shard fraction, batch size, data-loader workers, and learning rate scale using accuracy-tracking, rate-limited steps at epoch boundaries (overhead  &lt; 2 %). Across 12 model-dataset configurations on 2–12 nodes, DART shortens wall clock time by up to 63.44 % (median 31.95 %) while keeping final Top-1 within 0.93 percentage points of static DDP. 
Trace and correlation analyses indicate fewer synchronizations and reduced compute skew rather than changes to the optimization trajectory.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"178 ","pages":"Article 108303"},"PeriodicalIF":6.2,"publicationDate":"2025-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145785048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Clean up the mess: Addressing data pollution in cryptocurrency abuse reporting services
IF 6.2 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-12-15 | DOI: 10.1016/j.future.2025.108313
Gibran Gomez , Kevin van Liebergen , Davide Sanvito , Giuseppe Siracusano , Roberto Gonzalez , Juan Caballero
Cryptocurrency abuse reporting services are a valuable data source about abusive blockchain addresses, prevalent types of cryptocurrency abuse, and their financial impact on victims. However, they may suffer data pollution due to their crowd-sourced nature. This work analyzes the extent and impact of data pollution in cryptocurrency abuse reporting services and proposes a novel LLM-based defense to address the pollution. We collect 289K abuse reports submitted over 6 years to two popular services and use them to answer three research questions. RQ1 analyzes the extent and impact of pollution. We show that spam reports will eventually flood unchecked abuse reporting services, with BitcoinAbuse receiving 75 % of spam before stopping operations. We build a public dataset of 19,443 abuse reports labeled with 19 popular abuse types and use it to reveal the inaccuracy of user-reported abuse types. We identified 91 (0.1 %) benign addresses reported, responsible for 60 % of all the received funds. RQ2 examines whether we can automate identifying valid reports and their classification into abuse types. We propose an unsupervised LLM-based classifier that achieves an F1 score of 0.95 when classifying reports, an F1 of 0.89 when classifying out-of-distribution data, and an F1 of 0.99 when identifying spam reports. Our unsupervised LLM-based classifier clearly outperforms two baselines: a supervised classifier and a naive usage of the LLM. Finally, RQ3 demonstrates the usefulness of our LLM-based classifier for quantifying the financial impact of different cryptocurrency abuse types. We show that victim-reported losses heavily underestimate cybercriminal revenue by estimating a 29 times higher revenue from deposit transactions. We identified that investment scams have the highest financial impact and that extortions have lower conversion rates but compensate for them with massive email campaigns.
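The per-class F1 scores the paper reports for its report classifier are computed as below. The abuse-type labels in the example are illustrative, not drawn from the paper's dataset.

```python
# Per-class precision, recall, and F1 from true vs. predicted abuse types.
def f1_per_class(y_true, y_pred):
    scores = {}
    for cls in set(y_true) | set(y_pred):
        tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
        fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
        fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores[cls] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores

# Hypothetical labels: one sextortion report misclassified as spam.
y_true = ["scam", "spam", "sextortion", "spam", "scam", "spam"]
y_pred = ["scam", "spam", "spam", "spam", "scam", "spam"]
print(f1_per_class(y_true, y_pred))
```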
{"title":"Clean up the mess: Addressing data pollution in cryptocurrency abuse reporting services","authors":"Gibran Gomez ,&nbsp;Kevin van Liebergen ,&nbsp;Davide Sanvito ,&nbsp;Giuseppe Siracusano ,&nbsp;Roberto Gonzalez ,&nbsp;Juan Caballero","doi":"10.1016/j.future.2025.108313","DOIUrl":"10.1016/j.future.2025.108313","url":null,"abstract":"<div><div>Cryptocurrency abuse reporting services are a valuable data source about abusive blockchain addresses, prevalent types of cryptocurrency abuse, and their financial impact on victims. However, they may suffer data pollution due to their crowd-sourced nature. This work analyzes the extent and impact of data pollution in cryptocurrency abuse reporting services and proposes a novel LLM-based defense to address the pollution. We collect 289K abuse reports submitted over 6 years to two popular services and use them to answer three research questions. RQ1 analyzes the extent and impact of pollution. We show that spam reports will eventually flood unchecked abuse reporting services, with BitcoinAbuse receiving 75 % of spam before stopping operations. We build a public dataset of 19,443 abuse reports labeled with 19 popular abuse types and use it to reveal the inaccuracy of user-reported abuse types. We identified 91 (0.1 %) benign addresses reported, responsible for 60 % of all the received funds. RQ2 examines whether we can automate identifying valid reports and their classification into abuse types. We propose an unsupervised LLM-based classifier that achieves an F1 score of 0.95 when classifying reports, an F1 of 0.89 when classifying out-of-distribution data, and an F1 of 0.99 when identifying spam reports. Our unsupervised LLM-based classifier clearly outperforms two baselines: a supervised classifier and a naive usage of the LLM. Finally, RQ3 demonstrates the usefulness of our LLM-based classifier for quantifying the financial impact of different cryptocurrency abuse types. 
We show that victim-reported losses heavily underestimate cybercriminal revenue by estimating a 29 times higher revenue from deposit transactions. We identified that investment scams have the highest financial impact and that extortions have lower conversion rates but compensate for them with massive email campaigns.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"179 ","pages":"Article 108313"},"PeriodicalIF":6.2,"publicationDate":"2025-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145785052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
HRB: A backfilling algorithm for heterogeneous clusters with job prioritization
IF 6.2 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-12-15 | DOI: 10.1016/j.future.2025.108309
Jaime Palacios, Esteban Stafford, José Luis Bosque
Backfilling is a widely used scheduling technique in High-Performance Computing (HPC) systems to improve resource utilization. However, traditional approaches like EASY Backfill were devised for mono-core homogeneous environments, without considering the implications of multi-core architectures or the individual characteristics of nodes in heterogeneous clusters. This article proposes two refinements of EASY called Heterogeneous Backfill (HB) and Heterogeneous Reordering Backfill (HRB). These algorithms adapt the backfilling strategy to heterogeneous multi-core environments by incorporating node properties into the scheduling process. The HB algorithm sorts nodes based on a given criterion, such as power consumption or performance, to improve resource allocation. The HRB algorithm extends this approach by incorporating job reordering criteria, allowing for more efficient backfilling decisions. An evaluation of these algorithms shows that they can significantly reduce energy consumption and improve scheduling efficiency in heterogeneous clusters. The results demonstrate that the proposed algorithms outperform traditional backfilling methods, such as EASY Backfill, in terms of energy consumption, waiting time or makespan. By embracing the heterogeneity of modern HPC systems, these algorithms enable more efficient resource utilization and contribute to the overall performance of large-scale computing environments.
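The node-ordering idea behind HB can be sketched as follows: before backfilling a job, rank idle heterogeneous nodes by a criterion such as power draw, so short jobs land on the cheapest node that fits. The field names are illustrative; the paper's algorithms additionally handle reservations and, in HRB, job reordering.

```python
# Pick the best node for a backfill candidate by a given criterion
# (lowest power draw here; performance would sort descending instead).
def allocate(job_cores, nodes, criterion="power_w"):
    idle = sorted((n for n in nodes if n["free_cores"] >= job_cores),
                  key=lambda n: n[criterion])
    if not idle:
        return None                      # job cannot be backfilled now
    chosen = idle[0]
    chosen["free_cores"] -= job_cores
    return chosen["name"]

cluster = [
    {"name": "fat",  "free_cores": 64, "power_w": 900},
    {"name": "mid",  "free_cores": 16, "power_w": 350},
    {"name": "thin", "free_cores": 8,  "power_w": 120},
]
print(allocate(4, cluster))   # → thin: the low-power node fits and wins
print(allocate(32, cluster))  # → fat: only the big node has room
```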
{"title":"HRB: A backfilling algorithm for heterogeneous clusters with job prioritization","authors":"Jaime Palacios,&nbsp;Esteban Stafford,&nbsp;José Luis Bosque","doi":"10.1016/j.future.2025.108309","DOIUrl":"10.1016/j.future.2025.108309","url":null,"abstract":"<div><div>Backfilling is a widely used scheduling technique in High-Performance Computing (HPC) systems to improve resource utilization. However, traditional approaches like EASY Backfill were devised for mono-core homogeneous environments, without considering the implications of multi-core architectures or the individual characteristics of nodes in heterogeneous clusters. This article proposes two refinements of EASY called Heterogeneous Backfill (HB) and Heterogeneous Reordering Backfill (HRB). These algorithms adapt the backfilling strategy to heterogeneous multi-core environments by incorporating node properties into the scheduling process. The HB algorithm sorts nodes based on a given criterion, such as power consumption or performance, to improve resource allocation. The HRB algorithm extends this approach by incorporating job reordering criteria, allowing for more efficient backfilling decisions. An evaluation of these algorithms shows that they can significantly reduce energy consumption and improve scheduling efficiency in heterogeneous clusters. The results demonstrate that the proposed algorithms outperform traditional backfilling methods, such as EASY Backfill, in terms of energy consumption, waiting time or makespan. 
By embracing the heterogeneity of modern HPC systems, these algorithms enable more efficient resource utilization and contribute to the overall performance of large-scale computing environments.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"178 ","pages":"Article 108309"},"PeriodicalIF":6.2,"publicationDate":"2025-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145797719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
interTwin: Advancing Scientific Digital Twins through AI, Federated Computing and Data
IF 6.2 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-12-15 | DOI: 10.1016/j.future.2025.108312
Andrea Manzi , Raul Bardaji , Ivan Rodero , Germán Moltó , Sandro Fiore , Isabel Campos , Donatello Elia , Francesco Sarandrea , A. Paul Millar , Daniele Spiga , Matteo Bunino , Gabriele Accarino , Lorenzo Asprea , Samuel Bernardo , Miguel Caballer , Charis Chatzikyriakou , Diego Ciangottini , Michele Claus , Andrea Cristofori , Davide Donno , Juraj Zvolensky
The EU project interTwin co-designed and implemented the prototype of an interdisciplinary Digital Twin Engine (DTE), an open-source platform that provides generic and domain-specific software components for modelling and simulation to integrate application-specific Digital Twins (DTs). The DTE is built upon a co-designed conceptual model, the DTE blueprint architecture, guided by open standards and interoperability principles. The ambition is to develop a unified approach to the implementation of DTs that is applicable across diverse scientific disciplines, fostering collaboration and facilitating development. Co-design involved DT use cases from high-energy physics, radio astronomy, astroparticle physics, climate research, and environmental monitoring, which drove advancements in modelling and simulation by leveraging heterogeneous distributed digital infrastructures, enabling dynamic workflow composition, real-time data management and processing, quality and uncertainty tracing of models, and multi-source data fusion.
{"title":"interTwin: Advancing Scientific Digital Twins through AI, Federated Computing and Data","authors":"Andrea Manzi ,&nbsp;Raul Bardaji ,&nbsp;Ivan Rodero ,&nbsp;Germán Moltó ,&nbsp;Sandro Fiore ,&nbsp;Isabel Campos ,&nbsp;Donatello Elia ,&nbsp;Francesco Sarandrea ,&nbsp;A. Paul Millar ,&nbsp;Daniele Spiga ,&nbsp;Matteo Bunino ,&nbsp;Gabriele Accarino ,&nbsp;Lorenzo Asprea ,&nbsp;Samuel Bernardo ,&nbsp;Miguel Caballer ,&nbsp;Charis Chatzikyriakou ,&nbsp;Diego Ciangottini ,&nbsp;Michele Claus ,&nbsp;Andrea Cristofori ,&nbsp;Davide Donno ,&nbsp;Juraj Zvolensky","doi":"10.1016/j.future.2025.108312","DOIUrl":"10.1016/j.future.2025.108312","url":null,"abstract":"<div><div>The EU project interTwin, co-designed and implemented the prototype of an interdisciplinary Digital Twin Engine (DTE), an open-source platform that provides generic and domain-specific software components for modelling and simulation to integrate application-specific Digital Twins (DTs). The DTE is built upon a co-designed conceptual model - the DTE blueprint architecture - guided by open standards and interoperability principles. The ambition is to develop a unified approach to the implementation of DTs that is applicable across diverse scientific disciplines to foster collaborations and facilitate developments. 
Co-design involved DT use cases from high-energy physics, radio astronomy, astroparticle physics, climate research, and environmental monitoring, which drove advancements in modelling and simulation by leveraging heterogeneous distributed digital infrastructures, enabling dynamic workflow composition, real-time data management and processing, quality and uncertainty tracing of models, and multi-source data fusion.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"179 ","pages":"Article 108312"},"PeriodicalIF":6.2,"publicationDate":"2025-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145785045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
SoA-SDA: Quantum-Resistant, Energy-Efficient In-Network Aggregation Protocol for Resource-Constrained Environment
IF 6.2 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-12-15 | DOI: 10.1016/j.future.2025.108321
Lei Song , Leyi Shi , Xiuli Ren , Xiaoguang Li
With the rapid development of quantum computing, secure in-network aggregation is essential for sensitive information in resource-constrained environments. However, traditional aggregation methods often fall short due to their high computational costs and security concerns. Meanwhile, their security methods are becoming less effective when defending against quantum attacks. Therefore, it is critical for aggregation techniques to develop solutions that can withstand quantum computing threats while minimizing overhead. In this paper, we use lattice cryptography to defend against quantum attacks. Given the significant computational cost of lattice encryption, we categorize nodes into sensitive and non-sensitive, applying different encryption methods accordingly. Lattice encryption secures sensitive data, while data compression further reduces the computational load. For better differentiation between sensitive types, we design a hypertree. The leaf nodes are assigned α and β values (known as weak game-theoretical perturbations), while the control center, located at the root, uses additional weights to determine the optimal routing path. Watermarks are used to distinguish between sensitive and non-sensitive nodes within the same layer. These watermarks help identify nodes at the same level, allowing data packets containing watermark and weight metadata to be forwarded to the next node for secure aggregation. The highest-weight nodes undergo aggregation at the control center. This approach is implemented in the SoA-SDA (State-of-the-Art Secure Data Aggregation) protocol. Evaluations in small-scale settings show that SoA-SDA outperforms existing solutions with lower overhead, better fault tolerance, and reduced latency. Large-scale tests further highlight its strong compatibility and robust security against attacks like MITM, side-channel, DoS, and Sybil, while maintaining quantum resistance.
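The weight-driven routing idea can be sketched as a recursive path search in a tree: the control center at the root selects the child path with the highest cumulative weight. This is a purely illustrative toy of that one aspect; the protocol's actual hypertree, α/β perturbations, and watermarking are far richer, and the tree below is invented.

```python
# Each node carries a name and a weight; leaves have no "children" key.
# Return the maximum-cumulative-weight root-to-leaf path and its weight.
def best_path(tree):
    if not tree.get("children"):
        return [tree["name"]], tree["weight"]
    paths = [best_path(c) for c in tree["children"]]
    path, w = max(paths, key=lambda pw: pw[1])
    return [tree["name"]] + path, tree["weight"] + w

root = {"name": "cc", "weight": 0, "children": [
    {"name": "a", "weight": 2, "children": [
        {"name": "leaf1", "weight": 5},
        {"name": "leaf2", "weight": 1}]},
    {"name": "b", "weight": 4, "children": [
        {"name": "leaf3", "weight": 2}]},
]}
print(best_path(root))  # → (['cc', 'a', 'leaf1'], 7)
```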
{"title":"SoA-SDA: Quantum-Resistant, Energy-Efficient In-Network Aggregation Protocol for Resource-Constrained Environment","authors":"Lei Song ,&nbsp;Leyi Shi ,&nbsp;Xiuli Ren ,&nbsp;Xiaoguang Li","doi":"10.1016/j.future.2025.108321","DOIUrl":"10.1016/j.future.2025.108321","url":null,"abstract":"<div><div>With the rapid development of quantum, secure in-network aggregation is essential for sensitive information in resource-constrained environments. However, traditional aggregation methods often fall short due to their high computational costs and security concerns. Meanwhile, their security methods are becoming less effective when defending against quantum attacks. Therefore, it is critical for aggregation techniques to develop solutions that can withstand quantum computing threats while minimizing overhead. In this paper, we use lattice cryptography to defend against quantum attacks. Given the significant computational cost of lattice encryption, we categorize nodes into sensitive and non-sensitive, applying different encryption methods accordingly. Lattice encryption secures sensitive data, while data compression further reduces the computational load. For better differentiation between sensitive types, we design a hypertree. The leaf nodes are assigned <em>α</em> and <em>β</em> values(known as weak game-theoretical perturbations), while the control center, located at the root, uses other weight to determine the optimal routing path. Watermarks are used to distinguish between sensitive and non-sensitive nodes within the same layer. These watermarks help identify nodes at the same level, allowing data packets containing watermark and weight metadata to be forwarded to the next node for secure aggregation. The highest-weight nodes undergo aggregation at the control center. This approach is implemented in the SoA-SDA (State-of-the-Art Secure Data Aggregation) protocol. 
Evaluations in small-scale settings show that SoA-SDA outperforms existing solutions with lower overhead, better fault tolerance, and reduced latency. Large-scale tests further highlight its strong compatibility and robust security against attacks like MITM, side-channel, DoS, and Sybil, while maintaining quantum resistance.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"178 ","pages":"Article 108321"},"PeriodicalIF":6.2,"publicationDate":"2025-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145785046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
LLM-APTDS: A high-precision advanced persistent threat detection system for imbalanced data based on large language models with strong interpretability
IF 6.2 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-12-15 | DOI: 10.1016/j.future.2025.108315
Longjing Yang , Ayong Ye , Yuanhuang Liu , Wenting Lu , Chuang Huang
Advanced persistent threats (APTs) pose a significant challenge to global cybersecurity, causing substantial economic losses. Existing detection methods often rely on expert-defined rules to map anomalous events to APT tactics. Still, they are highly dependent on prior knowledge, making them unsuitable for dynamic and complex attack scenarios. This results in insufficient fine-grained activity identification and attack provenance capabilities. This study proposes LLM-APTDS, an APT detection system based on large language models (LLMs). First, a multi-model collaborative detection architecture is constructed to leverage LLMs’ semantic understanding for precise localization of log anomalies. Second, a K-nearest neighbor graph reconstruction algorithm is designed to reconstruct the relevant neighborhood graph of malicious entities, enhancing contextual awareness of attack behavior. Finally, a cyclically enhanced analysis mechanism, guided by the Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) knowledge graph, allows the LLM to iteratively reason and generate threat intelligence reports with multiple dimensions, while simultaneously providing multi-layered explanations and automated mitigation strategies. Experiments using the Defense Advanced Research Projects Agency Transparent Computing Engagement 3 (DARPA TC-E3) dataset demonstrate that, compared to baseline methods, the proposed system achieves a 5 % improvement in detection precision and a 4 % increase in F1-score, while producing high-quality, multi-dimensional threat intelligence reports.
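The neighborhood-reconstruction step can be illustrated with a minimal k-nearest-neighbor graph over entity feature vectors: for each flagged entity, pull in its k closest entities to give the model context. The entities, 2-D vectors, and k below are invented for illustration; the paper's algorithm operates on provenance-graph entities.

```python
from math import dist

# Build a k-nearest-neighbour graph: each entity maps to its k closest
# neighbours by Euclidean distance over (illustrative) feature vectors.
def knn_graph(points, k=2):
    graph = {}
    for name, p in points.items():
        others = [(dist(p, q), other)
                  for other, q in points.items() if other != name]
        graph[name] = [other for _, other in sorted(others)[:k]]
    return graph

entities = {"proc_a": (0, 0), "proc_b": (1, 0),
            "file_x": (0, 1), "sock_y": (5, 5)}
print(knn_graph(entities))
```

The distant `sock_y` never appears among the near entities' neighbors, which is the pruning effect that keeps the reconstructed context relevant to the flagged entity.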
{"title":"LLM-APTDS: A high-precision advanced persistent threat detection system for imbalanced data based on large language models with strong interpretabilit","authors":"Longjing Yang ,&nbsp;Ayong Ye ,&nbsp;Yuanhuang Liu ,&nbsp;Wenting Lu ,&nbsp;Chuang Huang","doi":"10.1016/j.future.2025.108315","DOIUrl":"10.1016/j.future.2025.108315","url":null,"abstract":"<div><div>Advanced persistent threats (APTs) pose a significant challenge to global cybersecurity, causing substantial economic losses. Existing detection methods often rely on expert-defined rules to map anomalous events to APT tactics. Still, they are highly dependent on prior knowledge, making them unsuitable for dynamic and complex attack scenarios. This results in insufficient fine-grained activity identification and attack provenance capabilities. This study proposes LLM-APTDS, an APT detection system based on large language models (LLMs). First, a multi-model collaborative detection architecture is constructed to leverage LLMs’ semantic understanding for precise localization of log anomalies. Second, a K-nearest neighbor graph reconstruction algorithm is designed to reconstruct the relevant neighborhood graph of malicious entities, enhancing contextual awareness of attack behavior. Finally, a cyclically enhanced analysis mechanism, guided by the Adversarial Tactics, Techniques, and Common Knowledge (ATT&amp;CK) knowledge graph, allows the LLM to iteratively reason and generate threat intelligence reports with multiple dimensions, while simultaneously providing multi-layered explanations and automated mitigation strategies. 
Experiments using the Defense Advanced Research Projects Agency Transparent Computing Engagement 3 (DARPA TC-E3) dataset demonstrate that, compared to baseline methods, the proposed system achieves a 5 % improvement in detection precision and a 4 % increase in F1-score, while producing high-quality, multi-dimensional threat intelligence reports.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"178 ","pages":"Article 108315"},"PeriodicalIF":6.2,"publicationDate":"2025-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145785043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
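The abstract above does not spell out its K-nearest neighbor graph reconstruction, but the underlying idea — rebuilding a bounded neighborhood subgraph around a flagged entity so the analysis step sees its attack context — can be sketched as a depth-limited traversal over provenance-style edges. All entity names and the helper `neighborhood_subgraph` below are hypothetical illustrations, not the paper's implementation:

```python
from collections import deque

def neighborhood_subgraph(edges, seed, k):
    """Collect every node within k hops of `seed` in an undirected
    event graph given as (src, dst) pairs, plus the induced edges."""
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, set()).add(dst)
        adj.setdefault(dst, set()).add(src)
    seen, frontier = {seed}, deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue  # do not expand past the hop limit
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    induced = [(s, d) for s, d in edges if s in seen and d in seen]
    return seen, induced

# Toy provenance edges: process -> file / socket relations.
events = [("bash", "malware.exe"), ("malware.exe", "10.0.0.5:443"),
          ("malware.exe", "/etc/passwd"), ("cron", "logrotate")]
nodes, sub = neighborhood_subgraph(events, "malware.exe", 1)
```

With `k=1` the subgraph keeps only the entities directly touching the flagged process, dropping unrelated activity such as the `cron` edge.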
Citations: 0
A pattern-aware LSTM-based approach for APT detection leveraging a realistic dataset for critical infrastructure security
IF 6.2, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2025-12-14. DOI: 10.1016/j.future.2025.108308
Eider Iturbe, Christos Dalamagkas, Panagiotis Radoglou-Grammatikis, Erkuden Rios, Nerea Toledo
Advanced Persistent Threats (APTs) represent some of the most sophisticated and coordinated cyberattacks, often targeting critical infrastructure with stealthy, multi-stage techniques. Despite the availability of numerous intrusion detection datasets, most fail to capture the sequential and strategic nature of APT campaigns as outlined in frameworks like MITRE ATT&CK. This paper introduces a novel dataset based on a realistic emulation of the Sandworm APT group targeting the Supervisory Control and Data Acquisition (SCADA) system of a Wide Area Measurement System (WAMS). The dataset captures the full lifecycle of an APT attack, from initial access to impact, in a structured and time-ordered manner, enabling the study of both atomic and multi-step intrusion behaviours. We train and evaluate supervised multiclass sequence-aware models, specifically Long Short-Term Memory (LSTM) and Bidirectional LSTM (BiLSTM) architectures, to detect these behaviours using network flow data, assessing their performance and analysing their strengths and limitations. Our results show that BiLSTM models offer greater stability and generalization, while LSTM models achieve competitive performance with optimal configurations. These findings highlight the importance of realistic, sequence-aware datasets for developing robust intrusion detection systems tailored to modern APT threats.
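Sequence models like the LSTM/BiLSTM classifiers above consume network flow data as fixed-length, time-ordered windows. A minimal sketch of that preprocessing step, assuming each flow is a (feature vector, label) pair with ATT&CK-style tactic labels — the helper `flow_windows` and the toy data are illustrative, not from the paper:

```python
def flow_windows(flows, length, step=1):
    """Slice a time-ordered list of (features, label) flow records into
    fixed-length overlapping windows for a sequence model; each window
    inherits the label of its most recent flow."""
    windows = []
    for start in range(0, len(flows) - length + 1, step):
        chunk = flows[start:start + length]
        feats = [f for f, _ in chunk]          # stacked per-flow features
        windows.append((feats, chunk[-1][1]))  # label = last flow's label
    return windows

# Toy flows: (feature vector, ATT&CK-style tactic label).
flows = [([0.1, 2], "benign"), ([0.9, 40], "initial-access"),
         ([0.8, 35], "execution"), ([0.2, 3], "benign")]
wins = flow_windows(flows, length=2)
```

Labelling each window by its last flow is one common convention; per-window majority labels are another reasonable choice depending on how the dataset annotates multi-step behaviour.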
Future Generation Computer Systems, vol. 178, Article 108308.
Citations: 0
MPI malleability validation under replayed real-world HPC conditions
IF 6.2, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2025-12-13. DOI: 10.1016/j.future.2025.108305
Sergio Iserte, Maël Madon, Georges Da Costa, Jean-Marc Pierson, Antonio J. Peña
Dynamic Resource Management (DRM) techniques can be leveraged to maximize throughput and resource utilization in computational clusters. Although DRM has been extensively studied through analytical workloads and simulations, skepticism persists among administrators and end users regarding its feasibility under real-world conditions. To address this problem, we propose a novel methodology for validating DRM techniques, such as malleability, in realistic scenarios that reproduce actual cluster conditions of jobs and users by replaying workload logs on a High-Performance Computing (HPC) infrastructure. Our methodology is capable of adapting the workload to the target cluster. We evaluate our methodology in a malleability-enabled 125-node partition of the MareNostrum 5 supercomputer. Our results validate the proposed method and assess the benefits of MPI malleability on a novel use case of a pioneer user of malleability (our “PhD Student”): parallel-efficiency-aware malleability reduced a malleable workload's time by 27% without delaying the baseline workload, introducing queueing delays for individual jobs but maintaining the resource utilization rate.
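The workload-adaptation step mentioned above is not detailed in the abstract, but a common way to replay a log recorded on a larger machine against a smaller partition is to rescale each job's node request proportionally, clamping to the partition bounds. A sketch under that assumption — the helper `adapt_workload` and the job names are hypothetical:

```python
def adapt_workload(jobs, source_nodes, target_nodes):
    """Rescale replayed job node requests from the logged cluster size
    to the target partition, clamping each request to at least one node
    and at most the partition size."""
    scale = target_nodes / source_nodes
    adapted = []
    for name, nodes, runtime in jobs:
        req = max(1, min(target_nodes, round(nodes * scale)))
        adapted.append((name, req, runtime))
    return adapted

# Jobs logged on a 1000-node machine, replayed on a 125-node partition.
log = [("sim-A", 512, 3600), ("sim-B", 8, 600), ("post", 1, 120)]
replay = adapt_workload(log, source_nodes=1000, target_nodes=125)
```

Runtime here is carried through unchanged; a fuller replay model could also rescale runtimes or preserve submission timestamps, depending on what the log-replay tooling supports.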
Future Generation Computer Systems, vol. 178, Article 108305.
Citations: 0
Automated federated aggregation for dynamic systems and data in mobile edge computing
IF 6.2, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2025-12-12. DOI: 10.1016/j.future.2025.108304
Zhao Yang, Xuanyun Qiu, Haoran Hu, Weiyi Hu, Hua Cui, Qingshuang Sun
Federated Learning (FL) is a privacy-preserving distributed machine learning approach that enables collaborative training using data from Mobile Edge Computing (MEC) devices without accessing raw data. However, deploying FL on MEC devices faces challenges due to resource and data heterogeneity and dynamic changes, which can cause unstable training and fairness issues, limiting global model performance and efficiency. This paper proposes an automated FL method designed for dynamic MEC environments, featuring adjustable synchronization intervals and an adaptive aggregation strategy. By combining Bidirectional Long Short-Term Memory networks with Q-learning, the method predicts device availability and dynamically adjusts synchronization intervals. This improves device participation in aggregation and reduces waiting times. Additionally, a Graph Attention Network with GraphTransformer models device collaboration and evaluates knowledge contributions, optimizing aggregation to maximize the utility of distributed data. Extensive experiments show that the proposed method improves accuracy (by 0.7% to 21.3%) and efficiency (by 1.19× to 8.93×) compared to baseline methods.
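The abstract pairs BiLSTM availability prediction with Q-learning to tune synchronization intervals; the exact formulation is not given, but the Q-learning half follows the standard tabular update, sketched here with hypothetical availability states and interval-adjustment actions:

```python
def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[next_state].values())
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
    return q[state][action]

# Hypothetical states (predicted device availability) and actions
# (how to adjust the FL synchronization interval).
actions = ("shorten", "keep", "lengthen")
q = {s: {a: 0.0 for a in actions} for s in ("low_avail", "high_avail")}

# Reward 1.0: lengthening the interval under high availability
# increased aggregation participation and cut waiting time.
q_update(q, "high_avail", "lengthen", reward=1.0, next_state="high_avail")
```

In a real deployment the reward would be derived from observed participation and waiting times, and the state from the BiLSTM's availability forecast rather than a hand-picked label.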
Future Generation Computer Systems, vol. 178, Article 108304.
Citations: 0
Journal
Future Generation Computer Systems-The International Journal of Escience