
Latest articles in Future Generation Computer Systems-The International Journal of Escience

A multi-objective multi-stage genetic algorithm for community detection in biological networks
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-06-01 | Epub Date: 2025-12-16 | DOI: 10.1016/j.future.2025.108322
Mingyuan Bi, Junliang Shang, Yahan Li, Anqi Xu, Yaxuan Zhang, Feng Li, Jin-Xing Liu
Community detection in biological networks is crucial for capturing genes with similar functions and discovering biomarkers. However, communities discovered by many existing methods are often not closely connected, both topologically and functionally, so studies on community detection are still ongoing. In this paper, a multi-objective multi-stage genetic algorithm, named MOMSGA, is proposed to extract communities in biological networks. Firstly, the pre-reduction and boundary correction strategies are introduced to enhance the scalability of MOMSGA in large-scale networks. Secondly, a genetic algorithm is improved to guide the search process, where two improved objective functions are designed to simultaneously optimize the topological and functional connections to accurately extract information about relevant biological processes. The population initialization strategy and mutation operator are tailored. Thirdly, a multi-stage strategy is proposed that divides the evolutionary process into distinct stages based on the characteristics of the population at each stage, employing different selection and update strategies to obtain better diversity performance. Two notable innovations of MOMSGA lie in its multi-objective and multi-stage strategies. Experiments on 11 synthetic networks and 5 real-world networks demonstrate the superiority of MOMSGA, which outperforms four advanced methods. Furthermore, MOMSGA is applied to four gene expression datasets for biomarker identification. The results consistently show that MOMSGA outperforms other methods in classification performance across six indicators, particularly on the pheochromocytoma dataset, where the AUC reached 0.86, 2.9 % to 10.3 % higher than other methods. Moreover, the identified communities have been shown to be associated with the corresponding diseases through GO and KEGG enrichment analysis.
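The encoding-and-objectives core of such an algorithm can be illustrated with a minimal sketch (assumptions: a locus-based adjacency encoding, networkx for graph handling, and a pairwise functional-similarity matrix sim; topo_score and func_score are illustrative stand-ins for the paper's two improved objectives, not its actual formulas):

import random
import networkx as nx

def decode(genome):
    # Locus-based adjacency encoding: genome[i] is a neighbor of node i,
    # and connected components of the induced graph are the communities.
    g = nx.Graph()
    g.add_edges_from((i, gene) for i, gene in enumerate(genome))
    return list(nx.connected_components(g))

def topo_score(G, communities):
    # Stand-in topological objective: standard modularity.
    return nx.algorithms.community.modularity(G, communities)

def func_score(communities, sim):
    # Stand-in functional objective: mean pairwise similarity within communities.
    scores = []
    for c in map(list, communities):
        pairs = [(u, v) for i, u in enumerate(c) for v in c[i + 1:]]
        if pairs:
            scores.append(sum(sim[u][v] for u, v in pairs) / len(pairs))
    return sum(scores) / len(scores) if scores else 0.0

def mutate(G, genome, rate=0.1):
    # Neighbor-respecting mutation keeps every offspring a valid partition.
    return [random.choice(list(G[i])) if random.random() < rate else gene
            for i, gene in enumerate(genome)]

A multi-objective selection scheme such as non-dominated sorting would then rank candidate genomes on (topo_score, func_score) jointly rather than on a single weighted sum.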
Citations: 0
Automatic tuning based on hardware performance counters and machine learning
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-06-01 | Epub Date: 2025-12-30 | DOI: 10.1016/j.future.2025.108358
Suren Harutyunyan Gevorgyan, Eduardo César, Anna Sikora, Jiří Filipovič, Jordi Alcaraz
This paper presents a Machine Learning (ML) methodology for automatically tuning parallel applications in heterogeneous High Performance Computing (HPC) environments using Hardware Performance Counters (HwPCs). The methodology addresses three critical challenges: counter quantity versus accessibility tradeoff, data interpretation complexity, and dynamic optimization needs. The introduced ensemble-based methodology automatically identifies minimal yet informative HwPC sets for code region identification and tuning parameter optimization. Experimental validation demonstrates high accuracy in predicting optimal thread allocation (> 0.90 K-fold accuracy) and thread affinity (> 0.95 accuracy) while requiring only 4–6 HwPCs. Compared to search-based methods like OpenTuner, the methodology achieves competitive performance with dramatically reduced optimization time. The architecture-agnostic design enables consistent performance across CPU and GPU platforms. These results establish a foundation for efficient, portable, automatic, and scalable tuning of parallel applications.
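As a rough illustration of the counter-selection step (not the paper's implementation): rank counters by ensemble feature importance, keep the top handful, and check that a classifier trained on the reduced set still predicts the best thread count. X, y, and counter_names below are assumed inputs standing for measured per-region HwPC readings, the best-performing thread counts, and the counter labels:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def select_counters(X, y, counter_names, k=6):
    # Rank all counters by ensemble importance and keep the k most informative.
    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    top = np.argsort(forest.feature_importances_)[::-1][:k]
    return top, [counter_names[i] for i in top]

def reduced_accuracy(X, y, top):
    # K-fold accuracy of a model that sees only the selected counters.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(model, X[:, top], y, cv=5).mean()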
Citations: 0
Log-Tree: Building log-enhanced B+-tree for hybrid DRAM/PM main memories
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-06-01 | Epub Date: 2025-12-25 | DOI: 10.1016/j.future.2025.108332
Zhengzhu Yao, Chaoshu Yang, Runyu Zhang, Hai Yang, Yu Peng
B+-trees are widely used in storage systems and have been optimized in recent studies to match the characteristics of Persistent Memories (PMs). However, existing DRAM/PM hybrid B+-trees still induce write performance penalties, low space utilization, and slow recovery, caused by two critical design limitations: (1) massive random writes can lead to severe write performance degradation due to the asymmetric sequential/random write performance of PM; (2) trade-offs among write performance, PM space utilization, and recovery. In this paper, we propose a log-structured B+-tree for hybrid DRAM/PM main memory, called Log-Tree, to solve these problems. First, Log-Tree incorporates a block-grained shadow layer of leaf nodes in PM and designs lightweight metadata for each block. Then, Log-Tree persists newly-inserted entries into the corresponding blocks sequentially to reduce cacheline flushes. Finally, Log-Tree employs a dynamic data migration strategy among all blocks to further improve the space utilization of PM. We conducted comprehensive evaluations on the Intel Optane DCPMM platform. Compared with μTree/FPTree/CCL-BTree/FAST&FAIR/SSB-Tree, Log-Tree achieves the highest PM space utilization while providing 4.81/1.23×, 2.99/1.36×, 1.46/0.99×, 4.03/1.59×, and 4.23/1.99× write/read throughput on average, respectively.
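The shadow-log idea can be sketched logically as below (a schematic model only: the class names are illustrative, and real PM code would persist entries with cache-line flush and fence instructions rather than Python lists):

class LogBlock:
    """Block-grained log in PM: entries are appended sequentially, turning
    random leaf updates into PM-friendly sequential writes."""
    def __init__(self, capacity=64):
        self.entries = []              # sequentially persisted (key, value) pairs
        self.capacity = capacity
        self.meta = {"count": 0}       # lightweight per-block metadata

    def append(self, key, value):
        if len(self.entries) >= self.capacity:
            return False               # full: caller triggers migration
        self.entries.append((key, value))
        self.meta["count"] += 1
        return True

class LeafShadow:
    """Shadow layer mapping each B+-tree leaf to its log block."""
    def __init__(self):
        self.blocks = {}

    def insert(self, leaf_id, key, value):
        block = self.blocks.setdefault(leaf_id, LogBlock())
        if not block.append(key, value):
            self.migrate(leaf_id)                  # dynamic migration reclaims space
            self.blocks[leaf_id] = LogBlock()
            self.blocks[leaf_id].append(key, value)

    def migrate(self, leaf_id):
        # Merge the block's entries back into the sorted DRAM leaf (omitted).
        self.blocks.pop(leaf_id, None)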
Citations: 0
A stochastic performance model for evaluating ethereum layer-2 rollups
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-06-01 | Epub Date: 2025-12-18 | DOI: 10.1016/j.future.2025.108316
Carlos Melo, José Miqueias, Johnnatan Messias, Glauber Gonçalves, Francisco Airton Silva, André Soares, Jean Araujo
Although Ethereum’s transition to Proof-of-Stake and the emergence of sidechains offer partial improvements in scalability, these approaches still present performance-related trade-offs. To address these challenges, ZK-Rollups have emerged as Layer-2 scalability solutions. They combine off-chain computation with on-chain verification, improving scalability while preserving the underlying security guarantees of Ethereum. This paper proposes the use of Stochastic Petri Nets to assess the feasibility of ZK-Rollups, considering the impact of these solutions on throughput, latency, and the cost-benefit relationship, including the average transaction cost and its relationship to performance metrics. The results show that increased adoption of transactions in Layer-2 can increase system throughput by up to 20 %, rising from 85 tps in an environment without Layer-2 to 105 tps when 90 % of transactions follow this path. On the other hand, latency can increase by more than 100 % when larger batches are used in Layer-2, highlighting a trade-off. While batching improves throughput by reducing per-transaction overhead, it also delays the finalization of individual transactions.
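The batching trade-off described above can be reproduced qualitatively with a toy queueing simulation (this is not the paper's Stochastic Petri Net; the arrival rate, publication time, and batch sizes are made-up parameters):

import random

def simulate(batch_size, arrival_rate=10.0, publish_s=12.0, horizon_s=7200):
    # Single publisher: a batch is posted to L1 once batch_size txs are queued,
    # and each posting occupies the publisher for publish_s regardless of size.
    t, free_at, queue, latencies = 0.0, 0.0, [], []
    while t < horizon_s:
        t += random.expovariate(arrival_rate)     # Poisson arrivals
        queue.append(t)
        if len(queue) >= batch_size:
            finality = max(t, free_at) + publish_s
            free_at = finality
            latencies += [finality - a for a in queue[:batch_size]]
            queue = queue[batch_size:]
    tps = len(latencies) / horizon_s
    return tps, sum(latencies) / max(len(latencies), 1)

for b in (64, 256, 1024):
    tps, lat = simulate(b)
    print(f"batch={b:5d}  throughput={tps:5.1f} tps  avg latency={lat:7.1f} s")

Small batches saturate the publisher (low throughput, growing queueing delay), while very large batches keep throughput at the arrival rate but delay finality by the time needed to fill the batch.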
Citations: 0
TPQA: Efficient attention architecture with task-aware pattern-guided quantization
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-06-01 | Epub Date: 2025-12-29 | DOI: 10.1016/j.future.2025.108352
Sijia Wang, Shengbing Zhang, Lun Zhang, Yichao Yuan, Yawen Zhao, Xinyu Zhang, Meng Zhang
Attention mechanisms have become a cornerstone of modern deep learning models, yet their computational intensity poses significant deployment challenges for resource-limited devices. While quantization offers a potential solution, current approaches typically employ uniform precision assignment schemes across all attention heads, neglecting critical variations in head-specific contributions across different tasks. This oversight results in substantial computational redundancy for those attention heads with fewer contributions, impacting overall performance. Through systematic analysis of head pattern characteristics in transformer models, we reveal two key insights: different attention heads exhibit distinct task-aware patterns, and their varying contributions to model performance directly dictate differentiated quantization demands across heads. Building on these findings, we propose TPQA, a novel algorithm and accelerator co-design architecture for efficient deployment of transformer models. TPQA strategically assigns adaptive precision levels to each head based on pre-identified patterns, thereby reducing computational overhead while preserving model accuracy. Furthermore, TPQA employs a data reordering strategy to transform irregular workloads into structured formats and introduces a dedicated accelerator with an attention-weights-stationary dataflow to efficiently process these structured workloads. Comprehensive evaluations demonstrate TPQA’s superior performance, achieving up to 2.1× speedup and 3.4× energy efficiency improvement over state-of-the-art accelerators while maintaining <1% accuracy loss on various tasks.
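A conceptual sketch of per-head precision assignment follows (illustrative only; TPQA's pattern identification, precision levels, and accelerator dataflow are more involved than this fake-quantization loop):

import torch

def fake_quantize(x, bits):
    # Symmetric uniform quantization, simulated in floating point.
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().amax().clamp(min=1e-8) / qmax
    return (x / scale).round().clamp(-qmax, qmax) * scale

def attention_with_head_precisions(q, k, v, head_bits):
    # q, k, v: [heads, seq, dim]; head_bits: precision chosen per head from
    # its pre-identified task-aware pattern (e.g. low bits for diffuse heads).
    outputs = []
    for h, bits in enumerate(head_bits):
        qh, kh, vh = (fake_quantize(t[h], bits) for t in (q, k, v))
        attn = torch.softmax(qh @ kh.T / qh.shape[-1] ** 0.5, dim=-1)
        outputs.append(attn @ vh)
    return torch.stack(outputs)

head_bits = [4, 4, 8, 8, 8, 16, 8, 4]   # hypothetical per-head assignment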
Citations: 0
Topology-aware virtual machine placement for improving cloud servers resource utilization
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-06-01 | Epub Date: 2026-01-01 | DOI: 10.1016/j.future.2025.108361
Donglai Ma, Xiaoyu Cao, Jianchen Hu, Tianyi Xia, Yuzhou Zhou, Kang Liu, Lei Zhu, Li Su, Feng Gao
As cloud computing offers increasingly sophisticated services, the optimal decisions of virtual machine (VM) placement become rather complicated, which may significantly influence the efficiency and profitability of cloud data centers (CDCs). In this paper, a realistic and holistic resource allocation model is proposed for the VM placement problem. Both the multi-layer topology of CDCs with complex topology-related user requests and the impact of multi-NUMA structures within servers are incorporated. Moreover, a novel objective function is developed to maximize overall resource utilization over an extended time horizon. The remaining resources of a server are characterized through the provision of three types of value: the value of hosting the current VM request, the potential value of accommodating future VM requests, and the topological value. A sophisticated value function is designed to integrate these components and quantify the overall benefit of placing VMs on a server, accounting for both the present and future values. As the resulting integer programming (IP) formulation is essentially NP-hard, a value-driven online algorithm is customized and developed to make online placement decisions following the proposed value function. By sequentially assigning VMs to feasible servers that maximize the evaluated placement value, our algorithm achieves a desirable trade-off between solution quality and computational efficiency. Numerical experiments on a practical cloud computing dataset demonstrate the effectiveness, efficiency, and scalability of the proposed VM placement method. Our test results indicate that the online placement decisions achieve over 80% of the global optimum (i.e., obtained from offline optimization), which outperforms other popular online methods, e.g., Fit-class heuristics and Deep Q-Network (DQN) based learning method. Besides, even with a challenging scale of 30,000 servers, our algorithm can make efficient placement decisions for 100 VMs within 1 s.
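The value-driven online rule reduces to "score every feasible server, pick the best." Below is a minimal sketch; the weights and the three server methods are hypothetical stand-ins for the paper's value function, not its actual definition:

def placement_value(server, vm, w=(1.0, 0.5, 0.3)):
    hosting = server.hosting_value(vm)     # value of serving this request
    future = server.future_value(vm)       # headroom preserved for future VMs
    topo = server.topology_value(vm)       # rack/NUMA locality value
    return w[0] * hosting + w[1] * future + w[2] * topo

def place_online(vms, servers):
    plan = {}
    for vm in vms:                         # requests arrive one by one
        feasible = [s for s in servers if s.fits(vm)]
        if not feasible:
            raise RuntimeError(f"no feasible server for {vm.id}")
        best = max(feasible, key=lambda s: placement_value(s, vm))
        best.allocate(vm)
        plan[vm.id] = best.id
    return plan

Scoring only feasible servers and committing greedily is what keeps each decision cheap enough for the sub-second latencies reported at 30,000-server scale.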
Citations: 0
On-device explainable artificial intelligence for the semantic web of everything
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-06-01 | Epub Date: 2025-12-31 | DOI: 10.1016/j.future.2025.108310
Davide Loconte, Saverio Ieva, Grazia Mascellaro, Agnese Pinto, Giuseppe Loseto, Floriano Scioscia, Michele Ruta
As the Internet of Things (IoT) evolves into an Internet of Everything (IoE), adapting Artificial Intelligence (AI) and Machine Learning (ML) approaches to pervasive computing devices is not enough. Collaborative intelligence is required, calling for on-device AI frameworks combining adequate accuracy and computational efficiency levels with incremental learning on continuous data streams, federated learning in distributed architectures and symbolic explainability formalisms to foster trustworthiness with interpretable trained models and comprehensible prediction outcomes. To fill this gap, the paper introduces a five-star rating for on-device AI based on the Semantic Web of Everything (SWoE) paradigm, and presents the five-star Mafalda 2.0 framework. It combines statistical data processing with Knowledge Graph technologies for information representation and automated reasoning to support: semi-automatic or fully data-driven ontology definition; on-device training to generate highly interpretable semantics-based models; prediction framed as a semantic matchmaking problem, exploiting non-standard reasoning services endowed with logic-based justifications to provide comprehensible results as well as counterfactual and contrastive explanations. An experimental campaign on four publicly available datasets has been carried out to validate the efficiency and accuracy of the proposal, along with federated learning and explainability examples.
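A toy version of matchmaking-with-justification, flattening ontology concepts to plain sets (the actual framework reasons over Description Logic knowledge graphs with non-standard inferences, so this only conveys the shape of the output: a prediction, a justification, and a counterfactual):

def matchmake(request, candidates):
    # request and each candidate are sets of ontology concept names
    ranked = []
    for name, features in candidates.items():
        missing = request - features                  # why this is not a full match
        ranked.append((len(missing), name, sorted(missing)))
    ranked.sort()
    gap, best, missing = ranked[0]
    return {
        "prediction": best,
        "justification": f"covers {len(request) - gap} of {len(request)} requested concepts",
        "counterfactual": missing,                    # adding these would yield a full match
    }

print(matchmake({"LowPower", "IndoorSensor", "CO2Monitoring"},
                {"dev-a": {"LowPower", "IndoorSensor"},
                 "dev-b": {"CO2Monitoring", "OutdoorSensor"}}))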
Citations: 0
A Portable Compiler-Runtime Approach for Scalability Prediction
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-06-01 | Epub Date: 2025-12-25 | DOI: 10.1016/j.future.2025.108337
Nicolai Stawinoga, Sohan Lal, Biagio Cosenza, Philip Salzmann, Peter Thoman, Thomas Fahringer
Highly scalable parallel applications can efficiently solve expensive computational problems when run on a large number of compute nodes. However, selecting the optimal number of nodes for a compute job of a given size is non-trivial, and allocating too few or too many nodes may not yield the expected performance. Knowing the scaling behavior of an application in advance enables us, for example, to make optimal use of the available hardware resources. We introduce a novel, portable approach to predict the scalability of parallel applications written in modern high-level programming models. We propose a predictive compiler-runtime framework based on Celerity, a task-based distributed runtime system that enables executing SYCL codes on clusters. The framework targets a broad range of computing systems, from CPU to GPU clusters, and proposes a model that combines machine learning, communication modeling and DAG heuristics. Experimental results on two large-scale clusters, JUWELS and Marconi-100, show accurate scalability prediction of unseen single and multi-task applications.
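One simple baseline for the same question, offered purely as an assumed alternative and not the paper's method (which combines ML, communication modeling, and DAG heuristics): fit a serial-plus-communication runtime model to a few measured runs and extrapolate. The node counts and runtimes below are made up:

import numpy as np
from scipy.optimize import curve_fit

def runtime_model(n, t_serial, t_parallel, t_comm):
    # Amdahl-style serial term, perfectly parallel term, log-tree communication.
    return t_serial + t_parallel / n + t_comm * np.log2(n)

nodes = np.array([1.0, 2.0, 4.0, 8.0])        # measured node counts (made up)
times = np.array([100.0, 55.0, 33.0, 24.0])   # measured runtimes in seconds
params, _ = curve_fit(runtime_model, nodes, times, bounds=(0.0, np.inf))
for n in (16, 32, 64):
    print(f"{n:3d} nodes -> predicted {runtime_model(n, *params):6.1f} s")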
Citations: 0
Dispatching advanced and adaptive intrusion responses for IIoT-based systems
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-06-01 | Epub Date: 2025-12-18 | DOI: 10.1016/j.future.2025.108314
Jacobo Elicha, Javier Lopez
The ever-increasing number of cyber-attacks poses a serious challenge to incident response teams. Recent cyber-attacks, such as the attack against energy distribution companies in Ukraine, highlight the disruption which can be caused and its consequences. More than 53 % of recorded incidents targeted essential entities, which heavily rely on Industrial Internet of Things (IIoT) devices, according to ENISA. Despite the amount of work in Cyber Threat Intelligence (CTI) and Intrusion Detection Systems (IDSs), automated response systems have been avoided in connected industrial environments mainly due to the criticality of the underlying assets, where a misstep has the potential to result in the disruption of critical processes. This paper therefore presents an Early and Adaptive Automated Intrusion Response Service for industrial environments, named EAIRS, which combines several techniques, including expert systems and reinforcement learning, to classify and mitigate anomalies detected by IDSs. The incidents EAIRS is designed to face range from network-based to host-based attacks. This paper provides the architecture for the described approach and the evaluation of a proof-of-concept implementation on an experimental testbed.
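The classify-then-respond split can be sketched as a rule table plus a bandit-style learned policy (every rule, action name, and reward term here is illustrative; EAIRS itself handles a much broader incident taxonomy):

import random

RULES = [  # (predicate over an IDS alert, incident class)
    (lambda a: a.get("proto") == "modbus" and a.get("pkt_rate", 0) > 1000, "network-flood"),
    (lambda a: a.get("new_process", False) and not a.get("signed_binary", True), "host-compromise"),
]
ACTIONS = ["rate-limit", "isolate-host", "alert-operator"]

def classify(alert):
    for predicate, label in RULES:           # expert-system stage
        if predicate(alert):
            return label
    return "unknown"

def respond(alert, q_table, epsilon=0.1):
    state = classify(alert)                  # reinforcement-learning stage
    if random.random() < epsilon:
        return state, random.choice(ACTIONS)
    return state, max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

def update(q_table, state, action, reward, alpha=0.2):
    # reward = mitigation success minus disruption to the industrial process
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + alpha * (reward - old)

Weighting the reward by process disruption is what keeps a learned policy from preferring aggressive responses (e.g. host isolation) on assets where availability is critical.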
Citations: 0
LLM+m: Dual-model chatGPT-based product training and testing with adversarial attack and defense
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-06-01 | Epub Date: 2025-12-18 | DOI: 10.1016/j.future.2025.108328
Ren-Hung Hwang, Yu-Hung Hsiao, Ying-Dar Lin, Yuan-Cheng Lai
Large language models (LLMs) like ChatGPT are increasingly leveraged in diverse applications. This study explores the integration of ChatGPT with machine learning models (LLM + m) to develop dual-model systems for intrusion detection, covering both host-based (HIDS) and network-based (NIDS) systems, and for image classification. By employing ChatGPT for data preprocessing and model generation, we evaluated the systems’ robustness against a range of adversarial attacks, including hypnotic attacks targeting the LLM, traditional adversarial attacks on ML models, and combined attacks affecting both. To counter these threats, we implemented adversarial training, ensemble models, and robustness prompts, significantly enhancing system resilience. Experimental results showed that combined attacks caused F1 score drops of up to 50 %, exposing critical vulnerabilities in dual-model systems. However, the proposed defenses reduced these losses to approximately 5 %, demonstrating their effectiveness in securing LLM-based dual-model systems against increasingly sophisticated adversarial threats.
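On the ML side, the adversarial-training defense looks roughly like the standard FGSM recipe below (a sketch assuming a differentiable PyTorch classifier; the defenses against hypnotic attacks on the LLM side are robustness prompts, i.e. hardened instructions prepended to ChatGPT requests, and are not code):

import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    # One-step worst-case perturbation within an L-infinity ball of radius eps.
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def adversarial_train_step(model, optimizer, x, y, eps=0.03):
    # Mix clean and adversarial losses so accuracy survives perturbed inputs.
    x_adv = fgsm_example(model, x, y, eps)
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()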
Citations: 0