
Latest articles from Future Generation Computer Systems-The International Journal of eScience

Identifying runtime libraries in statically linked Linux binaries
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-11-13 | DOI: 10.1016/j.future.2024.107602
Javier Carrillo-Mondéjar , Ricardo J. Rodríguez
Statically linked applications can inherit vulnerabilities from their third-party dependencies, since such binaries must be relinked each time a library is updated to fix a vulnerability. Despite this, malware binaries are often statically linked, both to ensure they run on target platforms and to complicate malware analysis. Identifying libraries thus becomes crucial in malware analysis, since it helps filter out library functions and focus on the malware's own functions. In this paper, we introduce MANTILLA, a system for identifying runtime libraries in statically linked Linux-based binaries. Our system relies on radare2 to identify functions and extract their features (independent of the binary's underlying architecture) through static binary analysis, and on a K-nearest neighbors supervised machine learning model with a majority rule to predict final values. MANTILLA is evaluated on a dataset of binaries built for different architectures (MIPSeb, ARMel, Intel x86, and Intel x86-64) and different runtime libraries (uClibc, glibc, and musl), achieving very high accuracy. We also evaluate it in two case studies: first on a dataset of binaries from the binutils collection, and second on an IoT malware dataset. In both cases, good accuracy is obtained in both runtime library detection (94.4% and 95.5%, respectively) and architecture identification (100% and 98.6%, respectively).
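As a rough illustration of the classification stage the abstract describes, the sketch below classifies per-function feature vectors with K-nearest neighbors and reduces them to a single prediction by majority vote. The radare2 feature extraction is mocked with random placeholder vectors; the feature layout and label set are hypothetical.

```python
# Minimal sketch of the described classification stage: one KNN
# prediction per function, then a majority vote over the binary.
from collections import Counter

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical per-function feature vectors (e.g., opcode histograms),
# each labeled with the runtime library it came from.
train_X = rng.random((300, 16))
train_y = rng.choice(["glibc", "musl", "uClibc"], size=300)

knn = KNeighborsClassifier(n_neighbors=5).fit(train_X, train_y)

def predict_library(function_features: np.ndarray) -> str:
    """Predict one library per function, then take the majority vote."""
    per_function = knn.predict(function_features)
    return Counter(per_function).most_common(1)[0][0]

unknown_binary = rng.random((40, 16))  # 40 functions from one binary
print(predict_library(unknown_binary))
```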
Citations: 0
High throughput edit distance computation on FPGA-based accelerators using HLS
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-11-12 | DOI: 10.1016/j.future.2024.107591
Sebastiano Fabio Schifano , Marco Reggiani , Enrico Calore , Rino Micheloni , Alessia Marelli , Cristian Zambelli
Edit distance is a computational grand-challenge problem: quantifying the minimum number of editing operations required to transform one string of characters into another, with many applications in natural language processing. In recent years, relevant and increasing interest has also emerged from deoxyribonucleic acid (DNA) applications, such as Next Generation Sequencing and DNA storage technologies. Both applications share two crucial features: i) the information is coded into the four bases of DNA, and ii) the level of operational noise is still high, causing errors in the data and requiring the workflow to include algorithms such as the edit distance for finding similarities between sequences. Many solutions are available in the literature to boost this computation. Among them, FPGAs are widely used, since the data domain of those applications is strings of 4 characters represented as two-bit values, which fit poorly into the basic data types of ordinary CPUs and GPUs, and FPGAs offer the additional benefits of high parallelism and low processing latency. This contribution presents a computing- and energy-efficient design implementing the edit distance algorithm, combining metaprogramming and High-Level Synthesis. We also assess the performance of our design targeting recent FPGA-based accelerators. Our solution uses nearly 90% of the FPGA basic-block hardware resources, achieving about 90% computing efficiency and delivering a maximum throughput of 16.8 TCUPS and an energy efficiency of 46 Mpair/Joule, enabling the use of FPGAs as a new class of accelerators for High Performance Computing in DNA applications.
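For reference, this is the dynamic-programming recurrence such accelerators implement; hardware designs typically parallelize the anti-diagonals of this table, but the sketch below shows only the underlying algorithm, not the FPGA design from the paper.

```python
# Reference CPU implementation of edit distance: a standard
# O(len(a) * len(b)) dynamic program over two rows of the DP table.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))           # row for the empty prefix of a
    for i, ca in enumerate(a, start=1):
        curr = [i]                           # deleting i characters of a
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,                 # delete ca from a
                curr[j - 1] + 1,             # insert cb into a
                prev[j - 1] + (ca != cb),    # substitute (free on match)
            ))
        prev = curr
    return prev[-1]

print(edit_distance("ACGT", "AGT"))  # 1: delete the 'C'
```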
Citations: 0
In silico framework for genome analysis
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-11-12 | DOI: 10.1016/j.future.2024.107585
M. Saqib Nawaz , M. Zohaib Nawaz , Yongshun Gong , Philippe Fournier-Viger , Abdoulaye Baniré Diallo
Genomes hold the complete genetic information of an organism. Examining and analyzing genomic data plays a critical role in properly understanding an organism, particularly the main characteristics, functionalities, and evolving nature of harmful viruses. However, the rapid increase in genomic data poses new challenges and demands for extracting meaningful and valuable insights from large and complex genomic datasets. In this paper, a novel Framework for Genome Data Analysis (F4GDA) is developed that offers various methods for the analysis of viral genomic data in various forms. The framework's methods can analyze not only changes in genomes but also various genome contents. As a case study, the genomes of five SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2) VoC (variants of concern), divided into three types/groups on the basis of geographical location, are analyzed using this framework to investigate (1) the nucleotide, amino acid, and synonymous codon changes in the whole genomes of the VoC as well as in the Spike (S) protein, (2) whether different environments affect the rate of change in genomes, (3) the variations in nucleotide base, amino acid, and codon base compositions in VoC genomes, and (4) how VoC genomes compare with the reference genome sequence of SARS-CoV-2.
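As a hedged illustration of the kind of composition analysis listed in point (3), the sketch below counts nucleotide and codon frequencies of a toy sequence against a reference; the sequences are stand-ins, not real SARS-CoV-2 data, and the real framework's API is not shown in the abstract.

```python
# Toy nucleotide/codon composition comparison between a variant and a
# reference sequence, in the spirit of the framework's content analysis.
from collections import Counter

def composition(seq: str):
    nucleotides = Counter(seq)
    codons = Counter(seq[i:i + 3] for i in range(0, len(seq) - 2, 3))
    return nucleotides, codons

reference = "ATGGCATTTGGC"
variant   = "ATGGCGTTTGGC"   # GCA -> GCG: a synonymous change (both Ala)

ref_nt, ref_codons = composition(reference)
var_nt, var_codons = composition(variant)
print({base: var_nt[base] - ref_nt[base] for base in "ACGT"})  # net base changes
print(var_codons - ref_codons)                                 # codons gained
```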
Citations: 0
Adaptive ensemble optimization for memory-related hyperparameters in retraining DNN at edge
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-11-10 | DOI: 10.1016/j.future.2024.107600
Yidong Xu , Rui Han , Xiaojiang Zuo , Junyan Ouyang , Chi Harold Liu , Lydia Y. Chen
Edge applications are increasingly empowered by deep neural networks (DNN) and face the challenge of adapting or retraining models as input data domains and learning tasks change. Existing techniques for DNN retraining on edge devices configure the memory-related hyperparameters, termed m-hyperparameters, via batch size reduction, parameter freezing, and gradient checkpointing. While those methods show promising results for static DNNs, little is known about how to optimize all their m-hyperparameters online and opportunistically, especially for the retraining tasks of edge applications. In this paper, we propose MPOptimizer, which jointly optimizes an ensemble of m-hyperparameters according to the input distribution and available edge resources at runtime. The key feature of MPOptimizer is to easily emulate the execution of retraining tasks under different m-hyperparameters and thus effectively estimate their influence on task performance. We implement MPOptimizer on prevalent DNNs and demonstrate its effectiveness against state-of-the-art techniques: it successfully finds the best configuration, improving model accuracy by an average of 13% (up to 25.3%) while reducing memory and training time by 4.1x and 5.3x under the same model accuracies.
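To make the m-hyperparameter search space concrete, here is a minimal sketch that enumerates batch size, freezing fraction, and checkpointing choices under a memory budget. The cost and quality models below are made-up placeholders; MPOptimizer instead estimates these effects by emulating task execution.

```python
# Toy search over the three m-hyperparameters named in the abstract.
from itertools import product

def estimated_memory_mb(batch, frozen_frac, ckpt):
    # placeholder model: activations scale with batch and shrink with
    # gradient checkpointing; gradients vanish for frozen parameters
    activations = 4.0 * batch * (0.4 if ckpt else 1.0)
    gradients = 900.0 * (1.0 - frozen_frac)
    return 1200.0 + activations + gradients   # weights + activations + grads

def estimated_quality(batch, frozen_frac, ckpt):
    # placeholder proxy: larger batches and fewer frozen layers retrain better
    return batch / 64.0 + (1.0 - frozen_frac)

budget_mb = 2000.0
configs = product([8, 16, 32, 64], [0.0, 0.25, 0.5, 0.75], [False, True])
feasible = [c for c in configs if estimated_memory_mb(*c) <= budget_mb]
best = max(feasible, key=lambda c: estimated_quality(*c))
print("batch=%d frozen=%.2f checkpointing=%s" % best)
```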
Citations: 0
Convergence-aware optimal checkpointing for exploratory deep learning training jobs
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-11-08 | DOI: 10.1016/j.future.2024.107597
Hongliang Li , Zichen Wang , Hairui Zhao , Meng Zhang , Xiang Li , Haixiao Xu
Training Deep Learning (DL) models is becoming more time-consuming, so interruptions to the training process are inevitable. For an HPC (High Performance Computing) job, an optimal checkpointing interval that minimizes the fault tolerance overhead can be derived under the precondition that job progress is proportional to execution time. Unfortunately, this is not the case in DL model training, where a training job yields diminishing returns across its lifetime. Meanwhile, training DL models is inherently exploratory, and early termination frequently occurs during model training and development. This makes the early progress of a DL training job more valuable than the later progress. Evenly placed checkpoints would either increase the risks in the early stages or waste resources overprotecting the later stages. Moreover, in data parallelism, state-of-the-art quality-driven scheduling strategies allocate more resources to the early stages of a job than to the later ones to accelerate training progress, which further amplifies the issue. In summary, the early stage is more important than the later stages, and allocating more fault-tolerance resources to it benefits model exploration. Based on this conclusion, we present COCI, an approach that computes the optimal checkpointing configuration for an exploratory DL training job, minimizing the fault tolerance overhead, including checkpoint cost and recovery cost. We implement COCI on top of a state-of-the-art iteration-level checkpointing mechanism, as a pluggable module compatible with PyTorch that requires no extra user input. Experimental results show that COCI reduces fault tolerance overhead by up to 40.18% compared to existing state-of-the-art DL fault tolerance methods in the serial scenario, and by 60.64% in the data-parallel scenario.
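The classical interval alluded to in the second sentence is Young's approximation, valid exactly when progress is linear in time; COCI departs from this baseline because DL training yields diminishing returns. A minimal sketch of the baseline:

```python
# Young's approximation for the optimal checkpoint interval:
# interval = sqrt(2 * checkpoint_cost * MTBF), assuming linear progress.
import math

def young_interval(checkpoint_cost_s: float, mtbf_s: float) -> float:
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# e.g., a 30 s checkpoint and a mean time between failures of 8 hours
print(young_interval(30.0, 8 * 3600))  # ~1315 s between checkpoints
```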
Citations: 0
FedGen: Personalized federated learning with data generation for enhanced model customization and class imbalance
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-11-07 | DOI: 10.1016/j.future.2024.107595
Peng Zhao , Shaocong Guo , Yanan Li , Shusen Yang , Xuebin Ren
Federated learning has emerged as a prominent solution for collaboratively training machine learning models without exchanging local data. However, existing approaches often impose rigid constraints on model heterogeneity, limiting clients' ability to customize unique models and increasing the vulnerability of models to potential attacks. This paper presents FedGen, a novel personalized federated learning framework based on generative adversarial networks (GANs). FedGen shifts the focus from training task-specific models to generating data, especially for minority classes with imbalanced data. With FedGen, clients gain knowledge from others by training generators, while maintaining a heterogeneous local model and avoiding sharing model information with other participants. Moreover, to address the challenges arising from imbalanced data, we propose AT-GAN, a novel generative model incorporating pseudo-augmentation and differentiable-augmentation modules to foster healthy competition between the generator and discriminator. To evaluate the effectiveness of our approach, we conduct extensive experiments on real-world tabular datasets. The experimental results demonstrate that FedGen significantly enhances the performance of local models, achieving improvements of up to 11.92% in F1 score and up to 9.14% in MCC score compared to existing methods.
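A minimal PyTorch sketch of the sharing pattern described, assuming a conditional generator whose weights (not data) are exchanged so peers can sample synthetic minority-class rows. AT-GAN's augmentation modules and the training loop are omitted, and all layer sizes are arbitrary placeholders.

```python
# Conditional generator: peers receive its weights and sample synthetic
# rows for underrepresented classes, without seeing any raw local data.
import torch
import torch.nn as nn

LATENT, N_FEATURES, N_CLASSES = 16, 8, 3

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT + N_CLASSES, 64), nn.ReLU(),
            nn.Linear(64, N_FEATURES),
        )
    def forward(self, z, onehot_label):
        return self.net(torch.cat([z, onehot_label], dim=1))

gen = Generator()  # in practice: trained locally, then shared

def synthesize_minority(minority_class: int, n: int) -> torch.Tensor:
    """Sample n synthetic rows of the given class from a shared generator."""
    z = torch.randn(n, LATENT)
    labels = nn.functional.one_hot(
        torch.full((n,), minority_class), N_CLASSES).float()
    with torch.no_grad():
        return gen(z, labels)

print(synthesize_minority(minority_class=2, n=4).shape)  # torch.Size([4, 8])
```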
Citations: 0
Time-constrained persistent deletion for key–value store engine on ZNS SSD
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-11-06 | DOI: 10.1016/j.future.2024.107598
Shiqiang Nie, Tong Lei, Jie Niu, Qihan Hu, Song Liu, Weiguo Wu
The inherent out-of-place update characteristic of the Log-Structured Merge tree (LSM tree) cannot guarantee persistent deletion within a specific time window, leading to potential data privacy and security issues. Existing solutions such as Lethe-Fade ensure time-constrained persistent deletion but introduce considerable write overhead, worsening the write amplification issue, particularly for key–value stores on ZNS SSDs. To address this problem, we propose a zone-aware persistent deletion scheme for key–value store engines. To mitigate the write amplification induced by level compaction, we design an adaptive SSTable selection strategy for each level of the LSM tree. Additionally, since an SSTable with deletion records becomes invalid once the persistent deletion timer reaches its threshold, we design a tombstone-aware zone allocation strategy to reduce the data migration induced by garbage collection. Furthermore, we optimize the victim zone selection in GC to reduce invalid migration of tombstone files. Experimental results demonstrate that our scheme effectively ensures that most outdated physical versions are deleted before the persistent deletion time threshold is reached. When deleting 10% of the keys in the key–value store engine, the scheme reduces write amplification by 74.7% and garbage-collection-induced writes by 87.3% compared to the Lethe-Fade scheme.
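A toy model of tombstone-aware victim selection, built on the one fact the abstract states: tombstoned SSTables become invalid once the deletion timer reaches its threshold, so their data is not worth migrating. All structures, field names, and the threshold value are illustrative, not the paper's implementation.

```python
# Prefer garbage-collecting zones whose remaining valid data is small,
# discounting tombstoned SSTables that are about to expire anyway.
import time

def migration_cost(zone, now, deletion_threshold_s):
    cost = 0
    for sst in zone["sstables"]:
        if sst["invalid"]:
            continue                      # already reclaimable, free
        if sst["tombstoned"]:
            remaining = deletion_threshold_s - (now - sst["created"])
            if remaining <= 0:
                continue                  # timer expired: soon invalid
        cost += sst["size_mb"]            # valid data we would migrate
    return cost

def pick_victim_zone(zones, deletion_threshold_s=3600):
    now = time.time()
    return min(zones, key=lambda z: migration_cost(z, now, deletion_threshold_s))

zones = [
    {"sstables": [{"invalid": False, "tombstoned": True,
                   "created": time.time() - 4000, "size_mb": 64}]},
    {"sstables": [{"invalid": False, "tombstoned": False,
                   "created": time.time(), "size_mb": 64}]},
]
print(zones.index(pick_victim_zone(zones)))  # 0: its data expires soon
```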
Citations: 0
RNC-DP: A personalized trajectory data publishing scheme combining road network constraints and GAN
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-11-06 | DOI: 10.1016/j.future.2024.107589
Hui Wang , Haiyang Li , Zihao Shen , Peiqian Liu
The popularity of location-based services facilitates people's lives to a certain extent and generates a large amount of trajectory data. Analyzing these data can contribute to society's development and provide better location services for users, but it also raises the security problem of personal trajectory privacy leakage. However, existing methods often suffer from either excessive privacy protection or insufficient protection of individual privacy. Therefore, this paper proposes a personalized trajectory data publishing scheme combining road network constraints and GAN (RNC-DP). First, after representing the trajectory data on a grid, we remove the unreachable grid cells and define a trajectory generation constraint. Second, the proposed TraGM model synthesizes trajectory data that meet the constraints. Third, during trajectory data publishing, the proposed TraDP mechanism performs k-means clustering on the synthesized trajectories and assigns appropriate privacy budgets to the clustered, generalized trajectory location points. Finally, the protected trajectory data is published. Compared with existing schemes, the proposed scheme improves privacy protection strength by 10.2%–41.2% while balancing data availability, and it has low time complexity.
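A hedged sketch of the TraDP step named above: k-means clustering of generalized location points followed by Laplace noise calibrated to a per-cluster privacy budget. The equal budget split and the sensitivity value are illustrative; the paper's actual allocation rule is not given in the abstract.

```python
# Cluster generalized (x, y) points, then perturb each centroid with the
# Laplace mechanism under its share of the total privacy budget.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
points = rng.random((200, 2))             # toy generalized locations

k, total_epsilon, sensitivity = 5, 1.0, 0.01
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(points)

eps_per_cluster = total_epsilon / k       # naive equal split (illustrative)
noisy_centroids = km.cluster_centers_ + rng.laplace(
    scale=sensitivity / eps_per_cluster, size=km.cluster_centers_.shape)
print(noisy_centroids.round(3))
```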
Citations: 0
CMPNet: A cross-modal multi-scale perception network for RGB-T crowd counting
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-11-06 | DOI: 10.1016/j.future.2024.107596
Shihui Zhang , Kun Chen , Gangzheng Zhai , He Li , Shaojie Han
Cross-modal crowd counting methods demonstrate better scene adaptability under complex conditions by introducing independent supplementary information. However, existing methods still face problems such as insufficient fusion of modal features, underutilization of crowd structure, and neglect of scale information. In response to these issues, this paper proposes a cross-modal multi-scale perception network (CMPNet). Specifically, CMPNet mainly consists of a cross-modal perception fusion module and a multi-scale feature aggregation module. The cross-modal perception fusion module effectively suppresses noise features while sharing features between different modalities, thereby significantly improving the robustness of the crowd counting process. The multi-scale feature aggregation module obtains rich crowd structure information through a spatial-context-aware graph convolution unit, and then integrates feature information from different scales to enhance the network's perception of crowd density. To the best of our knowledge, CMPNet is the first attempt to model crowd structure and mine its semantics in the field of cross-modal crowd counting. The experimental results show that CMPNet achieves state-of-the-art performance on all RGB-T datasets, providing an effective solution for cross-modal crowd counting. We will release the code at https://github.com/KunChenKKK/CMPNet.
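For intuition, the sketch below shows a generic gated fusion block for RGB-T feature maps: a learned gate decides, per channel and position, how much thermal information to mix into the RGB stream. This is a common fusion pattern, not a reconstruction of CMPNet's perception fusion module.

```python
# Generic gated cross-modal fusion for RGB and thermal feature maps.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
    def forward(self, rgb_feat, thermal_feat):
        g = self.gate(torch.cat([rgb_feat, thermal_feat], dim=1))
        return g * rgb_feat + (1 - g) * thermal_feat  # per-channel blend

rgb = torch.randn(1, 32, 48, 64)
thermal = torch.randn(1, 32, 48, 64)
print(GatedFusion(32)(rgb, thermal).shape)  # torch.Size([1, 32, 48, 64])
```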
Citations: 0
Private approximate nearest neighbor search for on-chain data based on locality-sensitive hashing
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-11-05 | DOI: 10.1016/j.future.2024.107586
Siyuan Shang , Xuehui Du , Xiaohan Wang, Aodi Liu
Blockchain manages data with immutability, decentralization, and traceability, offering new solutions for traditional information systems and greatly facilitating data sharing. However, on-chain data query still faces challenges such as low efficiency and difficulty in privacy protection. We propose a private Approximate Nearest Neighbor (ANN) search method for on-chain data based on Locality-Sensitive Hashing (LSH), which mainly includes two steps: query initialization and query implementation. In query initialization, the data management node builds hash tables for on-chain data through improved LSH; these are encrypted and stored on the blockchain using attribute-based encryption. In query implementation, a node with the correct privileges uses random smart contracts to query on-chain data privately, via a distributed point function and a privacy protection technique called oblivious masking. To validate the effectiveness of this method, we compare its performance with two ANN search algorithms: the query time is reduced by 57% and 59.2%, average recall is increased by 4.5% and 2%, average precision is increased by 7.7% and 6.9%, the average F1-score is increased by 6% and 4.3%, and the average initialization time is reduced by 34x and 122x, respectively. We also compare the performance with private ANN search methods based on homomorphic encryption, differential privacy, and secure multi-party computation. The results show that our method reduces query time by several orders of magnitude, making it more applicable to the blockchain environment. To the best of our knowledge, this is the first private ANN search method for on-chain data that considers both query efficiency and privacy protection, achieving efficient, accurate, and private data query.
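A self-contained sketch of the random-hyperplane LSH such schemes build on: nearby vectors land in the same bucket with high probability, so ANN search scans one bucket instead of the whole set. Real systems use several hash tables to reduce misses; the on-chain encryption, smart contracts, and oblivious masking are out of scope here.

```python
# Random-hyperplane LSH: the hash key is the sign pattern of a vector's
# projections onto random hyperplanes, so similar vectors share keys.
from collections import defaultdict

import numpy as np

rng = np.random.default_rng(0)
DIM, N_PLANES = 8, 6

planes = rng.normal(size=(N_PLANES, DIM))   # one hash table's hyperplanes

def lsh_key(v: np.ndarray) -> int:
    bits = (planes @ v > 0).astype(int)      # side of each hyperplane
    return int("".join(map(str, bits)), 2)

data = rng.normal(size=(1000, DIM))
table = defaultdict(list)
for idx, v in enumerate(data):
    table[lsh_key(v)].append(idx)

query = data[0] + 0.01 * rng.normal(size=DIM)        # near-duplicate of item 0
candidates = table.get(lsh_key(query)) or range(len(data))  # fallback: full scan
best = min(candidates, key=lambda i: np.linalg.norm(data[i] - query))
print(best)  # very likely 0
```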
Citations: 0