
Latest publications from Future Generation Computer Systems-The International Journal of Escience

In silico framework for genome analysis
IF 6.2, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-11-12. DOI: 10.1016/j.future.2024.107585
M. Saqib Nawaz, M. Zohaib Nawaz, Yongshun Gong, Philippe Fournier-Viger, Abdoulaye Baniré Diallo
Genomes hold the complete genetic information of an organism. Examining and analyzing genomic data plays a critical role in properly understanding an organism, particularly the main characteristics, functionalities, and evolving nature of harmful viruses. However, the rapid increase in genomic data poses new challenges and demands for extracting meaningful and valuable insights from large and complex genomic datasets. In this paper, a novel Framework for Genome Data Analysis (F4GDA) is developed that offers various methods for the analysis of viral genomic data in various forms. The framework's methods can analyze not only the changes in genomes but also various genome contents. As a case study, the genomes of five SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2) VoC (variants of concern), which are divided into three types/groups on the basis of geographical locations, are analyzed using this framework to investigate (1) the nucleotide, amino acid, and synonymous codon changes in the whole genomes of VoC as well as in the Spike (S) protein, (2) whether different environments affect the rate of changes in genomes, (3) the variations in nucleotide base, amino acid, and codon base compositions in VoC genomes, and (4) how VoC genomes compare with the reference genome sequence of SARS-CoV-2.
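As a rough illustration of the kind of genome-content statistics the abstract describes (nucleotide composition, codon usage, and point changes against a reference), the short Python sketch below computes them for toy sequences. The function names and example sequences are assumptions for illustration, not part of F4GDA.

```python
from collections import Counter

def nucleotide_composition(seq):
    """Fraction of each nucleotide base in a sequence."""
    counts = Counter(seq.upper())
    total = sum(counts[b] for b in "ACGT")
    return {b: counts[b] / total for b in "ACGT"}

def codon_usage(seq):
    """Count codons over the reading frame starting at position 0."""
    return Counter(seq[i:i + 3] for i in range(0, len(seq) - 2, 3))

def point_changes(reference, variant):
    """(position, ref_base, variant_base) for two aligned sequences of equal length."""
    return [(i, r, v) for i, (r, v) in enumerate(zip(reference, variant)) if r != v]

if __name__ == "__main__":
    ref = "ATGGCTAGCTAGGCTAACTGG"   # toy sequences, not real SARS-CoV-2 genomes
    var = "ATGGCTAGTTAGGCTAACTGC"
    print(nucleotide_composition(var))
    print(codon_usage(var).most_common(3))
    print(point_changes(ref, var))
```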
Citations: 0
Adaptive ensemble optimization for memory-related hyperparameters in retraining DNN at edge
IF 6.2, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-11-10. DOI: 10.1016/j.future.2024.107600
Yidong Xu, Rui Han, Xiaojiang Zuo, Junyan Ouyang, Chi Harold Liu, Lydia Y. Chen
Edge applications are increasingly empowered by deep neural networks (DNN) and face the challenges of adapting or retraining models for changes in input data domains and learning tasks. Existing techniques for enabling DNN retraining on edge devices configure the memory-related hyperparameters, termed m-hyperparameters, via batch size reduction, parameter freezing, and gradient checkpointing. While those methods show promising results for static DNNs, little is known about how to optimize all of their m-hyperparameters online and opportunistically, especially for the retraining tasks of edge applications. In this paper, we propose MPOptimizer, which jointly optimizes an ensemble of m-hyperparameters according to the input distribution and the edge resources available at runtime. The key feature of MPOptimizer is its ability to easily emulate the execution of retraining tasks under different m-hyperparameters and thus effectively estimate their influence on task performance. We implement MPOptimizer on prevalent DNNs and demonstrate its effectiveness against state-of-the-art techniques: it successfully finds the best configuration, improving model accuracy by an average of 13% (up to 25.3%) while reducing memory and training time by 4.1x and 5.3x at the same model accuracy.
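The abstract describes jointly searching an ensemble of memory-related hyperparameters (batch size, frozen layers, gradient checkpointing) under an edge memory budget. The sketch below is a minimal stand-in for that idea, assuming a toy closed-form memory model and a stub scoring function in place of MPOptimizer's execution emulation; none of the names or numbers come from the paper.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class MConfig:
    batch_size: int
    frozen_layers: int        # leading layers whose parameters are frozen
    checkpoint_segments: int  # 0 disables gradient checkpointing

def estimated_memory_mb(cfg, acts_per_sample_mb=4.0, params_mb=200.0, layers=12):
    """Toy memory model: activations scale with batch size and trainable depth, and
    gradient checkpointing divides activation memory at the cost of recomputation."""
    trainable = layers - cfg.frozen_layers
    act = acts_per_sample_mb * cfg.batch_size * trainable / layers
    if cfg.checkpoint_segments:
        act /= cfg.checkpoint_segments
    grads = params_mb * trainable / layers
    return params_mb + grads + act

def pick_config(mem_budget_mb, score):
    """Return the feasible configuration with the highest (stub) retraining score."""
    space = [MConfig(b, f, c)
             for b, f, c in product((8, 16, 32, 64), (0, 4, 8), (0, 2, 4))]
    feasible = [c for c in space if estimated_memory_mb(c) <= mem_budget_mb]
    return max(feasible, key=score, default=None)

if __name__ == "__main__":
    # Stub score standing in for emulated retraining quality: prefer larger batches,
    # penalize freezing.  A real system would estimate this by emulation, not a formula.
    best = pick_config(600.0, lambda c: c.batch_size - 5 * c.frozen_layers)
    print(best)
```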
Citations: 0
Convergence-aware optimal checkpointing for exploratory deep learning training jobs
IF 6.2, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-11-08. DOI: 10.1016/j.future.2024.107597
Hongliang Li, Zichen Wang, Hairui Zhao, Meng Zhang, Xiang Li, Haixiao Xu
Training Deep Learning (DL) models is becoming more time-consuming, so interruptions to the training process are inevitable. For an HPC (High Performance Computing) job, an optimal checkpointing interval that minimizes the fault tolerance overhead can be obtained under the precondition that job progress is proportional to execution time. Unfortunately, this is not the case in DL model training, where a DL training job yields diminishing returns across its lifetime. Meanwhile, training DL models is inherently exploratory, with early termination frequently occurring during model training and development. This makes the early progress of a DL training job more valuable than its later progress. Evenly placed checkpoints would either increase the risk in the early stages or waste resources overprotecting the later stages. Moreover, in data parallelism, state-of-the-art quality-driven scheduling strategies allocate more resources to the early stages of a job than to the later ones to accelerate training progress, which further amplifies the issue. In summary, the early stage is more important than the later stages, and allocating more fault-tolerance resources to the early stages benefits model exploration. Based on this conclusion, we present COCI, an approach that computes the optimal checkpointing configuration for an exploratory DL training job, minimizing the fault tolerance overhead, including checkpoint cost and recovery cost. We implement COCI on top of a state-of-the-art iteration-level checkpointing mechanism, as a pluggable module compatible with PyTorch that requires no extra user input. The experimental results show that COCI reduces fault tolerance overhead by up to 40.18% in the serial scenario and by 60.64% in the data-parallel scenario compared to existing state-of-the-art DL fault tolerance methods.
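For context, the classic uniform-progress result the abstract contrasts itself with is the Young/Daly interval, sqrt(2 * C * MTBF) for checkpoint cost C and mean time between failures MTBF. The sketch below computes that interval and then compares the expected lost progress of a uniform checkpoint schedule against a front-loaded one under a made-up diminishing-returns progress curve. It only illustrates the motivation; the curve, schedules, and loss model are assumptions, not COCI's.

```python
import math

def young_daly_interval(checkpoint_cost_s, mtbf_s):
    """Classic optimal checkpoint interval for uniform-progress jobs: sqrt(2 * C * MTBF)."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

def expected_lost_progress(checkpoint_times, progress):
    """Expected progress lost to a single failure uniformly likely over the job's lifetime,
    given cumulative checkpoint times (seconds) and a progress curve progress(t)."""
    horizon = checkpoint_times[-1]
    loss, prev = 0.0, 0.0
    for t in checkpoint_times:
        mid = (prev + t) / 2.0
        # Failure lands in (prev, t] with probability (t - prev) / horizon and, on average,
        # loses the progress made since the last checkpoint at time prev.
        loss += ((t - prev) / horizon) * (progress(mid) - progress(prev))
        prev = t
    return loss

if __name__ == "__main__":
    print(f"uniform interval: {young_daly_interval(60, 6 * 3600):.0f} s")
    diminishing = lambda t: 1.0 - math.exp(-t / 3600.0)     # fast early gains, slow later
    uniform = [i * 1800 for i in range(1, 9)]               # every 30 min over 4 h
    front_loaded = [600, 1500, 2700, 4200, 6000, 8400, 11400, 14400]
    print(expected_lost_progress(uniform, diminishing),
          expected_lost_progress(front_loaded, diminishing))
```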
Citations: 0
Task Offloading Optimization for Multi-objective Based on Cloud-Edge-End Collaboration in Maritime Networks
IF 6.2, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-11-08. DOI: 10.1016/j.future.2024.107588
Lingqiang Liu, Ying Zhang
In recent years, global maritime activities have surged, yet maritime networks face significant limitations in capacity. To address this challenge, integrating mobile edge computing into maritime networks has emerged as a solution, enabling the offloading of computation-intensive tasks to the edge to enhance system performance. However, existing research often narrowly focuses on either system cost or Quality of Service (QoS), failing to optimize both concurrently. This study aims to bridge this research gap by proposing a novel approach that optimizes both system cost and QoS simultaneously through collaborative computing among terminals, edge servers, and a cloud server in a maritime network environment. We leverage the Improved Coati Optimization Algorithm (ICOA) to optimize transmission power for vessel users, and subsequently, we apply Binary Particle Swarm Optimization (BPSO) to make task offloading decisions that consider both system cost and QoS. Experimental results demonstrate that our proposed approach significantly outperforms existing benchmark algorithms in balancing system cost and QoS in cloud-edge-end collaborative scenarios.
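The second stage the abstract mentions, binary PSO over offloading decisions, can be sketched generically as below: each particle is a 0/1 vector (offload or not, per task) and the fitness is a weighted delay-plus-energy cost. The cost weights, delay and energy numbers, and hyperparameters are invented for illustration, and the ICOA power-control stage is not shown.

```python
import math
import random

def bpso_offload(cost, n_tasks, swarm=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal binary PSO: each particle is a 0/1 vector, 1 = offload the task to the edge."""
    rng = random.Random(seed)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    pos = [[rng.randint(0, 1) for _ in range(n_tasks)] for _ in range(swarm)]
    vel = [[0.0] * n_tasks for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(n_tasks):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = 1 if rng.random() < sig(vel[i][d]) else 0
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

if __name__ == "__main__":
    local_delay = [4.0, 2.5, 6.0, 3.0, 5.5]    # seconds if a task runs on the vessel
    edge_delay = [1.0, 1.5, 1.2, 1.4, 1.1]     # seconds if offloaded (incl. transmission)
    edge_energy = [0.8, 0.6, 0.9, 0.7, 0.8]    # transmission energy per offloaded task

    def cost(x):
        # Toy single-objective stand-in for the cost/QoS trade-off: delay plus weighted energy.
        delay = sum(edge_delay[i] if x[i] else local_delay[i] for i in range(len(x)))
        energy = sum(edge_energy[i] for i in range(len(x)) if x[i])
        return delay + 0.5 * energy

    print(bpso_offload(cost, n_tasks=5))
```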
Citations: 0
FedGen: Personalized federated learning with data generation for enhanced model customization and class imbalance
IF 6.2, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-11-07. DOI: 10.1016/j.future.2024.107595
Peng Zhao, Shaocong Guo, Yanan Li, Shusen Yang, Xuebin Ren
Federated learning has emerged as a prominent solution for the collaborative training of machine learning models without exchanging local data. However, existing approaches often impose rigid constraints on model heterogeneity, limiting the ability of clients to customize unique models and increasing the vulnerability of models to potential attacks. This paper presents FedGen, a novel personalized federated learning framework based on generative adversarial networks (GANs). FedGen shifts the focus from training task-specific models to generating data, especially for minority classes with imbalanced data. With FedGen, clients can gain knowledge from others by training generators, while maintaining a heterogeneous local model and avoiding sharing model information with other participants. Moreover, to address challenges arising from imbalanced data, we propose AT-GAN, a novel generative model incorporating pseudo augmentation and differentiable augmentation modules to foster healthy competition between the generator and discriminator. To evaluate the effectiveness of our approach, we conduct extensive experiments on real-world tabular datasets. The experimental results demonstrate that FedGen significantly enhances the performance of local models, achieving improvements of up to 11.92% in F1 score and up to 9.14% in MCC score compared to existing methods.
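To make the data-generation idea concrete, the sketch below trains a small GAN on one client's minority-class rows (synthetic here) and returns the generator, which is the artifact such a scheme would share instead of the task model. It is a plain GAN, not the paper's AT-GAN (no pseudo or differentiable augmentation), and all shapes and hyperparameters are placeholders.

```python
import torch
from torch import nn

class Generator(nn.Module):
    def __init__(self, noise_dim=16, out_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(noise_dim, 64), nn.ReLU(),
                                 nn.Linear(64, out_dim))

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self, in_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.LeakyReLU(0.2),
                                 nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x)

def train_local_generator(minority_rows, epochs=200, noise_dim=16):
    """Train a small GAN on one client's minority-class rows; the generator is what a
    FedGen-style client would share instead of its task model."""
    g = Generator(noise_dim, minority_rows.shape[1])
    d = Discriminator(minority_rows.shape[1])
    opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(d.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        z = torch.randn(len(minority_rows), noise_dim)
        fake = g(z)
        # Discriminator step: real rows labeled 1, generated rows labeled 0.
        loss_d = (bce(d(minority_rows), torch.ones(len(minority_rows), 1))
                  + bce(d(fake.detach()), torch.zeros(len(fake), 1)))
        opt_d.zero_grad()
        loss_d.backward()
        opt_d.step()
        # Generator step: try to make the discriminator label fakes as real.
        loss_g = bce(d(fake), torch.ones(len(fake), 1))
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()
    return g

if __name__ == "__main__":
    rows = torch.randn(64, 8) * 0.5 + 2.0           # stand-in for minority-class samples
    gen = train_local_generator(rows)
    synthetic = gen(torch.randn(128, 16)).detach()  # data other clients could augment with
    print(synthetic.shape)
```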
Citations: 0
Time-constrained persistent deletion for key–value store engine on ZNS SSD
IF 6.2, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-11-06. DOI: 10.1016/j.future.2024.107598
Shiqiang Nie, Tong Lei, Jie Niu, Qihan Hu, Song Liu, Weiguo Wu
The inherent out-of-place update characteristic of the Log-Structured Merge tree (LSM tree) cannot guarantee persistent deletion within a specific time window, leading to potential data privacy and security issues. Existing solutions like Lethe-Fade ensure time-constrained persistent deletion but introduce considerable write overhead, worsening the write amplification issue, particularly for key–value stores on ZNS SSD. To address this problem, we propose a zone-aware persistent deletion scheme for key–value store engines. To mitigate the write amplification induced by level compaction, we design an adaptive SSTable selection strategy for each level in the LSM tree. Additionally, since SSTables with deletion records become invalid once the persistent deletion timer reaches its threshold, we design a tombstone-aware zone allocation strategy to reduce the data migration induced by garbage collection. Further, we optimize victim zone selection in GC to reduce the invalid migration of tombstone files. Experimental results demonstrate that our scheme effectively ensures that most outdated physical versions are deleted before reaching the persistent deletion time threshold. When deleting 10% of keys in the key–value store engine, the scheme reduces write amplification by 74.7% and garbage-collection-induced writes by 87.3% compared to the Lethe-Fade scheme.
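A toy version of the tombstone-aware zone allocation idea: co-locate SSTables whose persistent-deletion deadlines fall close together, so a whole zone expires at once and can be reset without migrating live data. The grouping window, zone capacity, and class and method names below are assumptions for illustration, not the paper's scheme.

```python
from dataclasses import dataclass, field

@dataclass
class Zone:
    zone_id: int
    deadline: float                 # latest persistent-deletion deadline among its files
    capacity: int = 4               # assumed number of SSTables per zone
    files: list = field(default_factory=list)

class TombstoneAwareAllocator:
    """Co-locate SSTables whose tombstones expire around the same time, so a zone becomes
    wholly invalid at once and can be reset instead of garbage-collected with migration."""
    def __init__(self, window_s=3600):
        self.window_s = window_s    # how close deadlines must be to share a zone
        self.zones = []
        self.next_id = 0

    def place(self, sstable_name, delete_deadline):
        for z in self.zones:
            if len(z.files) < z.capacity and abs(z.deadline - delete_deadline) <= self.window_s:
                z.files.append(sstable_name)
                z.deadline = max(z.deadline, delete_deadline)
                return z.zone_id
        z = Zone(self.next_id, delete_deadline, files=[sstable_name])
        self.next_id += 1
        self.zones.append(z)
        return z.zone_id

    def victim_for_gc(self, now):
        """Prefer zones whose deadline has passed: resetting them migrates no live data."""
        expired = [z for z in self.zones if z.deadline <= now]
        return min(expired or self.zones, key=lambda z: z.deadline)

if __name__ == "__main__":
    alloc = TombstoneAwareAllocator()
    print(alloc.place("sst-12", delete_deadline=1000))    # zone 0
    print(alloc.place("sst-13", delete_deadline=1200))    # same zone: deadlines are close
    print(alloc.place("sst-14", delete_deadline=90000))   # new zone
    print(alloc.victim_for_gc(now=2000).zone_id)          # zone 0 has already expired
```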
Citations: 0
RNC-DP: A personalized trajectory data publishing scheme combining road network constraints and GAN
IF 6.2, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-11-06. DOI: 10.1016/j.future.2024.107589
Hui Wang, Haiyang Li, Zihao Shen, Peiqian Liu
The popularity of location-based services has made people's lives more convenient and generates large amounts of trajectory data. Analyzing these data can contribute to society's development and provide better location services for users, but it also raises the security problem of personal trajectory privacy leakage. However, existing methods often suffer from either excessive privacy protection or insufficient protection of individual privacy. Therefore, this paper proposes a personalized trajectory data publishing scheme combining road network constraints and GAN (RNC-DP). First, after grid-representing the trajectory data, we remove the unreachable grids and define a trajectory generation constraint. Second, the proposed TraGM model synthesizes trajectory data that meets the constraints. Third, during the trajectory data publishing process, the proposed TraDP mechanism performs k-means clustering on the synthesized trajectories and assigns appropriate privacy budgets to the clustered, generalized trajectory location points. Finally, the protected trajectory data is published. Compared with existing schemes, the proposed scheme improves privacy protection strength by 10.2%–41.2% while balancing data availability, and has low time complexity.
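The publishing step the abstract outlines, clustering generalized locations and splitting a differential-privacy budget across clusters, can be sketched as below with a tiny k-means and Laplace noise whose scale depends on each cluster's share of the total budget. The proportional split, sensitivity value, and synthetic data are illustrative assumptions, not the TraDP mechanism itself.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Tiny 2-D k-means; returns (centroids, assignment)."""
    rng = random.Random(seed)
    cents = rng.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            assign[i] = min(range(k),
                            key=lambda c: (p[0] - cents[c][0]) ** 2 + (p[1] - cents[c][1]) ** 2)
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                cents[c] = (sum(m[0] for m in members) / len(members),
                            sum(m[1] for m in members) / len(members))
    return cents, assign

def laplace(scale, rng):
    # Inverse-CDF sampling for Laplace(0, scale).
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def perturb_centroids(points, k, total_epsilon, sensitivity=1.0, seed=0):
    """Split the privacy budget across clusters in proportion to cluster size and add
    Laplace noise to each generalized location (centroid)."""
    rng = random.Random(seed)
    cents, assign = kmeans(points, k, seed=seed)
    sizes = [assign.count(c) for c in range(k)]
    noisy = []
    for c, (x, y) in enumerate(cents):
        eps_c = total_epsilon * sizes[c] / len(points) if sizes[c] else total_epsilon / k
        scale = sensitivity / eps_c
        noisy.append((x + laplace(scale, rng), y + laplace(scale, rng)))
    return noisy

if __name__ == "__main__":
    pts = ([(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(60)]
           + [(random.gauss(5, 1), random.gauss(5, 1)) for _ in range(40)])
    print(perturb_centroids(pts, k=2, total_epsilon=1.0))
```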
Citations: 0
CMPNet: A cross-modal multi-scale perception network for RGB-T crowd counting
IF 6.2, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-11-06. DOI: 10.1016/j.future.2024.107596
Shihui Zhang, Kun Chen, Gangzheng Zhai, He Li, Shaojie Han
Cross-modal crowd counting methods demonstrate better scene adaptability under complex conditions by introducing independent supplementary information. However, existing methods still face problems such as insufficient fusion of modal features, underutilization of crowd structure, and neglect of scale information. In response to these issues, this paper proposes a cross-modal multi-scale perception network (CMPNet). Specifically, CMPNet mainly consists of a cross-modal perception fusion module and a multi-scale feature aggregation module. The cross-modal perception fusion module effectively suppresses noise features while sharing features between different modalities, thereby significantly improving the robustness of the crowd counting process. The multi-scale feature aggregation module obtains rich crowd structure information through a spatial context aware graph convolution unit, and then integrates feature information from different scales to enhance the network's ability to perceive crowd density. To the best of our knowledge, CMPNet is the first attempt to model the crowd structure and mine its semantics in the field of cross-modal crowd counting. The experimental results show that CMPNet achieves state-of-the-art performance on all RGB-T datasets, providing an effective solution for cross-modal crowd counting. We will release the code at https://github.com/KunChenKKK/CMPNet.
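As a structural reference only, the sketch below shows the generic two-stream skeleton that RGB-T counting networks build on: separate RGB and thermal encoders, concatenation-based fusion, and a density-map head whose spatial sum gives the count. CMPNet's cross-modal perception fusion and graph-convolution aggregation modules are not reproduced, and the layer widths and input sizes are arbitrary.

```python
import torch
from torch import nn

class TwoStreamCounter(nn.Module):
    """Generic two-stream RGB-T skeleton: separate encoders per modality, concatenation
    fusion, and a one-channel density-map head whose spatial sum estimates the count."""
    def __init__(self, width=32):
        super().__init__()
        def encoder(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True))
        self.rgb_enc = encoder(3)       # RGB frames have 3 channels
        self.thermal_enc = encoder(1)   # thermal frames have 1 channel
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * width, width, 1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, 1), nn.ReLU(inplace=True))   # non-negative density map

    def forward(self, rgb, thermal):
        fused = torch.cat([self.rgb_enc(rgb), self.thermal_enc(thermal)], dim=1)
        return self.fuse(fused)

if __name__ == "__main__":
    model = TwoStreamCounter()
    rgb = torch.randn(2, 3, 128, 160)        # batch of RGB frames
    thermal = torch.randn(2, 1, 128, 160)    # spatially aligned thermal frames
    density = model(rgb, thermal)
    print(density.shape, density.sum(dim=(1, 2, 3)))   # per-image estimated counts
```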
Citations: 0
Private approximate nearest neighbor search for on-chain data based on locality-sensitive hashing
IF 6.2, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-11-05. DOI: 10.1016/j.future.2024.107586
Siyuan Shang, Xuehui Du, Xiaohan Wang, Aodi Liu
Blockchain manages data with immutability, decentralization, and traceability, offering new solutions for traditional information systems and greatly facilitating data sharing. However, on-chain data query still faces challenges such as low efficiency and difficulty in privacy protection. We propose a private Approximate Nearest Neighbor (ANN) search method for on-chain data based on Locality-Sensitive Hashing (LSH), which mainly includes two steps: query initialization and query implementation. In query initialization, the data management node builds hash tables for on-chain data through improved LSH; these tables are encrypted and stored on the blockchain using attribute-based encryption. In query implementation, a node with the correct privileges uses random smart contracts to query on-chain data privately via distributed point functions and a privacy protection technique called oblivious masking. To validate the effectiveness of this method, we compare its performance with two ANN search algorithms: query time is reduced by 57% and 59.2%, average recall is increased by 4.5% and 2%, average precision by 7.7% and 6.9%, average F1-score by 6% and 4.3%, and average initialization time is reduced by factors of 34 and 122, respectively. We also compare its performance with private ANN search methods using homomorphic encryption, differential privacy, and secure multi-party computation. The results show that our method can reduce query time by several orders of magnitude, which makes it more applicable to the blockchain environment. To the best of our knowledge, this is the first private ANN search method for on-chain data that considers both query efficiency and privacy protection, achieving efficient, accurate, and private data queries.
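The building block the scheme starts from, locality-sensitive hash tables, can be illustrated with plain random-hyperplane (cosine) LSH as below; a query only probes the buckets its own hash keys select. The paper's improved LSH, attribute-based encryption, distributed point functions, and oblivious masking are outside this sketch, and the class and parameter names are assumptions.

```python
import random
from collections import defaultdict

class RandomHyperplaneLSH:
    """Minimal cosine-similarity LSH: each table hashes a vector to the sign pattern of its
    dot products with random hyperplanes, and a query only probes the matching buckets."""
    def __init__(self, dim, n_tables=8, n_planes=12, seed=0):
        rng = random.Random(seed)
        self.planes = [[[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]
                       for _ in range(n_tables)]
        self.tables = [defaultdict(list) for _ in range(n_tables)]

    def _key(self, t, vec):
        return tuple(1 if sum(p * x for p, x in zip(plane, vec)) >= 0 else 0
                     for plane in self.planes[t])

    def index(self, item_id, vec):
        for t in range(len(self.tables)):
            self.tables[t][self._key(t, vec)].append((item_id, vec))

    def query(self, vec, k=3):
        candidates = {}
        for t in range(len(self.tables)):
            for item_id, v in self.tables[t][self._key(t, vec)]:
                candidates[item_id] = v
        sq_dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
        return sorted(candidates, key=lambda i: sq_dist(candidates[i], vec))[:k]

if __name__ == "__main__":
    rng = random.Random(1)
    lsh = RandomHyperplaneLSH(dim=16)
    for i in range(200):
        lsh.index(i, [rng.gauss(0, 1) for _ in range(16)])
    probe = [rng.gauss(0, 1) for _ in range(16)]
    print(lsh.query(probe))   # ids of approximate nearest neighbors among indexed items
```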
Citations: 0
Using Deep Reinforcement Learning (DRL) for minimizing power consumption in Video-on-Demand (VoD) storage systems
IF 6.2, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-11-05. DOI: 10.1016/j.future.2024.107582
Minseok Song, Mingoo Kwon
As video streaming services such as Netflix become popular, resolving the problem of high power consumption arising from both large data size and high bandwidth in video storage systems has become important. However, because various factors, such as the power characteristics of heterogeneous storage devices, variable workloads, and disk array models, influence storage power consumption, reducing power consumption with deterministic policies is ineffective. To address this, we present a new deep reinforcement learning (DRL)-based file placement algorithm for replication-based video storage systems, which aims to minimize overall storage power consumption. We first model the video storage system with time-varying streaming workloads as the DRL environment, in which the agent aims to find power-efficient file placement. We then propose a proximal policy optimization (PPO) algorithm, consisting of (1) an action space that determines the placement of each file; (2) an observation space that allows the agent to learn a power-efficient placement based on the current I/O bandwidth utilization; (3) a reward model that assigns a greater penalty for increased power consumption for each action; and (4) an action masking model that supports effective learning by preventing agents from selecting unnecessary actions. Extensive simulations were performed to evaluate the proposed scheme under various solid-state disk (SSD) models and replication configurations. Results show that our scheme reduces storage power consumption by 5% to 25.8% (average 12%) compared to existing benchmark methods known to be effective for file placement.
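The abstract's DRL formulation (observation: per-group bandwidth utilization; action: which disk group receives the next file; reward: negative power increase; plus an action mask) can be sketched as a gym-style environment like the one below, with a random policy standing in for PPO. All capacities, power figures, and file sizes are invented, and the replication model is omitted.

```python
import random

class PlacementEnv:
    """Gym-style toy environment: place pending files onto disk groups one at a time.
    Observation: per-group bandwidth utilization. Action: target group for the next file.
    Reward: negative increase in storage power. Action mask: groups that can still fit it."""
    def __init__(self, n_groups=4, bw_capacity=100.0, active_power=8.0, idle_power=2.0, seed=0):
        self.rng = random.Random(seed)
        self.n_groups, self.bw_capacity = n_groups, bw_capacity
        self.active_power, self.idle_power = active_power, idle_power
        self.reset()

    def reset(self):
        self.load = [0.0] * self.n_groups                              # placed bandwidth
        self.pending = [self.rng.uniform(5, 20) for _ in range(15)]    # file bandwidth demands
        return self._obs()

    def _obs(self):
        return [l / self.bw_capacity for l in self.load]

    def _power(self):
        # A group serving any load is spun up to active power; otherwise it idles.
        return sum(self.active_power if l > 0 else self.idle_power for l in self.load)

    def action_mask(self):
        demand = self.pending[-1]
        return [self.load[g] + demand <= self.bw_capacity for g in range(self.n_groups)]

    def step(self, group):
        before = self._power()
        self.load[group] += self.pending.pop()
        reward = -(self._power() - before)        # penalize any power increase
        done = not self.pending
        return self._obs(), reward, done

if __name__ == "__main__":
    env = PlacementEnv()
    obs, done, total = env.reset(), False, 0.0
    while not done:
        valid = [g for g, ok in enumerate(env.action_mask()) if ok]
        obs, reward, done = env.step(random.choice(valid))   # a trained PPO policy would go here
        total += reward
    print(obs, total)
```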
Citations: 0