
Latest Publications in IEEE Transactions on Computers

Edge-MPQ: Layer-Wise Mixed-Precision Quantization With Tightly Integrated Versatile Inference Units for Edge Computing
IF 3.6 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-08-12 | DOI: 10.1109/TC.2024.3441860
Xiaotian Zhao;Ruge Xu;Yimin Gao;Vaibhav Verma;Mircea R. Stan;Xinfei Guo
As one of the prevailing deep neural network compression techniques, layer-wise mixed-precision quantization (MPQ) strikes a better balance between accuracy and efficiency than uniform quantization schemes. However, existing MPQ strategies either lack hardware awareness or incur huge computation costs, limiting their deployment at the edge. Additionally, researchers usually make a one-time decision between post-training quantization (PTQ) and quantization-aware training (QAT) based on the quantized bit-width or hardware requirements. In this paper, we propose the tight integration of versatile MPQ inference units supporting INT2-INT8 and INT16 precisions, which feature a hierarchical multiplier architecture, into a RISC-V processor pipeline through micro-architecture and Instruction Set Architecture (ISA) co-design. Synthesized with a 14nm technology, the design delivers a speedup of $15.50\times$ to $47.67\times$ over the baseline RV64IMA core when running a single convolution layer kernel and achieves up to 2.86 GOPS performance. This work also achieves an energy efficiency of 20.51 TOPS/W, which not only exceeds contemporary state-of-the-art MPQ hardware solutions at the edge but also marks a significant advancement in the field. We also propose a novel MPQ search algorithm that incorporates both hardware awareness and training necessity. The algorithm samples layer-wise sensitivities using a set of newly proposed metrics and runs a heuristic search. Evaluation results show that this search algorithm achieves $2.2\%\sim 6.7\%$ higher inference accuracy under similar hardware constraints compared to state-of-the-art MPQ strategies. Furthermore, we expand the search space using a dynamic programming (DP) strategy to perform the search with more fine-grained accuracy intervals and to support multi-dimensional search. This further improves the inference accuracy by over $1.3\%$ compared to a greedy-based search.
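The DP-based bit-width search the abstract mentions can be sketched as a knapsack-style pass over layers. The loss and cost tables below are invented for illustration, not values from the paper; the actual algorithm additionally samples layer sensitivities and hardware feedback, which this sketch abstracts away.

```python
# Knapsack-style DP sketch of layer-wise bit-width selection under a
# hardware budget (illustrative numbers, not the paper's).

def mpq_dp_search(loss, cost, budget):
    """loss[l][b]: accuracy loss if layer l uses bit-width option b;
    cost[l][b]: memory cost of that choice. Returns (loss, choices)."""
    dp = {0: (0.0, [])}          # used budget -> best (total loss, picks)
    for layer_loss, layer_cost in zip(loss, cost):
        nxt = {}
        for used, (acc, picks) in dp.items():
            for b, (dl, dc) in enumerate(zip(layer_loss, layer_cost)):
                u = used + dc
                if u > budget:
                    continue
                cand = (acc + dl, picks + [b])
                if u not in nxt or cand[0] < nxt[u][0]:
                    nxt[u] = cand
        dp = nxt
    return min(dp.values(), key=lambda t: t[0])

# Two layers, bit-width options [2, 4, 8] bits, made-up numbers:
loss = [[0.30, 0.10, 0.01], [0.05, 0.02, 0.00]]   # loss per option
cost = [[2, 4, 8], [2, 4, 8]]                     # cost per option
best_loss, picks = mpq_dp_search(loss, cost, budget=10)
print(picks, round(best_loss, 3))   # layer 0 gets 8 bits, layer 1 gets 2
```

With a budget of 10, the search spends most bits on the more sensitive first layer, which is the behavior layer-wise MPQ is after.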
Citations: 0
A Mutual-Influence-Aware Heuristic Method for Quantum Circuit Mapping
IF 3.6 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-08-12 | DOI: 10.1109/TC.2024.3441825
Kui Ye;Shengxin Dai;Bing Guo;Yan Shen;Chuanjie Liu;Kejun Bi;Fei Chen;Yuchuan Hu;Mingjie Zhao
Quantum circuit mapping (QCM) is a crucial preprocessing step for executing a logical circuit (LC) on noisy intermediate-scale quantum (NISQ) devices. Balancing the introduction of extra gates against the efficiency of preprocessing poses a significant challenge for the mapping process. To address this challenge, we propose the mutual-influence-aware (MIA) heuristic method, which integrates an initial mapping search framework, an initial mapping generator, and a heuristic circuit mapper. Initially, the framework utilizes the generator to obtain a favorable starting point for the initial mapping search. With this starting point, the search process can efficiently discover a promising initial mapping within a few bidirectional iterations. The circuit mapper considers the mutual influences of SWAP gates and is invoked once per iteration. Ultimately, the best result from all iterations is taken as the QCM outcome. The experimental results on extensive benchmark circuits demonstrate that, compared to the iterated local search (ILS) method, which represents the current state of the art, our MIA method introduces a similar number of extra gates while achieving nearly 95 times faster execution.
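For context, a common distance-based heuristic for choosing SWAPs in circuit mapping can be sketched as below. This is not the paper's MIA scoring, which additionally models the mutual influences between SWAP gates; it only shows the baseline idea of scoring each candidate SWAP by how much it shrinks the coupling-graph distance of pending two-qubit gates.

```python
# Distance-based SWAP scoring sketch (a generic mapping heuristic).

def all_pairs_dist(edges, n):
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for a, b in edges:
        d[a][b] = d[b][a] = 1
    for k in range(n):                  # Floyd-Warshall
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

def best_swap(edges, n, mapping, gates):
    """mapping: logical -> physical qubit; gates: pending (l1, l2)."""
    d = all_pairs_dist(edges, n)
    best = None
    for p, q in edges:                  # a SWAP acts on one coupler
        m = dict(mapping)
        inv = {v: k for k, v in m.items()}
        la, lb = inv.get(p), inv.get(q)
        if la is not None:
            m[la] = q
        if lb is not None:
            m[lb] = p
        c = sum(d[m[a]][m[b]] for a, b in gates)
        if best is None or c < best[0]:
            best = (c, (p, q))
    return best

# Linear coupling 0-1-2-3; logical qubits 0 and 2 sit 3 hops apart.
edges = [(0, 1), (1, 2), (2, 3)]
cost_after, swap = best_swap(edges, 4, {0: 0, 1: 1, 2: 3}, [(0, 2)])
print(cost_after, swap)   # the chosen SWAP drops the distance from 3 to 2
```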
Citations: 0
Response-Time Analysis of Bundled Gang Tasks Under Partitioned FP Scheduling
IF 3.6 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-08-12 | DOI: 10.1109/TC.2024.3441823
Veronica Rispo;Federico Aromolo;Daniel Casini;Alessandro Biondi
The study of parallel task models for real-time systems has become fundamental due to the increasing computational demand of modern applications. Recently, gang scheduling has gained attention for improving performance in tightly synchronized parallel applications. Nevertheless, existing studies often overestimate computational demand by assuming a constant number of cores for each task. In contrast, the bundled model accurately represents internal parallelism by means of a sequence of segments, each demanding a variable number of cores. This model is particularly relevant to modern real-time systems, as it allows transforming general parallel tasks into bundled tasks while preserving accurate parallelism. However, it has only been analyzed under global scheduling, which carries analytical pessimism and considerable run-time overheads. This paper introduces two response-time analysis techniques for parallel real-time tasks under partitioned, fixed-priority gang scheduling with the bundled model, together with a set of specialized allocation heuristics. Experimental results compare the proposed methods against state-of-the-art approaches.
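For background, the textbook fixed-priority response-time recurrence on a single core, which analyses of this kind extend to bundled gang tasks on partitioned cores, is the fixed-point iteration R = C + Σ ceil(R/T_j)·C_j over higher-priority tasks. A minimal sketch with invented task parameters:

```python
# Classical single-core fixed-priority response-time analysis; the
# paper's techniques generalize this style of fixed-point iteration.
import math

def response_time(C, hp, deadline):
    """C: WCET of the task under analysis; hp: [(C_j, T_j), ...]."""
    R = C
    while True:
        nxt = C + sum(math.ceil(R / T) * Cj for Cj, T in hp)
        if nxt == R:
            return R            # fixed point: converged response time
        if nxt > deadline:
            return None         # deemed unschedulable
        R = nxt

# Task with C = 3 under interference from (C, T) = (1, 4) and (2, 10):
R = response_time(3, [(1, 4), (2, 10)], deadline=20)
print(R)   # -> 7
```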
Citations: 0
Automatic Generation and Optimization Framework of NoC-Based Neural Network Accelerator Through Reinforcement Learning
IF 3.6 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-08-12 | DOI: 10.1109/TC.2024.3441822
Yongqi Xue;Jinlun Ji;Xinming Yu;Shize Zhou;Siyue Li;Xinyi Li;Tong Cheng;Shiping Li;Kai Chen;Zhonghai Lu;Li Li;Yuxiang Fu
Choices of dataflows, which comprise intra-core neural network (NN) computation loop-nest scheduling and inter-core hardware mapping strategies, play a critical role in the performance and energy efficiency of NoC-based neural network accelerators. Confronted with an enormous dataflow exploration space, this paper proposes an automatic framework for generating and optimizing full-layer mappings based on two reinforcement learning algorithms, A2C and PPO. Combining soft and hard constraints, this work transforms the mapping configuration into a sequential decision problem and aims to explore performant and energy-efficient hardware mappings for NoC systems. We evaluate the performance of the proposed framework on 10 experimental neural networks. The results show that compared with direct-X mapping, direct-Y mapping, GA-based mapping, and NN-aware mapping, our optimization framework reduces the average execution time of the 10 experimental NNs by 9.09%, improves throughput by 11.27%, reduces energy by 12.62%, and reduces the time-energy product (TEP) by 14.49%. The results also show that the performance enhancement is related to the coefficient of variation of the neural network being computed.
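The "mapping as a sequential decision problem" idea can be illustrated on a toy 2x2 mesh. Here exhaustive search stands in for the paper's A2C/PPO agents, and the traffic volumes are invented; the point is only the cost model an agent would optimize.

```python
# Toy NoC mapping: place NN layers on a 2x2 mesh and score a complete
# mapping by hop-weighted traffic (exhaustive search replaces RL here).
from itertools import permutations

def hops(a, b, width=2):   # Manhattan distance on a width x width mesh
    return abs(a % width - b % width) + abs(a // width - b // width)

def mapping_cost(mapping, traffic):
    """mapping[i] = core of layer i; traffic[i][j] = volume i -> j."""
    n = len(mapping)
    return sum(traffic[i][j] * hops(mapping[i], mapping[j])
               for i in range(n) for j in range(n))

# A 4-layer pipeline: consecutive layers exchange 8, 4, 2 units.
traffic = [[0, 8, 0, 0],
           [0, 0, 4, 0],
           [0, 0, 0, 2],
           [0, 0, 0, 0]]
best = min(permutations(range(4)), key=lambda m: mapping_cost(m, traffic))
print(best, mapping_cost(best, traffic))   # every hot pair one hop apart
```

An RL agent would build `mapping` one layer at a time, receiving the negative of this cost as its reward once the mapping is complete.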
Citations: 0
Online Container Scheduling With Fast Function Startup and Low Memory Cost in Edge Computing
IF 3.6 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-08-12 | DOI: 10.1109/TC.2024.3441836
Zhenzheng Li;Jiong Lou;Jianfei Wu;Jianxiong Guo;Zhiqing Tang;Ping Shen;Weijia Jia;Wei Zhao
Extending serverless computing to the edge has emerged as a promising approach to support services, but starting up containerized serverless functions incurs cold-start delay. Recent research has introduced container caching methods to alleviate the cold-start delay, including caching the entire container or a Zygote container. However, container caching incurs memory costs. The system must ensure both fast function startup and low memory cost on edge servers, a combination that has been overlooked in the literature. This paper aims to jointly optimize startup delay and memory cost. We formulate an online joint optimization problem that encompasses container scheduling decisions, including invocation distribution, container startup, and container caching. To solve the problem, we propose an online algorithm with a competitive-ratio guarantee and low computational complexity. The proposed algorithm decomposes the problem into two subproblems and solves them sequentially. Each container is assigned a randomized strategy, and these container-level decisions are merged to constitute the overall container caching decisions. Furthermore, a greedy-based subroutine is designed to solve the subproblem associated with invocation distribution and container startup decisions. Experiments on a real-world dataset indicate that the algorithm can reduce average startup delay by up to 23% and lower memory costs by up to 15%.
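A deliberately simplified greedy sketch of the delay-vs-memory trade-off (not the paper's randomized competitive algorithm) might rank functions by cold-start delay saved per byte of memory and pack them into the cache budget. All function profiles below are invented.

```python
# Greedy caching sketch: keep a container warm only while the startup
# delay it saves justifies its memory footprint (illustrative numbers).

def choose_cached(functions, mem_budget):
    """Rank by delay saved per MB, then pack greedily into the budget."""
    ranked = sorted(functions,
                    key=lambda f: f["rate"] * f["cold_start"] / f["mem"],
                    reverse=True)
    cached, used = [], 0
    for f in ranked:
        if used + f["mem"] <= mem_budget:
            cached.append(f["name"])
            used += f["mem"]
    return cached

funcs = [
    {"name": "thumbnail", "rate": 50,  "cold_start": 0.8, "mem": 128},
    {"name": "report",    "rate": 2,   "cold_start": 2.0, "mem": 512},
    {"name": "auth",      "rate": 200, "cold_start": 0.3, "mem": 64},
]
cached = choose_cached(funcs, mem_budget=256)
print(cached)   # -> ['auth', 'thumbnail']
```

The paper's online setting is harder because invocation rates are not known in advance, which is what motivates its randomized, competitive-ratio design.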
Citations: 0
Statistical Higher-Order Correlation Attacks Against Code-Based Masking
IF 3.6 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-05 | DOI: 10.1109/TC.2024.3424208
Wei Cheng;Jingdian Ming;Sylvain Guilley;Jean-Luc Danger
Masking is one of the most well-established methods to thwart side-channel attacks. Many masking schemes have been proposed in the literature, and code-based masking has emerged to unify several of them within a coding-theoretic framework. In this work, we investigate the side-channel resistance of code-based masking from a non-profiling perspective by utilizing correlation-based side-channel attacks. We present a systematic evaluation of correlation attacks with various higher-order (centered) moments and then derive the form of optimal correlation attacks. Interestingly, the Pearson correlation coefficient between the hypothetical leakage and the measured traces is connected to the signal-to-noise ratio in higher-order moments, and it turns out to be easy to evaluate rather than requiring repeated attacks. We also identify some ineffective higher-order correlation attacks at certain orders when the device leaks under the Hamming-weight leakage model. Our theoretical findings are verified through both simulated and real-world measurements.
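A minimal first-order correlation attack of the kind the abstract builds on can be simulated end to end. The "S-box" below is a random stand-in permutation (not AES) and the leakage model is synthetic Hamming weight plus Gaussian noise; higher-order variants correlate centered powers of the traces instead of the raw samples.

```python
# Simulated first-order correlation attack: correlate a Hamming-weight
# hypothesis against noisy traces; the key guess with the highest
# |Pearson correlation| wins. All leakage parameters are synthetic.
import random

HW = [bin(x).count("1") for x in range(256)]
rng = random.Random(0)
SBOX_STUB = list(range(256))
rng.shuffle(SBOX_STUB)                 # hypothetical S-box, not AES

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

secret = 0x3C
sim = random.Random(1)
plaintexts = [sim.randrange(256) for _ in range(1000)]
traces = [HW[SBOX_STUB[p ^ secret]] + sim.gauss(0, 1.0)
          for p in plaintexts]

def score(k):                          # |rho| for one key-byte guess
    hyp = [HW[SBOX_STUB[p ^ k]] for p in plaintexts]
    return abs(pearson(hyp, traces))

best = max(range(256), key=score)
print(hex(best))                       # recovers the secret key byte
```

Against a masked implementation, the first-order correlation above vanishes, which is why the paper studies correlations on higher-order centered moments of the traces.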
Citations: 0
LPAH: Illustrating Efficient Live Patching With Alignment Holes in Kernel Data
IF 3.6 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-05 | DOI: 10.1109/TC.2024.3424263
Chao Su;Xiaoshuang Xing;Xiaolu Cheng;Rui Guo;Chuanwen Luo
The Linux kernel is regularly updated to enhance security, improve performance, and introduce new functionality. Traditional updating methods typically require rebooting, leading to service disruptions and potential data loss. Live-patching technology dynamically updates kernel modules without rebooting, ensuring continuous service availability. However, this technique has its drawbacks. Since live patching alters the original structure of data types, it can no longer use base offsets to access their members, imposing considerable overhead. This paper proposes LPAH (Live Patching with Alignment Holes), a live-patching system that leverages the fragmented space generated by compile-time alignment of data types to enable effective live-patching updates for security vulnerability fixes, feature enhancements, and user-defined patching tasks. LPAH capitalizes on the relationship between these alignment holes and data objects, ensuring efficient access to extended data members while preserving the original data's integrity, and allowing other functions to remain unaffected by updates and replacements through explicit type casts. Extensive experimental results show that LPAH offers valid and robust live patching for multiple real vulnerabilities in the Linux kernel without degrading performance. Our method provides an efficient way to install security patches in the Linux kernel, and thus reinforces kernel security.
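The "alignment holes" LPAH repurposes are the padding bytes a C compiler inserts between struct members to satisfy natural alignment. `ctypes` mirrors the platform C layout rules, so a hypothetical kernel-style struct makes the holes visible (the struct itself is invented for illustration):

```python
# Visualizing compile-time alignment holes via ctypes, which follows
# the platform C struct layout rules.
import ctypes

class KernelObj(ctypes.Structure):   # hypothetical kernel-style struct
    _fields_ = [
        ("flag",  ctypes.c_uint8),   # offset 0, 1 byte
        # 3 padding bytes here so 'count' starts on a 4-byte boundary
        ("count", ctypes.c_uint32),  # offset 4, 4 bytes
        ("tag",   ctypes.c_uint8),   # offset 8, 1 byte
        # 3 tail-padding bytes round sizeof up to the 4-byte alignment
    ]

payload = 1 + 4 + 1                  # bytes of real data
hole_bytes = ctypes.sizeof(KernelObj) - payload
print(ctypes.sizeof(KernelObj), hole_bytes)   # 12 total, 6 hole bytes
```

Those 6 unused bytes are exactly the kind of space a live-patching system can claim for new data members without shifting the offsets of existing ones.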
Citations: 0
HYDRA: A Hybrid Resistance Drift Resilient Architecture for Phase Change Memory-Based Neural Network Accelerators
IF 3.6 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-06-24 | DOI: 10.1109/TC.2024.3404096
Thai-Hoang Nguyen;Muhammad Imran;Jaehyuk Choi;Joon-Sung Yang
In-memory computing (IMC) using Phase Change Memory (PCM) has proven effective for efficient processing of Deep Neural Networks (DNNs). However, with the use of multi-level cell PCM (MLC-PCM) in NVM-based accelerators, errors due to resistance drift in MLC-PCM can severely degrade DNN accuracy. In this paper, an analysis of the impact of resistance drift errors on the accuracy of an MLC-PCM based DNN accelerator shows that drift errors alone can significantly impact accuracy. This paper proposes Hydra, a hybrid resistance-drift-resilient architecture for MLC-PCM based DNN accelerators that use IMC for efficient computation. Hydra utilizes tri-level cell PCM, which has a negligible resistance drift error rate, to store the critical bits of DNN parameters, and MLC-PCM (4-level cell), which has a higher error rate but offers more storage density, for the non-critical bits. Experimental results on various DNN architectures, configurations, and datasets show that, in the presence of resistance drift errors in PCM, Hydra can maintain the baseline accuracy of DNNs for up to 1 year (resistance drift is time-dependent), whereas conventional drift tolerance techniques lead to a significant accuracy drop in just a few seconds.
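The hybrid storage idea can be sketched as splitting each 8-bit weight into critical MSBs, kept in drift-resilient tri-level cells, and non-critical LSBs, kept in denser 4-level MLC. The 4/4 split point and the injected drift error below are illustrative, not the paper's parameters.

```python
# Hybrid critical/non-critical bit split: drift can only corrupt the
# LSBs stored in MLC, so the reconstructed weight stays close.

def split_weight(w, crit_bits=4):
    """Return (critical MSBs, non-critical LSBs) of an 8-bit weight."""
    low = 8 - crit_bits
    return w >> low, w & ((1 << low) - 1)

def rebuild(msb, lsb, crit_bits=4):
    return (msb << (8 - crit_bits)) | lsb

w = 0b1011_0110                 # 182
msb, lsb = split_weight(w)      # msb stored drift-free, lsb in MLC
drifted_lsb = lsb ^ 0b0011      # drift flips low-order MLC bits only
w_drifted = rebuild(msb, drifted_lsb)
print(w, w_drifted)             # -> 182 181: error magnitude stays small
```

Because the MSBs carry most of a weight's numerical magnitude, confining drift errors to the LSB half bounds the per-weight error, which is why such a split preserves DNN accuracy.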
IEEE Transactions on Computers, vol. 73, no. 9, pp. 2123-2135. Published 2024-06-24.
Citations: 0
Novas: Tackling Online Dynamic Video Analytics With Service Adaptation at Mobile Edge Servers
IF 3.6 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-06-19 | DOI: 10.1109/TC.2024.3416675
Liang Zhang;Hongzi Zhu;Wen Fei;Yunzhe Li;Mingjin Zhang;Jiannong Cao;Minyi Guo
Video analytics at mobile edge servers offers significant benefits such as reduced response time and enhanced privacy. However, guaranteeing the varied quality-of-service (QoS) requirements of dynamic video analysis requests on heterogeneous edge devices remains challenging. In this paper, we propose a scalable online video analytics scheme, called Novas, which automatically makes precise service configuration adjustments as video content constantly changes. Specifically, Novas leverages the filtered confidence sum and a two-window t-test to detect accuracy fluctuations online without ground truth information. In such cases, Novas efficiently estimates the performance of all potential service configurations through a singular value decomposition (SVD)-based collaborative filtering method. Finally, given the NP-hardness of the optimal scheduling problem, a heuristic scheduling strategy that maximizes the minimum remaining resources is devised to schedule the most suitable configurations to servers for execution. We evaluate the effectiveness of Novas through extensive hybrid experiments conducted on a dedicated testbed. Results show that Novas achieves an over 27× improvement in satisfying accuracy requirements compared with existing methods that adopt fixed configurations, while ensuring latency requirements. Moreover, Novas improves the goodput of the system by an average of 37.86% compared to existing state-of-the-art scheduling solutions.
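The ground-truth-free drift check above can be approximated as a two-window test on per-frame detector confidences. A minimal sketch under stated assumptions: `drift_detected`, the window size, the Welch-style statistic, and the Gaussian confidence model are all illustrative — the paper's filtered confidence sum and exact test may differ.

```python
import numpy as np

def drift_detected(conf, win=50):
    """Two-window t-test on per-frame confidences: compare an older
    reference window against the most recent window. A significant drop
    in mean confidence suggests the current service configuration has
    become stale for the new video content (no ground truth needed)."""
    ref, cur = conf[-2 * win:-win], conf[-win:]
    se = np.sqrt(ref.var(ddof=1) / win + cur.var(ddof=1) / win)
    t = (ref.mean() - cur.mean()) / se
    # One-sided normal-approximation threshold for significance level ~0.01.
    return t > 2.33

rng = np.random.default_rng(1)
# Content change: confidence drops from ~0.9 to ~0.7 halfway through.
drifted = np.concatenate([rng.normal(0.9, 0.03, 50),
                          rng.normal(0.7, 0.03, 50)])
print(drift_detected(drifted))
```

A detected drop would then trigger the configuration re-estimation step (the SVD-based collaborative filtering) rather than an expensive profiling of every configuration.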
IEEE Transactions on Computers, vol. 73, no. 9, pp. 2220-2232. Published 2024-06-19.
Citations: 0
A Unified and Fully Automated Framework for Wavelet-Based Attacks on Random Delay
IF 3.6 | CAS Tier 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-06-19 | DOI: 10.1109/TC.2024.3416682
Qianmei Wu;Fan Zhang;Shize Guo;Kun Yang;Haoting Shen
As a common defense against side-channel attacks, random delay insertion introduces noise into the execution flow of encryption, which increases attack complexity. Accordingly, various techniques have been developed to mitigate the defensive effect of such insertions. As an advanced mathematical technique, wavelet analysis is considered more effective owing to its detailed and comprehensive interpretation of signals. In this paper, we propose a unified and fully automated wavelet-based attack framework (denoted UWAF), whose data processing is kept within one unified wavelet domain, with three enhanced components: denoising, alignment and key extraction. We put forward a new idea of combining machine learning with wavelet analysis to fully automate the attack framework, making it possible to search exhaustively for the optimal combination of wavelet-transform parameter settings. Our approach finds a previously unexplored setting of wavelet parameters and achieves a performance enhancement of about 20 times fewer traces required for successful key recovery. UWAF is compared with several mainstream attack frameworks. Experimental results show that it outperforms those counterparts, and can be considered an effective framework-level solution for defeating the random-delay-insertion countermeasure.
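The wavelet-domain denoising component can be sketched with a hand-rolled orthonormal Haar transform and soft thresholding of the detail coefficients. This is a minimal sketch under stated assumptions: the Haar basis, the fixed threshold, and the sine-plus-noise stand-in trace are illustrative choices — UWAF searches over wavelet parameters automatically rather than fixing them.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar discrete wavelet transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-pass)
    return a, d

def haar_idwt(a, d):
    """Inverse of one Haar level."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(trace, levels=3, thresh=0.5):
    """Decompose, soft-threshold the detail coefficients (where wide-band
    measurement noise concentrates), and reconstruct the trace."""
    a, details = trace, []
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0))
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 4 * t)            # stand-in for a leakage trace
noisy = clean + rng.normal(0, 0.4, t.size)   # measurement noise

den = denoise(noisy)
mse_noisy = np.mean((noisy - clean) ** 2)
mse_den = np.mean((den - clean) ** 2)
print(f"MSE noisy: {mse_noisy:.3f}, denoised: {mse_den:.3f}")
```

Keeping alignment and key extraction in the same wavelet domain, as UWAF does, avoids repeatedly transforming back to the time domain between stages.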
IEEE Transactions on Computers, vol. 73, no. 9, pp. 2206-2219. Published 2024-06-19.
Citations: 0