
Latest Publications in IEEE Transactions on Computers

A Unified and Fully Automated Framework for Wavelet-Based Attacks on Random Delay
IF 3.6 CAS Tier 2 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-06-19 DOI: 10.1109/TC.2024.3416682
Qianmei Wu;Fan Zhang;Shize Guo;Kun Yang;Haoting Shen
As a common defense against side-channel attacks, random delay insertion introduces noise into the execution flow of encryption, which increases attack complexity. Accordingly, various techniques have been exploited to mitigate the defensive effect of such insertions. As an advanced mathematical technique, wavelet analysis is considered more effective owing to its detailed and comprehensive interpretation of signals. In this paper, we propose a unified and fully automated wavelet-based attack framework (denoted as UWAF), whose data processing is kept within one unified wavelet domain, with three enhanced components: denoising, alignment, and key extraction. We put forward a new idea of combining machine learning with wavelet analysis to fully automate the attack framework, making it possible to search exhaustively for the optimal combination of parameter settings in the wavelet transform. Our proposal finds a new setting of wavelet parameters that has not been exploited before and reduces the number of traces required for successful key recovery by roughly a factor of 20. UWAF is compared with several mainstream attack frameworks. Experimental results show that it outperforms those counterparts and can be considered an effective framework-level solution to defeat the countermeasure of random delay insertion.
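To make the wavelet-domain pipeline described above concrete, here is a minimal Python sketch of trace denoising and re-alignment using PyWavelets and NumPy. The wavelet family, threshold rule, and correlation-based alignment are illustrative assumptions, not the authors' UWAF implementation.

```python
# Minimal sketch of wavelet-domain denoising and alignment for side-channel
# traces, in the spirit of the pipeline described above (not UWAF itself).
# The wavelet family, decomposition level, and threshold rule are assumptions.
import numpy as np
import pywt

def wavelet_denoise(trace, wavelet="db4", level=4):
    """Soft-threshold detail coefficients and reconstruct the trace."""
    coeffs = pywt.wavedec(trace, wavelet, level=level)
    # Universal threshold estimated from the finest-scale coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(trace)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(trace)]

def align_to_reference(trace, reference):
    """Shift a trace so it best matches the reference (cross-correlation lag)."""
    lag = np.argmax(np.correlate(trace, reference, mode="full")) - (len(reference) - 1)
    return np.roll(trace, -lag)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = np.sin(np.linspace(0, 8 * np.pi, 1024))
    noisy = np.roll(ref, 37) + 0.3 * rng.standard_normal(1024)   # random delay + noise
    cleaned = align_to_reference(wavelet_denoise(noisy), ref)
    print("correlation after processing:", np.corrcoef(cleaned, ref)[0, 1].round(3))
```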
Citations: 0
Juliet: A Configurable Processor for Computing on Encrypted Data
IF 3.6 CAS Tier 2 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-06-19 DOI: 10.1109/TC.2024.3416752
Charles Gouert;Dimitris Mouris;Nektarios Georgios Tsoutsos
Fully homomorphic encryption (FHE) has become progressively more viable in the years since its inception in 2009. At the same time, leveraging state-of-the-art schemes efficiently for general computation remains prohibitively difficult for the average programmer. In this work, we introduce a new design for a fully homomorphic processor, dubbed Juliet, to enable faster operations on encrypted data using the state-of-the-art TFHE and cuFHE libraries for both CPU and GPU evaluation. To improve usability, we define an expressive assembly language and instruction set architecture (ISA) judiciously designed for end-to-end encrypted computation. We demonstrate Juliet's capabilities with a broad range of realistic benchmarks, including cryptographic algorithms such as the lightweight ciphers Simon and Speck, as well as logistic regression (LR) inference and matrix multiplication.
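The ISA-style dispatch idea can be illustrated with a toy interpreter. The sketch below is hypothetical: plain Boolean operations stand in for homomorphic gate primitives, so it does not use the real TFHE or cuFHE APIs or the actual Juliet instruction set.

```python
# Toy sketch of an instruction-dispatch loop over gate primitives.
# NOT the Juliet ISA; the gate functions operate on plaintext bits purely to
# illustrate the control structure of an "encrypted" register machine.
from typing import Callable, Dict, List, Tuple

GATES: Dict[str, Callable[[int, int], int]] = {
    "AND": lambda a, b: a & b,   # stand-in for a homomorphic AND gate
    "XOR": lambda a, b: a ^ b,   # stand-in for a homomorphic XOR gate
}

def run_program(program: List[Tuple[str, int, int, int]], regs: List[int]) -> List[int]:
    """Execute (opcode, src1, src2, dst) instructions over a register file."""
    for op, src1, src2, dst in program:
        regs[dst] = GATES[op](regs[src1], regs[src2])
    return regs

if __name__ == "__main__":
    # Half adder on "encrypted" bits r0 and r1: sum -> r2, carry -> r3.
    program = [("XOR", 0, 1, 2), ("AND", 0, 1, 3)]
    print(run_program(program, [1, 1, 0, 0]))  # -> [1, 1, 0, 1]
```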
Citations: 0
Efficient Fault-Tolerant Path Embedding for 3D Torus Network Using Locally Faulty Blocks
IF 3.6 CAS Tier 2 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-06-19 DOI: 10.1109/TC.2024.3416695
Weibei Fan;Fu Xiao;Mengjie Lv;Lei Han;Shui Yu
3D tori are significant interconnection architectures for building supercomputers and parallel computing systems. Due to the rapid growth of edge faults and the crucial role of path structures in large-scale distributed systems, fault-tolerant path embedding and related issues have drawn widespread research. However, existing path embedding methods are based on traditional fault models that allow all faults to cluster around the same node, so they usually focus only on theoretical proofs and achieve fault-tolerance that is linear in the dimension $n$. In order to improve the fault-tolerance of the 3D torus, we first propose a novel conditional fault model called the Locally Faulty Block model (LFB model). On the basis of this model, Hamiltonian paths with large-scale edge faults in the torus are investigated. We then construct a Hamiltonian path embedding algorithm, HP-LFB, for the torus with $O(N)$ complexity under the LFB model, where $N$ is the number of nodes in the torus. Furthermore, we present an adaptive routing algorithm, HoeFA, which is based on a distance-vector method to limit the use of virtual channels (VCs). We also compare with state-of-the-art schemes, showing that our scheme improves on their results. The experiments indicate that HP-LFB can sustain the gradual degradation of the success rate of establishing Hamiltonian paths as the number of added faulty edges exceeds the fault-tolerance bound.
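For readers unfamiliar with torus adjacency, a small sketch of 3D-torus wraparound neighbors and a naive fault-avoiding path search follows. It is illustrative only; it is not the HP-LFB embedding algorithm or the LFB fault model from the paper.

```python
# Sketch of 3D-torus adjacency with wraparound links and a depth-first search
# that avoids faulty edges. Illustrative only; not HP-LFB or the LFB model.
def neighbors(node, dims):
    """The 6 neighbors of a node in a 3D torus of size dims = (X, Y, Z)."""
    x, y, z = node
    X, Y, Z = dims
    return [((x + 1) % X, y, z), ((x - 1) % X, y, z),
            (x, (y + 1) % Y, z), (x, (y - 1) % Y, z),
            (x, y, (z + 1) % Z), (x, y, (z - 1) % Z)]

def fault_free_path(src, dst, dims, faulty_edges, limit=10**5):
    """Depth-first search for any path from src to dst avoiding faulty edges."""
    stack, seen, steps = [(src, [src])], {src}, 0
    while stack and steps < limit:
        node, path = stack.pop()
        if node == dst:
            return path
        for nxt in neighbors(node, dims):
            if nxt not in seen and frozenset((node, nxt)) not in faulty_edges:
                seen.add(nxt)
                stack.append((nxt, path + [nxt]))
        steps += 1
    return None

if __name__ == "__main__":
    dims = (4, 4, 4)
    faults = {frozenset(((0, 0, 0), (1, 0, 0)))}   # one faulty edge
    print(fault_free_path((0, 0, 0), (3, 3, 3), dims, faults))
```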
Citations: 0
A Combined Trend Virtual Machine Consolidation Strategy for Cloud Data Centers
IF 3.6 CAS Tier 2 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-06-19 DOI: 10.1109/TC.2024.3416734
Yuxuan Chen;Zhen Zhang;Yuhui Deng;Geyong Min;Lin Cui
Virtual machine (VM) consolidation strategies are widely used in cloud data centers (CDCs) to optimize resource utilization and reduce total energy consumption. Although existing strategies consider current and future resource utilization, the impact of sudden bursts in historical resource utilization on hosts during uncertain future periods has been underestimated. Insufficient analysis of historical resource utilization may increase the risk of host overloading and Service Level Agreement Violation (SLAV). By defining historical and future trends based on resource utilization, we propose a novel combined trend VM consolidation (CTVMC) strategy which can effectively reduce energy consumption and SLAV. The VMs with the largest combined trend are selected for migration to prevent host overloading. Based on temporal locality and prediction techniques, CTVMC then employs past, present, and future resource utilization to filter candidate hosts, and identifies the most complementary host on which to place a VM using combined trends. We conduct extensive simulation experiments with the PlanetLab Trace and the Google Cluster Trace in the CloudSim simulator. Compared with well-known strategies, the CTVMC strategy on the PlanetLab Trace reduces the number of migrations by over 72.39%, SLAV by over 75.85%, and ESV (a combined metric that captures the trade-off between energy consumption and SLAV) by over 81.54%. On the Google Cluster Trace, our strategy reduces the number of migrations by over 61.51%, SLAV by over 37.37%, and ESV by over 35.30%.
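A minimal sketch of how a "combined trend" could be computed and used to pick a migration candidate is shown below. The trend definition (long-term slope plus recent slope as a crude stand-in for a forecast) is an assumption for illustration, not the paper's exact formula.

```python
# Sketch of ranking VMs by a combined historical/predicted utilization trend.
# The trend definition is an illustrative assumption, not CTVMC's formula.
import numpy as np

def combined_trend(history, recent=3):
    """Long-term slope over the full history plus short-term slope over the
    most recent `recent` samples (a crude stand-in for a forecast)."""
    t = np.arange(len(history))
    long_term = np.polyfit(t, history, 1)[0]
    short_term = np.polyfit(np.arange(recent), history[-recent:], 1)[0]
    return long_term + short_term

def pick_vm_to_migrate(vm_histories):
    """Select the VM with the largest combined trend on an overloaded host."""
    return max(vm_histories, key=lambda vm: combined_trend(vm_histories[vm]))

if __name__ == "__main__":
    usage = {
        "vm-a": np.array([0.30, 0.32, 0.35, 0.41, 0.48]),  # rising CPU utilization
        "vm-b": np.array([0.60, 0.58, 0.57, 0.55, 0.54]),  # declining
    }
    print(pick_vm_to_migrate(usage))  # -> vm-a
```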
Citations: 0
Enabling Reliable Memory-Mapped I/O With Auto-Snapshot for Persistent Memory Systems
IF 3.6 CAS Tier 2 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-06-19 DOI: 10.1109/TC.2024.3416683
Bo Ding;Wei Tong;Yu Hua;Zhangyu Chen;Xueliang Wei;Dan Feng
Persistent memory (PM) promises to be a next-generation storage device with better I/O performance. Since the traditional I/O path is too lengthy to drive PM, which features low latency and high bandwidth, prior works proposed memory-mapped I/O (MMIO) to shorten the I/O path to PM. However, native MMIO directly maps files into the user address space, which puts files at risk of being corrupted by scribbles and non-atomic I/O interfaces, causing serious reliability issues. To address these issues, we propose RMMIO, an efficient user-space library that provides reliable MMIO for PM systems. RMMIO provides atomic I/O interfaces and lightweight snapshots to ensure the reliability of MMIO. Compared with existing schemes, RMMIO mitigates the additional writes and extra software overheads caused by reliability guarantees, thus achieving MMIO-like performance. In addition, we propose an automatic snapshot mechanism with efficient memory management for RMMIO to minimize data loss incurred by reliability issues. Microbenchmark results show that RMMIO achieves 8.49x and 2.31x higher throughput than ext4-DAX and the state-of-the-art MMIO-based scheme, respectively, while ensuring data reliability. A real-world application accelerated by RMMIO achieves up to 7.06x higher throughput than on ext4-DAX.
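The snapshot-before-update idea behind reliable MMIO can be sketched with an ordinary memory-mapped file. The code below is a rough illustration on a regular file and does not model PM hardware, cache-line flushes, or RMMIO's actual interfaces; the file names are placeholders.

```python
# Sketch of "snapshot before update" over a memory-mapped file: copy the
# affected region before overwriting it so a crash mid-write can be rolled
# back from the snapshot. Not RMMIO; plain mmap on a regular file.
import mmap
import os

FILE, SNAPSHOT = "data.bin", "data.bin.snap"   # hypothetical paths

def snapshot_write(offset, payload: bytes):
    with open(FILE, "r+b") as f, mmap.mmap(f.fileno(), 0) as mm:
        old = bytes(mm[offset:offset + len(payload)])
        with open(SNAPSHOT, "wb") as s:          # lightweight undo record
            s.write(offset.to_bytes(8, "little") + old)
            s.flush()
            os.fsync(s.fileno())
        mm[offset:offset + len(payload)] = payload
        mm.flush()                               # persist the mapped update
    os.remove(SNAPSHOT)                          # commit: snapshot no longer needed

if __name__ == "__main__":
    with open(FILE, "wb") as f:
        f.write(b"\x00" * 4096)
    snapshot_write(128, b"hello persistent world")
    with open(FILE, "rb") as f:
        f.seek(128)
        print(f.read(22))
```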
Citations: 0
Highly Evasive Targeted Bit-Trojan on Deep Neural Networks
IF 3.6 CAS Tier 2 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-06-19 DOI: 10.1109/TC.2024.3416705
Lingxin Jin;Wei Jiang;Jinyu Zhan;Xiangyu Wen
Bit-Trojan attacks based on Bit-Flip Attacks (BFAs) have emerged as severe threats to Deep Neural Networks (DNNs) deployed in safety-critical systems, since they can inject Trojans during the model deployment stage without accessing training supply chains. Existing works are mainly devoted to improving the executability of Bit-Trojan attacks while largely ignoring evasiveness. In this paper, we propose a highly Evasive Targeted Bit-Trojan (ETBT) that improves evasiveness from three aspects: reducing the number of bit-flips (improving executability), smoothing the activation distribution, and reducing accuracy fluctuation. Specifically, key neuron extraction is utilized to identify essential neurons from DNNs precisely and decouple the key neurons between different classes, thus improving evasiveness with respect to accuracy fluctuation and executability. Additionally, activation-constrained trigger generation is devised to eliminate the differences between the activation distributions of Trojaned and clean models, which enhances evasiveness from the perspective of activation distribution. Finally, a constrained target-bit search strategy is designed to reduce the number of bit-flips, which directly benefits the evasiveness of ETBT. Benchmark-based experiments are conducted to evaluate the superiority of ETBT. Compared with existing works, ETBT significantly improves evasiveness-related performance with much lower computation overhead, better robustness, and better generalizability. Our code is released at https://github.com/bluefier/ETBT.
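As a concrete anchor for what a single bit flip does, the following sketch flips one bit of an 8-bit quantized weight. It shows only the underlying arithmetic; none of ETBT's neuron selection, trigger generation, or bit-search logic is reproduced here.

```python
# Tiny illustration of the bit-flip primitive that Bit-Trojan attacks build on:
# flipping one bit of an int8 quantized weight and observing the value change.
import numpy as np

def flip_bit(weight_int8: np.int8, bit: int) -> np.int8:
    """Flip bit `bit` (0 = LSB) in the two's-complement representation."""
    as_u8 = np.uint8(weight_int8)             # reinterpret the 8 bits as unsigned
    flipped = np.uint8(as_u8 ^ (1 << bit))    # XOR toggles the chosen bit
    return flipped.astype(np.int8)

if __name__ == "__main__":
    w = np.int8(23)                           # 0b00010111
    for b in (0, 5, 7):
        print(f"flip bit {b}: {w} -> {flip_bit(w, b)}")
    # Flipping the sign bit (bit 7) swings the weight from 23 to -105,
    # which is why a handful of flips can implant a Trojan behavior.
```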
Citations: 0
Hiding in Plain Sight: Adversarial Attack via Style Transfer on Image Borders
IF 3.6 CAS Tier 2 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-06-19 DOI: 10.1109/TC.2024.3416761
Haiyan Zhang;Xinghua Li;Jiawei Tang;Chunlei Peng;Yunwei Wang;Ning Zhang;Yingbin Miao;Ximeng Liu;Kim-Kwang Raymond Choo
Deep Convolutional Neural Networks (CNNs) have become the cornerstone of image classification, but the emergence of adversarial image attacks brings serious security risks to CNN-based applications. As a local perturbation attack, the border attack can achieve high success rates by modifying only the pixels around the border of an image, which is a novel attack perspective. However, existing border attacks fall short in stealthiness and are easily detected. In this article, we propose a novel stealthy border attack method based on deep feature alignment. Specifically, we propose a deep feature alignment algorithm based on style transfer to guarantee the stealthiness of adversarial borders. The algorithm takes the deep feature difference between the adversarial and the original borders as the stealthiness loss and thus ensures good stealthiness of the generated adversarial images. To simultaneously ensure high attack success rates, we apply cross entropy to design the targeted attack loss and use margin loss together with Leaky ReLU to design the untargeted attack loss. Experiments show that the structural similarity between the generated adversarial images and the original images is 8.8% higher than that of the state-of-the-art border attack method, indicating that our adversarial images have better stealthiness. At the same time, the success rate of our attack against defense methods is much higher, about four times that of the state-of-the-art border attack under adversarial training defense.
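A minimal sketch of constraining a perturbation to the image border is given below. The mask width and the random perturbation are placeholders, and the style-transfer and attack losses from the paper are not modeled.

```python
# Sketch of restricting an adversarial perturbation to an image border: build
# a mask that is 1 only in a frame of `width` pixels and apply the perturbation
# through it. The perturbation here is random noise, purely for illustration.
import numpy as np

def border_mask(h, w, width=4):
    mask = np.zeros((h, w), dtype=np.float32)
    mask[:width, :] = mask[-width:, :] = 1.0
    mask[:, :width] = mask[:, -width:] = 1.0
    return mask

def apply_border_perturbation(image, perturbation, width=4, eps=8 / 255):
    mask = border_mask(*image.shape[:2], width)[..., None]   # broadcast over channels
    adv = image + mask * np.clip(perturbation, -eps, eps)
    return np.clip(adv, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((32, 32, 3), dtype=np.float32)
    delta = rng.normal(0, 0.05, img.shape).astype(np.float32)
    adv = apply_border_perturbation(img, delta)
    changed = np.abs(adv - img).sum(axis=-1) > 0
    print("fraction of pixels modified:", changed.mean().round(3))  # only the border frame
```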
Citations: 0
CBANA: A Lightweight, Efficient, and Flexible Cache Behavior Analysis Framework
IF 3.6 CAS Tier 2 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-06-19 DOI: 10.1109/TC.2024.3416747
Qilin Hu;Yan Ding;Chubo Liu;Keqin Li;Kenli Li;Albert Y. Zomaya
Cache miss analysis has become one of the most important means of improving the execution performance of a program. Generally, approaches for analyzing cache misses can be categorized into dynamic analysis and static analysis. The former collects sampling statistics during program execution but is limited by specialized hardware support and incurs expensive execution overhead. The latter avoids these limitations but faces two challenges: inaccurate execution path prediction and inefficient analysis resulting from the explosion of the program state graph. To overcome these challenges, we propose CBANA, an LLVM- and process-address-space-based lightweight, efficient, and flexible cache behavior analysis framework. CBANA significantly improves the prediction accuracy of the execution path with awareness of inputs. To improve analysis efficiency and exploit program preprocessing, CBANA refactors loop structures to reduce the search space and dynamically splices intermediate results to reduce unnecessary or redundant computations. CBANA also supports configurable hardware parameter settings and decouples the cache replacement policy module from other modules, which establishes its flexibility. We evaluate CBANA using the popular open benchmark PolyBench, graph workloads, and our synthetic workloads with good and poor data locality. Compared with the popular dynamic cache analysis tools Perf and Valgrind, the cache miss gap is less than 3.79% and 2.74%, respectively, with over ten thousand data accesses for the synthetic workloads, and the time reduction is up to 92.38% and 97.51% for the multiple-path workloads. Compared with the popular static cache analysis tool Heptane, CBANA achieves a time reduction of 97.71% while maintaining accuracy.
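To illustrate the kind of cache behavior such a framework reasons about, here is a compact set-associative LRU cache model with configurable geometry, mirroring the configurable-hardware-parameters point above. It is a plain simulator, not CBANA's static-analysis machinery.

```python
# Compact set-associative LRU cache model that counts hits and misses for an
# address stream. Geometry (sets, ways, line size) is configurable.
from collections import OrderedDict

class SetAssociativeCache:
    def __init__(self, sets=64, ways=8, line_size=64):
        self.sets, self.ways, self.line_size = sets, ways, line_size
        self.data = [OrderedDict() for _ in range(sets)]   # per-set tag store in LRU order
        self.hits = self.misses = 0

    def access(self, address: int):
        line = address // self.line_size
        idx, tag = line % self.sets, line // self.sets
        s = self.data[idx]
        if tag in s:
            s.move_to_end(tag)                 # refresh LRU position
            self.hits += 1
        else:
            self.misses += 1
            if len(s) >= self.ways:
                s.popitem(last=False)          # evict the least recently used line
            s[tag] = None

if __name__ == "__main__":
    cache = SetAssociativeCache()
    for addr in range(0, 1 << 20, 64):         # streaming access pattern, no reuse
        cache.access(addr)
    print("hits:", cache.hits, "misses:", cache.misses)
```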
Citations: 0
Quantum Support Vector Machine for Classifying Noisy Data
IF 3.6 CAS Tier 2 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-06-19 DOI: 10.1109/TC.2024.3416619
Jiaye Li;Yangding Li;Jiagang Song;Jian Zhang;Shichao Zhang
Noisy data is ubiquitous in quantum computers, greatly affecting the performance of various algorithms. However, existing quantum support vector machine models are not equipped with anti-noise ability and often deliver low performance when learning accurate hyperplane normal vectors from noisy data. To address this issue, an anti-noise quantum support vector machine algorithm is developed in this paper. Specifically, a weight factor is first embedded into the hinge loss to construct the objective function of the anti-noise support vector machine. Then, an alternating iterative optimization strategy and a quantum circuit are designed to solve the objective function, aiming to obtain the normal vector and intercept of the hyperplane that finally divides the data. Finally, the classification and anti-noise effects of the algorithm are verified on an artificial dataset and a public dataset. Experimental results show that the proposed algorithm is efficient and maintains stable accuracy on noisy data.
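The weighted hinge loss idea can be sketched classically: each sample carries a weight so that suspected noisy points contribute less to the SVM objective. In the code below, plain subgradient descent stands in for the paper's quantum circuit solver, and the weighting rule is an assumption for illustration.

```python
# Classical sketch of a sample-weighted hinge-loss SVM trained by subgradient
# descent. The per-sample weights model down-weighting of suspected noisy data;
# the quantum solver from the paper is not reproduced.
import numpy as np

def weighted_hinge_objective(w, b, X, y, sample_w, C=1.0):
    margins = 1.0 - y * (X @ w + b)
    return 0.5 * w @ w + C * np.sum(sample_w * np.maximum(0.0, margins))

def train(X, y, sample_w, C=1.0, lr=0.01, epochs=200):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        margins = 1.0 - y * (X @ w + b)
        active = (margins > 0).astype(float) * sample_w   # weighted hinge subgradient
        grad_w = w - C * (active * y) @ X
        grad_b = -C * np.sum(active * y)
        w, b = w - lr * grad_w, b - lr * grad_b
    return w, b

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 2)) + np.array([2.0, 0.0]) * rng.choice([-1, 1], 200)[:, None]
    y = np.sign(X[:, 0])
    weights = np.ones(200)            # down-weight suspected noisy samples here
    w, b = train(X, y, weights)
    print("train accuracy:", (np.sign(X @ w + b) == y).mean())
    print("objective:", round(weighted_hinge_objective(w, b, X, y, weights), 2))
```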
Citations: 0
ISSA: Architecting CNN Accelerators Using Input-Skippable, Set-Associative Computing-in-Memory
IF 3.6 CAS Tier 2 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-06-04 DOI: 10.1109/TC.2024.3404060
Yun-Chen Lo;Jun-Shen Wu;Chia-Chun Wang;Yu-Chih Tsai;Chih-Chen Yeh;Wen-Chien Ting;Ren-Shuo Liu
Among several emerging architectures, computing in memory (CIM), which features in-situ analog computation, is a potential solution to the data movement bottleneck of the Von Neumann architecture for artificial intelligence (AI). Interestingly, further strengths of CIM that are quite different from in-situ analog computation are not yet widely known. In this work, we point out that mutually stationary vectors (MSVs), which can be maximized by introducing associativity to CIM, are another inherent power unique to CIM. Through MSVs, CIM gains significant freedom to dynamically vectorize the stored data (e.g., weights) and perform agile computation using the dynamically formed vectors. We have designed and realized an SA-CIM silicon prototype and the corresponding architecture and acceleration schemes in the TSMC 28 nm process. More specifically, the contributions of this paper are fivefold: 1) We identify MSVs as new features that can be exploited to address the current performance and energy challenges of CIM-based hardware. 2) We propose SA-CIM to enhance MSVs (input-reordering flexibility) for skipping zeros, small values, and sparse vectors. 3) We propose channel swapping to enhance the zero-skipping technique. 4) We propose a transposed systolic dataflow to efficiently conduct 3$\times$3 convolutions while exploiting input-skipping schemes. 5) We propose a design flow to search for optimal aggressive skipping scheme setups while satisfying the accuracy loss constraint. The proposed ISSA architecture improves throughput by $1.91\times$ to $2.97\times$ and energy efficiency by $2.5\times$ to $4.2\times$.
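The input-skipping intuition can be shown with a scalar-level dot product that skips zero (or near-zero) inputs. The sketch below ignores the set-associative CIM macro, channel swapping, and systolic dataflow; it only demonstrates why sparse activations reduce the work per output.

```python
# Scalar-level sketch of input skipping in a dot product: inputs whose
# magnitude is at or below a threshold are skipped, which is the intuition
# behind zero/small-value skipping in the accelerator described above.
import numpy as np

def skipping_dot(inputs, weights, threshold=0.0):
    """Dot product that skips inputs whose magnitude is <= threshold."""
    acc, used = 0.0, 0
    for x, w in zip(inputs, weights):
        if abs(x) <= threshold:
            continue                      # skipped: no row activation needed
        acc += x * w
        used += 1
    return acc, used

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=64) * (rng.random(64) > 0.6)   # ~60% zeros, ReLU-like sparsity
    w = rng.normal(size=64)
    approx, used = skipping_dot(x, w, threshold=0.0)
    print("rows activated:", used, "of", len(x), "| value matches:", np.isclose(approx, x @ w))
```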
Citations: 0