
IEEE Transactions on Emerging Topics in Computing: Latest Publications

Analyzing Wet-Neuromorphic Computing Using Bacterial Gene Regulatory Neural Networks
IF 5.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-03-04 | DOI: 10.1109/TETC.2025.3546119
Samitha Somathilaka;Sasitharan Balasubramaniam;Daniel P. Martins
Biocomputing envisions the development of computing paradigms using biological systems, ranging from micron-level components to collections of cells, including organoids. This paradigm shift exploits hidden natural computing properties to develop miniaturized wet-computing devices that can be deployed in harsh environments and to explore designs of novel energy-efficient systems. In parallel, we witness the emergence of AI hardware, including neuromorphic processors, with the aim of improving computational capacity. This study brings together the concepts of biocomputing and neuromorphic systems by focusing on bacterial gene regulatory networks and their transformation into Gene Regulatory Neural Networks (GRNNs). We explore the intrinsic properties of gene regulation, map them to a gene-perceptron function, and propose an application-specific sub-GRNN search algorithm that maps the network structure to match a computing problem. Focusing on the model organism Escherichia coli, the base-GRNN is initially extracted and validated for accuracy. Subsequently, a comprehensive feasibility analysis of the derived GRNN confirms its computational prowess in classification and regression tasks. Furthermore, we discuss the possibility of performing a well-known digit classification task as a use case. Our analysis and simulation experiments show promising results in offloading computation tasks to GRNNs in bacterial cells, advancing wet-neuromorphic computing using natural cells.
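The abstract does not spell out the gene-perceptron function, but bacterial gene expression is commonly modeled with Hill-type response curves. Below is a minimal Python sketch of what such a unit could look like; the Hill parameters, weights, and basal term are illustrative assumptions, not values from the paper.

```python
import numpy as np

def hill_activation(x, k=0.5, n=2.0):
    """Hill-type response curve, a standard model of saturating gene
    expression; k is the half-maximal drive and n the cooperativity."""
    x = max(float(x), 0.0)            # transcriptional drive is non-negative
    return x**n / (k**n + x**n)

def gene_perceptron(tf_levels, weights, basal=0.05):
    """Hypothetical gene-perceptron: transcription-factor levels are
    combined with signed regulatory weights (activation > 0,
    repression < 0) plus a basal expression term, then squashed by a
    Hill curve instead of a sigmoid."""
    drive = basal + np.dot(weights, tf_levels)
    return hill_activation(drive)

# Toy usage: two activators and one repressor regulating a single gene.
tfs = np.array([0.8, 0.4, 0.6])      # normalized TF concentrations
w = np.array([0.9, 0.5, -0.7])       # regulatory weights
print(f"expression level: {gene_perceptron(tfs, w):.3f}")
```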
{"title":"Analyzing Wet-Neuromorphic Computing Using Bacterial Gene Regulatory Neural Networks","authors":"Samitha Somathilaka;Sasitharan Balasubramaniam;Daniel P. Martins","doi":"10.1109/TETC.2025.3546119","DOIUrl":"https://doi.org/10.1109/TETC.2025.3546119","url":null,"abstract":"Biocomputing envisions the development computing paradigms using biological systems, ranging from micron-level components to collections of cells, including organoids. This paradigm shift exploits hidden natural computing properties, to develop miniaturized wet-computing devices that can be deployed in harsh environments, and to explore designs of novel energy-efficient systems. In parallel, we witness the emergence of AI hardware, including neuromorphic processors with the aim of improving computational capacity. This study brings together the concept of biocomputing and neuromorphic systems by focusing on the bacterial gene regulatory networks and their transformation into Gene Regulatory Neural Networks (GRNNs). We explore the intrinsic properties of gene regulations, map this to a gene-perceptron function, and propose an application-specific sub-GRNN search algorithm that maps the network structure to match a computing problem. Focusing on the model organism Escherichia coli, the base-GRNN is initially extracted and validated for accuracy. Subsequently, a comprehensive feasibility analysis of the derived GRNN confirms its computational prowess in classification and regression tasks. Furthermore, we discuss the possibility of performing a well-known digit classification task as a use case. Our analysis and simulation experiments show promising results in the offloading of computation tasks to GRNN in bacterial cells, advancing wet-neuromorphic computing using natural cells.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 3","pages":"902-918"},"PeriodicalIF":5.4,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145057434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
LionHeart: A Layer-Based Mapping Framework for Heterogeneous Systems With Analog In-Memory Computing Tiles
IF 5.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-03-04 | DOI: 10.1109/TETC.2025.3546128
Corey Lammie;Yuxuan Wang;Flavio Ponzina;Joshua Klein;Hadjer Benmeziane;Marina Zapater;Irem Boybat;Abu Sebastian;Giovanni Ansaloni;David Atienza
When arranged in a crossbar configuration, resistive memory devices can be used to execute Matrix-Vector Multiplications (MVMs), the dominant operation of many Machine Learning (ML) algorithms, in constant time complexity. Nonetheless, when performing computations in the analog domain, novel challenges are introduced in terms of arithmetic precision and stochasticity, due to non-ideal circuit and device behaviour. Moreover, these non-idealities have a temporal dimension, resulting in degrading application accuracy over time. Facing these challenges, we propose a novel framework, named LionHeart, to obtain hybrid analog-digital mappings for executing Deep Learning (DL) inference workloads on heterogeneous accelerators. The accuracy-constrained mappings derived by LionHeart showcase, across different Convolutional Neural Networks (CNNs) and one transformer-based network, high accuracy and potential for speedup. The results of full-system simulations highlight run-time reductions and energy efficiency gains that exceed 6× while meeting a user-defined accuracy threshold relative to a fully digital floating-point implementation.
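To make the non-ideality problem concrete, here is a toy NumPy model of an analog crossbar MVM with additive programming noise and a slow conductance drift that degrades results over time; noise_std and drift_rate are invented parameters, and this is not LionHeart's simulator.

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_mvm(weights, x, noise_std=0.02, drift_rate=0.001, t=0.0):
    """Toy crossbar MVM: the ideal product is perturbed by additive
    programming noise and a slow multiplicative conductance drift that
    grows with elapsed time t (seconds), mimicking the temporal
    accuracy degradation the paper targets."""
    drift = 1.0 - drift_rate * np.log1p(t)
    noisy_w = weights * drift + rng.normal(0.0, noise_std, weights.shape)
    return noisy_w @ x

W = rng.normal(size=(4, 8))          # a 4x8 "analog tile"
x = rng.normal(size=8)
print("ideal :", np.round(W @ x, 3))
print("analog:", np.round(analog_mvm(W, x, t=3600.0), 3))
```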
{"title":"LionHeart: A Layer-Based Mapping Framework for Heterogeneous Systems With Analog In-Memory Computing Tiles","authors":"Corey Lammie;Yuxuan Wang;Flavio Ponzina;Joshua Klein;Hadjer Benmeziane;Marina Zapater;Irem Boybat;Abu Sebastian;Giovanni Ansaloni;David Atienza","doi":"10.1109/TETC.2025.3546128","DOIUrl":"https://doi.org/10.1109/TETC.2025.3546128","url":null,"abstract":"When arranged in a crossbar configuration, resistive memory devices can be used to execute Matrix-Vector Multiplications (MVMs), the most dominant operation of many Machine Learning (ML) algorithms, in constant time complexity. Nonetheless, when performing computations in the analog domain, novel challenges are introduced in terms of arithmetic precision and stochasticity, due to non-ideal circuit and device behaviour. Moreover, these non-idealities have a temporal dimension, resulting in a degrading application accuracy over time. Facing these challenges, we propose a novel framework, named <italic>LionHeart</i>, to obtain hybrid analog-digital mappings to execute Deep Learning (DL) inference workloads using heterogeneous accelerators. The accuracy-constrained mappings derived by <italic>LionHeart</i> showcase, across different Convolutional Neural Networks (CNNs) and one transformer-based network, high accuracy and potential for speedup. The results of the full system simulations highlight run-time reductions and energy efficiency gains that exceed 6×, with a user-defined accuracy threshold for a fully digital floating point implementation.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 4","pages":"1383-1395"},"PeriodicalIF":5.4,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Exploiting Entity Information for Robust Prediction Over Event Knowledge Graphs
IF 5.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-31 | DOI: 10.1109/TETC.2025.3534243
Han Yu;Hongming Cai;Shengtung Tsai;Mengyao Li;Pan Hu;Jiaoyan Chen;Bingqing Shen
Script event prediction is the task of predicting the subsequent event given a sequence of events that already took place. It benefits task planning and process scheduling for event-centric systems, including enterprise systems and IoT systems. Sequence-based and graph-based learning models have been applied to this task. However, when training data is limited, especially in a multi-participant enterprise environment, the performance of such models falls short of expectations, as they heavily rely on large-scale training data. To take full advantage of the given data, in this article we propose a new type of knowledge graph (KG) that models not just events but also the entities participating in them, and we design a collaborative event prediction model exploiting such KGs. Our model identifies semantically similar vertices as collaborators to resolve unknown events, applies gated graph neural networks to extract event-wise sequential features, and exploits a heterogeneous attention network to cope with entity-wise influence in event sequences. To verify the effectiveness of our approach, we designed multiple-choice narrative cloze tasks with inadequate knowledge. Our experimental evaluation with three datasets generated from well-known corpora shows that our method successfully defends against such incompleteness of data and outperforms the state-of-the-art approaches for event prediction.
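As a rough sketch of the collaborator-identification step, the code below selects the k vertices most cosine-similar to a query vertex; in the paper these embeddings are learned by the model, whereas here they are random placeholders.

```python
import numpy as np

def top_k_collaborators(embeddings, query_idx, k=3):
    """Return the k vertices whose embeddings are most cosine-similar
    to the query vertex, excluding the vertex itself."""
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = emb @ emb[query_idx]
    sims[query_idx] = -np.inf
    return np.argsort(sims)[::-1][:k]

rng = np.random.default_rng(1)
E = rng.normal(size=(10, 16))        # 10 vertices, 16-dim embeddings
print("collaborators of vertex 0:", top_k_collaborators(E, 0))
```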
{"title":"Exploiting Entity Information for Robust Prediction Over Event Knowledge Graphs","authors":"Han Yu;Hongming Cai;Shengtung Tsai;Mengyao Li;Pan Hu;Jiaoyan Chen;Bingqing Shen","doi":"10.1109/TETC.2025.3534243","DOIUrl":"https://doi.org/10.1109/TETC.2025.3534243","url":null,"abstract":"Script event prediction is the task of predicting the subsequent event given a sequence of events that already took place. It benefits task planning and process scheduling for event-centric systems including enterprise systems, IoT systems, etc. Sequence-based and graph-based learning models have been applied to this task. However, when learning data is limited, especially in a multiple-participant-involved enterprise environment, the performance of such models falls short of expectations as they heavily rely on large-scale training data. To take full advantage of given data, in this article we propose a new type of knowledge graph (KG) that models not just events but also entities participating in the events, and we design a collaborative event prediction model exploiting such KGs. Our model identifies semantically similar vertices as collaborators to resolve unknown events, applies gated graph neural networks to extract event-wise sequential features, and exploits a heterogeneous attention network to cope with entity-wise influence in event sequences. To verify the effectiveness of our approach, we designed multiple-choice narrative cloze tasks with inadequate knowledge. Our experimental evaluation with three datasets generated from well-known corpora shows our method can successfully defend against such incompleteness of data and outperforms the state-of-the-art approaches for event prediction.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 3","pages":"890-901"},"PeriodicalIF":5.4,"publicationDate":"2025-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145057431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SiPT: Signature-Based Predictive Testing of RRAM Crossbar Arrays for Deep Neural Networks
IF 5.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-30 | DOI: 10.1109/TETC.2025.3533895
Kwondo Ma;Anurup Saha;Chandramouli Amarnath;Abhijit Chatterjee
Resistive Random-Access Memory (RRAM) crossbar array-based Deep Neural Networks (DNNs) are increasingly attractive for implementing ultra-low-power computing for AI. However, RRAM-based DNNs face inherent challenges from manufacturing process variability, which can compromise their performance (classification accuracy) and functional safety. One way to test these DNNs is to apply the exhaustive set of test images to each DNN to ascertain its performance; however, this is expensive and time-consuming. We propose signature-based predictive testing (SiPT), in which a small subset of test images is applied to each DNN and the classification accuracy of the DNN is predicted directly from observations of the intermediate- and final-layer outputs of the network. This reduces test cost while allowing binning of RRAM-based DNNs by performance. To further improve the test efficiency of SiPT, we create an optimized compact set of test images, leveraging image filters and enhancements to synthesize images, and develop a cascaded test structure incorporating multiple sets of SiPT modules trained on compact test subsets of varying sizes. Through experimentation across diverse test cases, we demonstrate the viability of our SiPT framework under RRAM process variations, showing test-efficiency improvements of up to 48× over testing with the exhaustive image dataset.
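The core idea, predicting full-test-set accuracy from a cheap per-device signature, can be illustrated with a simple ridge regression; the signature features and device data below are synthetic stand-ins, not SiPT's actual signature design.

```python
import numpy as np

def fit_accuracy_predictor(signatures, accuracies, lam=1e-2):
    """Ridge regression from per-device signatures (features derived
    from layer outputs on a small probe set) to full-test-set accuracy."""
    X = np.hstack([signatures, np.ones((signatures.shape[0], 1))])
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                           X.T @ accuracies)

def predict_accuracy(w, signature):
    return float(np.append(signature, 1.0) @ w)

# Synthetic stand-in data: 50 simulated devices, 8 signature features.
rng = np.random.default_rng(2)
sig = rng.normal(size=(50, 8))
acc = 0.9 - 0.05 * np.abs(sig[:, 0]) + rng.normal(0.0, 0.01, 50)
w = fit_accuracy_predictor(sig, acc)
print(f"predicted {predict_accuracy(w, sig[0]):.3f} vs true {acc[0]:.3f}")
```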
{"title":"SiPT: Signature-Based Predictive Testing of RRAM Crossbar Arrays for Deep Neural Networks","authors":"Kwondo Ma;Anurup Saha;Chandramouli Amarnath;Abhijit Chatterjee","doi":"10.1109/TETC.2025.3533895","DOIUrl":"https://doi.org/10.1109/TETC.2025.3533895","url":null,"abstract":"Resistive Random-Access Memory (RRAM) crossbar array-based Deep Neural Networks (DNNs) are increasingly attractive for implementing ultra-low-power computing for AI. However, RRAM-based DNNs face inherent challenges from manufacturing process variability, which can compromise their performance (classification accuracy) and functional safety. One way to test these DNNs is to apply the exhaustive set of test images to each DNN to ascertain its performance; however, this is expensive and time-consuming. We propose a signature-based predictive testing (SiPT) in which a small subset of test images is applied to each DNN and the classification accuracy of the DNN is predicted directly from observations of the intermediate and final layer outputs of the network. This saves the test cost while allowing binning of RRAM-based DNNs for performance. To further improve the test efficiency of SiPT, we create the optimized compact set of test images, leveraging image filters and enhancements to synthesize images and develop a cascaded test structure, incorporating multiple sets of SiPT modules trained on compact test subsets of varying sizes. Through experimentation across diverse test cases, we demonstrate the viability of our SiPT framework under the RRAM process variations, showing test efficiency improvements up to 48X over testing with the exhaustive image dataset.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 4","pages":"1465-1480"},"PeriodicalIF":5.4,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards Label-Efficient Deep Learning-Based Aging-Related Bug Prediction With Spiking Convolutional Neural Networks
IF 5.1 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-24 | DOI: 10.1109/TETC.2025.3531051
Yunzhe Tian;Yike Li;Kang Chen;Zhenguo Zhang;Endong Tong;Jiqiang Liu;Fangyun Qin;Zheng Zheng;Wenjia Niu
Recent advances in Deep Learning (DL) have enhanced Aging-Related Bug (ARB) prediction for mitigating software aging. However, DL-based ARB prediction models face a dual challenge: overcoming overfitting to enhance generalization and managing the high labeling costs associated with extensive data requirements. To address the first issue, we utilize the sparse and binary nature of spiking communication in Spiking Neural Networks (SNNs), which inherently provides brain-inspired regularization that effectively alleviates overfitting. We therefore propose a Spiking Convolutional Neural Network (SCNN)-based ARB prediction model along with a training framework that handles the model's spatial-temporal dynamics and non-differentiable nature. To reduce labeling costs, we introduce a Bio-inspired and Diversity-aware Active Learning framework (BiDAL), which prioritizes highly informative and diverse samples, enabling more efficient use of a limited labeling budget. This framework incorporates bio-inspired uncertainty to enhance informativeness measurement and uses a clustering-based, diversity-aware selection strategy to prevent redundant labeling. Experiments on three ARB datasets show that ARB-SCNN effectively reduces overfitting, improving generalization performance by 6.65% over other DL-based classifiers. Additionally, BiDAL boosts label efficiency for ARB-SCNN training, outperforming four state-of-the-art active learning methods by 4.77% within limited labeling budgets.
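The "non-differentiable nature" mentioned above refers to the spike step function. A standard workaround, not necessarily the paper's exact choice, is to run exact spiking dynamics forward and substitute a smooth surrogate derivative during backpropagation, as in this minimal sketch.

```python
def lif_forward(inputs, tau=2.0, v_th=1.0):
    """Leaky integrate-and-fire dynamics over a list of input currents.
    The spike step is a non-differentiable Heaviside function; the
    membrane potential is hard-reset after each spike."""
    v, spikes = 0.0, []
    for x in inputs:
        v = v + (x - v) / tau            # leaky integration
        s = 1 if v >= v_th else 0        # non-differentiable spike
        v = v * (1 - s)                  # hard reset
        spikes.append(s)
    return spikes

def surrogate_grad(v, v_th=1.0, alpha=2.0):
    """Smooth stand-in for dS/dV used during backpropagation: the
    derivative of a fast sigmoid, peaked at the firing threshold."""
    return alpha / (2.0 * (1.0 + alpha * abs(v - v_th)) ** 2)

print(lif_forward([0.6, 0.9, 1.2, 0.1, 1.5]))          # -> [0, 0, 0, 0, 1]
print(f"surrogate dS/dV near threshold: {surrogate_grad(0.95):.3f}")
```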
{"title":"Towards Label-Efficient Deep Learning-Based Aging-Related Bug Prediction With Spiking Convolutional Neural Networks","authors":"Yunzhe Tian;Yike Li;Kang Chen;Zhenguo Zhang;Endong Tong;Jiqiang Liu;Fangyun Qin;Zheng Zheng;Wenjia Niu","doi":"10.1109/TETC.2025.3531051","DOIUrl":"https://doi.org/10.1109/TETC.2025.3531051","url":null,"abstract":"Recent advances in Deep Learning (DL) have enhanced Aging-Related Bug (ARB) prediction for mitigating software aging. However, DL-based ARB prediction models face a dual challenge: overcoming overfitting to enhance generalization and managing the high labeling costs associated with extensive data requirements. To address the first issue, we utilize the sparse and binary nature of spiking communication in Spiking Neural Networks (SNNs), which inherently provides brain-inspired regularization to effectively alleviate overfitting. Therefore, we propose a Spiking Convolutional Neural Network (SCNN)-based ARB prediction model along with a training framework that handles the model’s spatial-temporal dynamics and non-differentiable nature. To reduce labeling costs, we introduce a Bio-inspired and Diversity-aware Active Learning framework (BiDAL), which prioritizes highly informative and diverse samples, enabling more efficient usage of the limited labeling budget. This framework incorporates bio-inspired uncertainty to enhance informativeness measurement along with using a diversity-aware selection strategy based on clustering to prevent redundant labeling. Experiments on three ARB datasets show that ARB-SCNN effectively reduces overfitting, improving generalization performance by 6.65% over other DL-based classifiers. Additionally, BiDAL boosts label efficiency for ARB-SCNN training, outperforming four state-of-the-art active learning methods by 4.77% within limited labeling budgets.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 2","pages":"314-329"},"PeriodicalIF":5.1,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144323163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Processing In-Memory PUF Watermark Embedding With Cellular Memristor Network
IF 5.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-23 | DOI: 10.1109/TETC.2025.3528336
Alex James;Chithra Reghuvaran;Leon Chua
The cellular neural network (CNN or CeNN) is known to be useful because of its suitability for real-time processing, parallel processing, robustness, flexibility, and energy efficiency. CeNNs have a large number of interconnected processing elements, which can be programmed to produce a wide range of patterns, including regular, irregular, and random patterns. When implemented in memristive hardware, the pattern-generation ability and inherent variability of memristive devices can be exploited to create Physical Unclonable Functions (PUFs). This work reports a method of using memristive CeNNs to perform image processing tasks along with PUF image generation. The CeNN-PUF has dual-mode capability, combining data processing and encryption using PUF image watermarking. The proposed method provides unique device-specific image watermarks, following a two-stage process of (1) device-specific secret mask generation and (2) watermark embedding. The system is evaluated using multiple CeNN cloning templates, and the robustness of the method is validated against ML attacks. A detailed analysis is presented to evaluate the uniqueness, randomness, and reliability under different environmental changes. Experimental validation of the proposed model is done on an FPGA (Xilinx Zynq-7010 processor), and the system is benchmarked against quantization noise.
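The two-stage process can be sketched in miniature: stage 1 derives a binary secret mask (here from a PRNG standing in for memristor variability) and stage 2 embeds it into the least significant bits of an image. A real CeNN-PUF derives the mask from device physics and CeNN templates; the LSB embedding below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(7)       # stands in for physical device variability

def device_mask(shape):
    """Stage 1 (sketch): derive a binary secret mask from per-cell
    variations; a real PUF would read memristor conductance spreads
    rather than a PRNG."""
    return (rng.normal(size=shape) > 0.0).astype(np.uint8)

def embed_watermark(image, mask):
    """Stage 2 (sketch): write the mask into each pixel's least
    significant bit, producing a device-unique watermarked image."""
    return (image & 0xFE) | mask

img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
mask = device_mask(img.shape)
wm = embed_watermark(img, mask)
print("mask recoverable from LSBs:", np.array_equal(wm & 1, mask))
```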
{"title":"Processing In-Memory PUF Watermark Embedding With Cellular Memristor Network","authors":"Alex James;Chithra Reghuvaran;Leon Chua","doi":"10.1109/TETC.2025.3528336","DOIUrl":"https://doi.org/10.1109/TETC.2025.3528336","url":null,"abstract":"The cellular neural network (CNN or CeNN) is known to be useful because of its suitability in real-time processing, parallel processing, robustness, flexibility, and energy efficiency. CeNNs have a large number of interconnected processing elements, which can be programmed to produce a wide range of patterns, including regular and irregular patterns, random patterns, and more. When implemented in memristive hardware, the pattern generator ability and inherent variability of memristive devices can be explored to create Physical Unclonable Functions (PUFs). This work reports a method of using memristive CeNNs to perform image processing tasks along with PUF image generation. The CeNN-PUF has dual mode capability combining data processing and encryption using PUF image watermarking. The proposed method provides unique device-specific image watermarks, following a two-stage process of (1) device-specific secret mask generation and (2) watermark embedding. The system is evaluated using multiple CeNN cloning templates and the robustness of the method is validated against ML attacks. A detailed analysis is presented to evaluate the uniqueness, randomness and reliability against different environmental changes. The experimental validation of the proposed model is done on FPGA Xilinx Zynq-7010 processor and benchmarked the system against quantization noise.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 4","pages":"1453-1464"},"PeriodicalIF":5.4,"publicationDate":"2025-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10851809","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Continual Test-Time Adaptation With Weighted Contrastive Learning and Pseudo-Label Correction
IF 5.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-20 | DOI: 10.1109/TETC.2025.3528985
Shih-Chieh Chuang;Ching-Hu Lu
Real-time adaptability is often required to maintain system accuracy in scenarios involving domain shifts caused by constantly changing environments. While continual test-time adaptation has been proposed to handle such scenarios, existing methods rely on high-accuracy pseudo-labels. Moreover, contrastive learning methods for continual test-time adaptation consider aggregating features from the same class while neglecting the aggregation of similar features within the same class. Therefore, we propose “Weighted Contrastive Learning” and apply it to both pre-training and continual test-time adaptation. To address the issue of catastrophic forgetting caused by continual adaptation, previous studies have employed source-domain knowledge to stochastically recover the target-domain model. However, significant domain shifts may cause the source-domain knowledge to behave as noise, thus impacting the model's adaptability. Therefore, we propose “Domain-aware Pseudo-label Correction” to mitigate catastrophic forgetting and error accumulation without accessing the original source-domain data, while minimizing the impact on model adaptability. Thorough evaluations in our experiments demonstrate the effectiveness of our proposed approach.
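A hedged sketch of what a domain-aware pseudo-label correction could look like: model probabilities are blended with class-prototype similarities, with the prototypes weighted more heavily as the estimated domain shift grows. The blending rule and shift_weight parameter are assumptions for illustration, not the paper's formula.

```python
import numpy as np

def correct_pseudo_labels(probs, feats, prototypes, shift_weight):
    """Blend the model's class probabilities with softmax-normalized
    class-prototype similarities; shift_weight in [0, 1] grows with the
    estimated domain shift, so prototypes count for more when raw
    predictions are less trustworthy."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = np.exp(f @ p.T)
    sim /= sim.sum(axis=1, keepdims=True)
    corrected = (1.0 - shift_weight) * probs + shift_weight * sim
    return corrected.argmax(axis=1), corrected

rng = np.random.default_rng(3)
probs = rng.dirichlet(np.ones(5), size=8)       # noisy predictions, 5 classes
feats = rng.normal(size=(8, 32))                # target-domain features
protos = rng.normal(size=(5, 32))               # one prototype per class
labels, _ = correct_pseudo_labels(probs, feats, protos, shift_weight=0.4)
print("corrected pseudo-labels:", labels)
```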
{"title":"Continual Test-Time Adaptation With Weighted Contrastive Learning and Pseudo-Label Correction","authors":"Shih-Chieh Chuang;Ching-Hu Lu","doi":"10.1109/TETC.2025.3528985","DOIUrl":"https://doi.org/10.1109/TETC.2025.3528985","url":null,"abstract":"Real-time adaptability is often required to maintain system accuracy in scenarios involving domain shifts caused by constantly changing environments. While continual test-time adaptation has been proposed to handle such scenarios, existing methods rely on high-accuracy pseudo-labels. Moreover, contrastive learning methods for continuous test-time adaptation consider the aggregation of features from the same class while neglecting the problem of aggregating similar features within the same class. Therefore, we propose “Weighted Contrastive Learning” and apply it to both pre-training and continual test-time adaptation. To address the issue of catastrophic forgetting caused by continual adaptation, previous studies have employed source-domain knowledge to stochastically recover the target-domain model. However, significant domain shifts may cause the source-domain knowledge to behave as noise, thus impacting the model's adaptability. Therefore, we propose “Domain-aware Pseudo-label Correction” to mitigate catastrophic forgetting and error accumulation without accessing the original source-domain data while minimizing the impact on model adaptability. The thorough evaluations in our experiments have demonstrated the effectiveness of our proposed approach.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 3","pages":"866-877"},"PeriodicalIF":5.4,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145057433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
BCIM: Constant-Time and High-Throughput Block-Cipher-in-Memory With Massively-Parallel Bit-Serial Execution
IF 5.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-20 | DOI: 10.1109/TETC.2025.3529842
Andrew Dervay;Wenfeng Zhao
In-memory computing (IMC) emerges as one of the most promising computing technologies for data-intensive applications to ameliorate the “memory wall” bottleneck in von Neumann computer systems. Meanwhile, IMC also shows promising prospects for high-throughput and energy-efficient processing of cryptographic workloads. This paper presents Block-Cipher-In-Memory (BCIM), a constant-time, high-throughput, bit-serial in-memory cryptography scheme that supports versatile Substitution-Permutation and Feistel network based block ciphers, including the Advanced Encryption Standard (AES) as well as lightweight block ciphers such as RECTANGLE and Simon. In addition, BCIM employs a processor-assisted key-loading scheme and prudent memory management strategies to minimize the memory footprint needed for cryptographic algorithms, improving the peak operating frequency and energy efficiency. Built upon these, BCIM can also support alternative block cipher modes of operation, such as counter mode, beyond the electronic codebook. Furthermore, the bit-serial operation of BCIM inherently ensures constant-time execution and exploits column-wise single instruction multiple data (SIMD) processing, thereby providing strong resistance to side-channel timing attacks, and achieves high-throughput encryption and decryption via a massively-parallel compact round function implementation. Experimental results suggest that BCIM shows substantial performance and energy improvements over state-of-the-art bit-parallel IMC ciphers. Additionally, BCIM shows competitive performance and orders-of-magnitude energy advantages over bitsliced software implementations on MCU/CPU platforms.
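The column-wise bit-serial SIMD idea can be mimicked in software with bitslicing: one machine word holds the same bit position of many independent blocks, so every bitwise operation processes all lanes at once with no data-dependent branches, which is what makes execution constant-time. The toy round below is a hedged illustration, not BCIM's circuitry or a real cipher's round function.

```python
import secrets

LANES = 64                            # independent blocks processed at once
MASK = (1 << LANES) - 1

def toy_bitsliced_round(state, key):
    """One toy round over 4 bit-planes: each Python int carries one bit
    position of all 64 lanes, so every bitwise operator below acts on
    all lanes simultaneously and runs in constant time (no
    data-dependent branches). The nonlinear layer is illustrative and
    is not the RECTANGLE or Simon round function."""
    a, b, c, d = (s ^ k for s, k in zip(state, key))     # key mixing
    return [((a & b) ^ c) & MASK, ((b | c) ^ d) & MASK,
            ((c & d) ^ a) & MASK, ((d | a) ^ b) & MASK]

state = [secrets.randbits(LANES) for _ in range(4)]      # 4 bit-planes
key = [secrets.randbits(LANES) for _ in range(4)]
print([f"{x:016x}" for x in toy_bitsliced_round(state, key)])
```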
{"title":"BCIM: Constant-Time and High-Throughput Block-Cipher-in-Memory With Massively-Parallel Bit-Serial Execution","authors":"Andrew Dervay;Wenfeng Zhao","doi":"10.1109/TETC.2025.3529842","DOIUrl":"https://doi.org/10.1109/TETC.2025.3529842","url":null,"abstract":"In-memory computing (IMC) emerges as one of the most promising computing technologies for data-intensive applications to ameliorate the “<italic>memory wall</i>” bottleneck in von Neumann computer systems. Meanwhile, IMC also shows promising prospects towards high-throughput and energy-efficient processing of cryptographic workloads. This paper presents Block-Cipher-In-Memory (BCIM), a constant-time, high-throughput, bit-serial in-memory cryptography scheme to support versatile Substitution-Permutation and Feistel network based block ciphers, such as standard ciphers like Advanced Encryption Standard (AES), and lightweight block ciphers like RECTANGLE and S<sc>imon</small>. In addition, BCIM employs a processor-assisted key loading scheme and prudent memory management strategies to minimize the memory footprint needed for cryptographic algorithms to improve the peak operating frequency and energy efficiency. Built upon these, BCIM can also support alternative block cipher modes of operation like counter mode beyond electronic-codebook. Furthermore, the bit-serial operation of BCIM inherently ensures constant-time execution and exploits column-wise single instruction multiple data (SIMD) processing, thereby providing strong resistance to side-channel timing attacks, and achieves high-throughput encryption and decryption via massively-parallel compact round function implementation. Experimental results suggest that BCIM shows substantial performance and energy improvements over state-of-the-art bit-parallel IMC ciphers. Additionally, BCIM show competitive performance and orders of magnitude energy advantages over the bitsliced software implementations on MCU/CPU platforms.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 4","pages":"1440-1452"},"PeriodicalIF":5.4,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Pervasive Edge Computing Model for Proactive Intelligent Data Migration
IF 5.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-20 | DOI: 10.1109/TETC.2025.3528994
Georgios Boulougaris;Kostas Kolomvatsos
Currently, there is great attention from the research community on the intelligent management of data in a context-aware manner at the intersection of the Internet of Things (IoT) and Edge Computing (EC). In this article, we propose a strategy for autonomous edge nodes to decide what data should be migrated to specific locations of the infrastructure so as to support the desired processing requests. Our intention is to arm nodes with the ability to learn the access patterns of offloaded data-driven tasks and predict which data should be migrated to the original ‘owners’ of those tasks. Naturally, these tasks are linked to the processing of data that are absent at the original hosting nodes, indicating the required data assets that need to be accessed directly. To identify these data intervals, we employ an ensemble scheme that combines a statistically oriented model and a machine learning scheme. Hence, we are able not only to detect the density of the requests but also to learn and infer the ‘strong’ data assets. The proposed approach is analyzed in detail through the corresponding formulations and is evaluated and compared against baselines and models found in the respective literature.
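As a rough illustration of the ensemble idea, the sketch below combines a statistical score (recent request density) with a smoothed score (an exponential moving average over the access history) to rank which data intervals are ‘strong’ migration candidates; the weights, smoothing factor, and data are invented for illustration.

```python
import numpy as np

def migration_scores(request_counts, alpha=0.5, w_stat=0.5):
    """Rank data intervals for migration by averaging a statistical
    score (latest request density) with a smoothed score (exponential
    moving average of the whole access history)."""
    counts = np.asarray(request_counts, dtype=float)     # (intervals, steps)
    stat = counts[:, -1] / (counts[:, -1].sum() + 1e-9)
    ema = counts[:, 0]
    for t in range(1, counts.shape[1]):
        ema = alpha * counts[:, t] + (1.0 - alpha) * ema
    learned = ema / (ema.sum() + 1e-9)
    return w_stat * stat + (1.0 - w_stat) * learned

history = [[1, 2, 9, 12],            # interval 0: demand is ramping up
           [5, 4, 3, 1],             # interval 1: demand is fading
           [0, 1, 2, 2]]             # interval 2: low, steady demand
print("migration scores:", migration_scores(history).round(3))
```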
{"title":"A Pervasive Edge Computing Model for Proactive Intelligent Data Migration","authors":"Georgios Boulougaris;Kostas Kolomvatsos","doi":"10.1109/TETC.2025.3528994","DOIUrl":"https://doi.org/10.1109/TETC.2025.3528994","url":null,"abstract":"Currently, there is a great attention of the research community for the intelligent management of data in a context-aware manner at the intersection of the Internet of Things (IoT) and Edge Computing (EC). In this article, we propose a strategy to be adopted by autonomous edge nodes related to their decision on what data should be migrated to specific locations of the infrastructure and support the desired requests for processing. Our intention is to arm nodes with the ability of learning the access patterns of offloaded data-driven tasks and predict which data should be migrated to the original ‘owners’ of tasks. Naturally, these tasks are linked to the processing of data that are absent at the original hosting nodes indicating the required data assets that need to be accessed directly. To identify these data intervals, we employ an ensemble scheme that combines a statistically oriented model and a machine learning scheme. Hence, we are able not only to detect the density of the requests but also to learn and infer the ‘strong’ data assets. The proposed approach is analyzed in detail by presenting the corresponding formulations being also evaluated and compared against baselines and models found in the respective literature.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 3","pages":"878-889"},"PeriodicalIF":5.4,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145057421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Software-Defined Number Formats for High-Speed Belief Propagation
IF 5.4 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-20 | DOI: 10.1109/TETC.2025.3528972
Amir Sabbagh Molahosseini;JunKyu Lee;Hans Vandierendonck
This article presents the design and implementation of Software-Defined Floating-Point (SDF) number formats for high-speed implementation of the Belief Propagation (BP) algorithm. SDF formats are designed specifically to meet the numeric needs of the computation and are more compact representations of the data. They reduce memory footprint and memory bandwidth requirements without sacrificing accuracy, given that BP for loopy graphs inherently involves algorithmic errors. This article designs several SDF formats for sum-product BP applications through careful analysis of the computation. Our theoretical analysis leads to the design of 16-bit (half-precision) and 8-bit (mini-precision) widths. We moreover present a highly efficient software implementation of the proposed SDF formats, centered around conversion to hardware-supported single-precision arithmetic. Our solution demonstrates negligible conversion overhead on commercially available CPUs. For Ising grids with sizes from 100 × 100 to 500 × 500, the 16- and 8-bit SDF formats along with our conversion module produce accuracy equivalent to the double-precision floating-point format, but with 2.86× speedups on average on an Intel Xeon processor. In particular, increasing the grid size results in higher speed-up. For example, the proposed half-precision format with a 3-bit exponent and 13-bit mantissa achieved minimum and maximum speedups of 1.30× and 1.39× over single precision, and 2.55× and 3.40× over double precision, as the grid size increased from 100 × 100 to 500 × 500.
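The abstract specifies a 16-bit format with a 3-bit exponent and 13-bit mantissa. The sketch below implements a plausible codec for that layout, assuming no sign bit (sum-product BP messages are non-negative) and a conventional exponent bias; neither assumption is confirmed by the abstract.

```python
import numpy as np

E_BITS, M_BITS = 3, 13               # 16-bit layout named in the abstract
BIAS = (1 << (E_BITS - 1)) - 1       # conventional bias; the paper may differ

def sdf16_decode(word):
    """Decode a 16-bit SDF word (3 exponent bits, then 13 mantissa
    bits, no sign bit) to a hardware-supported float32."""
    exp = (word >> M_BITS) & ((1 << E_BITS) - 1)
    man = word & ((1 << M_BITS) - 1)
    return np.float32((1.0 + man / (1 << M_BITS)) * 2.0 ** (exp - BIAS))

def sdf16_encode(x):
    """Inverse mapping with round-to-nearest on the mantissa; no
    subnormal or overflow handling in this sketch."""
    e = int(np.clip(np.floor(np.log2(x)), -BIAS, (1 << E_BITS) - 1 - BIAS))
    m = min(int(round((x / 2.0 ** e - 1.0) * (1 << M_BITS))),
            (1 << M_BITS) - 1)
    return ((e + BIAS) << M_BITS) | m

x = 0.8125
print(sdf16_decode(sdf16_encode(x)))   # 0.8125 survives the 16-bit round trip
```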
{"title":"Software-Defined Number Formats for High-Speed Belief Propagation","authors":"Amir Sabbagh Molahosseini;JunKyu Lee;Hans Vandierendonck","doi":"10.1109/TETC.2025.3528972","DOIUrl":"https://doi.org/10.1109/TETC.2025.3528972","url":null,"abstract":"This article presents the design and implementation of Software-Defined Floating-Point (SDF) number formats for high-speed implementation of the Belief Propagation (BP) algorithm. SDF formats are designed specifically to meet the numeric needs of the computation and are more compact representations of the data. They reduce memory footprint and memory bandwidth requirements without sacrificing accuracy, given that BP for loopy graphs inherently involves algorithmic errors. This article designs several SDF formats for sum-product BP applications by careful analysis of the computation. Our theoretical analysis leads to the design of 16-bit (half-precision) and 8-bit (mini-precision) widths. We moreover present highly efficient software implementation of the proposed SDF formats which is centered around conversion to hardware-supported single-precision arithmetic hardware. Our solution demonstrates negligible conversion overhead on commercially available CPUs. For Ising grids with sizes from 100 × 100 to 500 × 500, the 16- and 8-bit SDF formats along with our conversion module produce equivalent accuracy to double-precision floating-point format but with 2.86× speedups on average on an Intel Xeon processor. Particularly, increasing the grid size results in higher speed-up. For example, the proposed half-precision format with 3-bit exponent and 13-bit mantissa achieved the minimum and maximum speedups of 1.30× and 1.39× over single-precision, and 2.55× and 3.40× over double-precision, by increasing grid size from 100 × 100 to 500 × 500.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 3","pages":"853-865"},"PeriodicalIF":5.4,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145051083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0