
Latest articles in Microprocessors and Microsystems

GWalloc: A self-adaptive generational wear-aware allocator for non-volatile main memory
IF 2.6 · CAS Tier 4, Computer Science · Q2 Computer Science · Pub Date: 2023-11-01 · DOI: 10.1016/j.micpro.2023.104971
Ziwei Wang , Wei Li , Ziqi Shuai , Qingan Li

Phase Change Memory (PCM) is considered a promising replacement for DRAM due to its superior characteristics such as low leakage power, high integration density, byte addressability and non-volatility. However, PCM's limited write endurance significantly hinders its wide application. For example, PCM wears out quickly under a traditional dynamic memory allocation policy in embedded systems, which concentrates many writes in a few memory blocks. To extend the lifespan of PCM, several wear-aware dynamic memory allocators have been proposed; they generally depend on fixed parameters to limit the wear of PCM. However, these allocators can be inflexible, as it is difficult to specify appropriate parameter values for different scenarios. In this paper, we propose a Self-Adaptive Generational Wear-Aware Allocator (GWalloc). GWalloc divides memory blocks into two generations, young and old, according to how many times they have been allocated. GWalloc also dynamically adjusts the system's wear threshold during allocation so that it can effectively balance the wear degree of PCM against the consumed memory space. The wear threshold restricts the upper wear limit of young memory blocks. Experimental evaluations show that, compared with state-of-the-art wear-aware dynamic memory allocators (NVMalloc, Walloc and UWLalloc), GWalloc improves PCM wear leveling (evaluated by CV, a wear-leveling indicator) by 38.6%, 39.1% and 38.3%, and saves 62.1%, 22.2% and 37.2% of memory space overhead, respectively.
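As a rough illustration of the generational policy described above, the following Python sketch keeps a young and an old generation, allocates the least-worn young block below a wear threshold, relaxes the threshold when no block qualifies, and promotes frequently allocated blocks. The class names, the promotion limit, and the threshold-adjustment rule are illustrative assumptions, not the authors' GWalloc implementation.

```python
class Block:
    def __init__(self, bid):
        self.bid = bid
        self.wear = 0  # times this block has been allocated


class GenerationalAllocator:
    """Toy generational wear-aware allocation policy (illustrative only)."""

    def __init__(self, n_blocks, promote_after=4, threshold=2):
        self.young = [Block(i) for i in range(n_blocks)]
        self.old = []
        self.promote_after = promote_after  # assumed promotion limit
        self.threshold = threshold          # upper wear limit for young blocks

    def allocate(self):
        # Prefer the least-worn young block whose wear is below the threshold.
        candidates = [b for b in self.young if b.wear < self.threshold]
        if not candidates:
            # No young block qualifies: relax the threshold, loosely
            # mimicking a self-adaptive threshold adjustment.
            self.threshold += 1
            candidates = self.young or self.old
        block = min(candidates, key=lambda b: b.wear)
        block.wear += 1
        # Frequently allocated blocks move to the old generation.
        if block in self.young and block.wear >= self.promote_after:
            self.young.remove(block)
            self.old.append(block)
        return block.bid


def wear_cv(blocks):
    """Coefficient of variation of block wear: lower means better leveling."""
    wears = [b.wear for b in blocks]
    mean = sum(wears) / len(wears)
    if mean == 0:
        return 0.0
    var = sum((w - mean) ** 2 for w in wears) / len(wears)
    return (var ** 0.5) / mean
```

Allocating eight times over four blocks with this policy spreads the wear evenly, driving the CV indicator to zero.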

Citations: 0
Designing an ultra-efficient Hamming code generator circuit for a secure nano-telecommunication network
IF 2.6 · CAS Tier 4, Computer Science · Q2 Computer Science · Pub Date: 2023-11-01 · DOI: 10.1016/j.micpro.2023.104961
Hongbo Xie , Yincheng Qi , Farah Qasim Ahmed Alyousuf

Communication links forming secure telecommunications networks rely on various technologies, such as message switching, circuit switching, or packet switching, to transmit messages and data. Hamming codes, a family of linear error-correcting codes, are commonly used in communication networks to detect and correct one-bit and two-bit errors. However, reducing power consumption, occupied area, and latency in secure telecommunication networks remains a challenge for future information and communication technology. To address these challenges, emerging technologies like quantum dots offer potential solutions. Quantum-dot cellular automata (QCA) is a promising frontier in nanotechnology for enhancing secure telecommunications networks, opening up the possibility of crafting high-performance, energy-efficient digital circuits. This research harnesses the potential of QCA and introduces two innovations: a 3-8 decoder employing a single-layer layout and a 3-input XOR gate with a multi-layer configuration. These components are used in the design of an electronic circuit for Hamming codes based on the QCA approach. It is important to note that practical implementation in real-world scenarios presents challenges due to the nature of QCA technology; as a result, the evaluation and validation of the proposed designs rely heavily on simulations using QCADesigner. While experimental validation in real-world scenarios is limited, the simulations provide insight into the functionality and feasibility of the suggested designs. By leveraging QCA, the proposed Hamming code circuit significantly improves cell count, occupied area, and clock latency. The suggested design can be adapted to fit different generator matrices of Hamming codes without drastic modifications to the underlying architecture.
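The error detection and correction that the circuit implements can be illustrated in software with the standard textbook Hamming(7,4) construction (even parity, parity bits at positions 1, 2 and 4); this sketch is unrelated to the QCA circuit itself.

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into the Hamming(7,4)
    codeword [p1, p2, d1, p3, d2, d3, d4] with even parity."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]


def hamming74_correct(c):
    """Locate a single-bit error via the syndrome and flip it back."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the error, 0 if clean
    if syndrome:
        c[syndrome - 1] ^= 1
    return c
```

Flipping any one bit of an encoded word and passing it through `hamming74_correct` recovers the original codeword.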

Citations: 0
A framework for detection of cyber attacks by the classification of intrusion detection datasets
IF 2.6 · CAS Tier 4, Computer Science · Q2 Computer Science · Pub Date: 2023-10-22 · DOI: 10.1016/j.micpro.2023.104964
Durgesh Srivastava , Rajeshwar Singh , Chinmay Chakraborty , Sunil Kr. Maakar , Aaisha Makkar , Deepak Sinwar

Recognition of the need for advanced tools and techniques to secure network infrastructure against security risks has prompted the development of many machine learning-based intrusion detection strategies. However, it remains a major challenge for researchers to improve an Intrusion Detection System within the desired advantages and constraints. This paper develops a soft computing framework using Grey Wolf Optimization and an Entropy-Based Graph (GWO-EBG) to classify intrusion detection datasets and reduce the false rate. In the proposed scheme, the input data is first preprocessed by data transformation and normalization. After preprocessing, optimal features are chosen for dimension reduction from the preprocessed data using the grey wolf optimization (GWO) algorithm. Then, the entropy value is estimated from the selected features. Lastly, an Entropy-Based Graph (EBG) is constructed to classify data as intrusion or normal. The experimental results demonstrate that the developed method outperforms existing methods on various performance measures. The detection rate of the developed GWO-EBG is 94.6%, higher than the 91.24% of EBG, 75.60% of K-Nearest Neighbors (KNN), 73.36% of Support Vector Machine (SVM), and 74.88% of Generalized Regression Neural Network (GRNN) on 5000 connection vectors obtained from the KDD CUP'99 testing dataset. The false-positive rate of the developed strategy (GWO-EBG) is 0.35%, lower than the 2.18% of EBG, 7.32% of KNN, 8.15% of SVM, and 8.13% of GRNN with 5000 testing datasets.
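The preprocessing and entropy-estimation steps of the pipeline can be sketched in a few lines of Python. The min-max normalization and the equal-width binning used here for the Shannon entropy estimate are illustrative assumptions; the paper does not specify its exact discretization.

```python
import math


def min_max_normalize(values):
    """Scale a feature vector to [0, 1] (a common normalization step)."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant features
    return [(v - lo) / span for v in values]


def shannon_entropy(values, bins=4):
    """Shannon entropy (bits) of a feature vector after equal-width binning."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in values:
        idx = min(int((v - lo) / width), bins - 1)
        counts[idx] += 1
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)
```

A uniformly spread feature yields maximal entropy (log2 of the bin count), while a constant feature yields zero, which is the kind of signal an entropy-based classifier can exploit.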

Citations: 0
3-D spatial correlation model for reducing the transmitting nodes in densely deployed WSN
IF 2.6 · CAS Tier 4, Computer Science · Q2 Computer Science · Pub Date: 2023-10-14 · DOI: 10.1016/j.micpro.2023.104963
Rajesh Kumar Garg , Surender Kumar Soni , S. Vimal , Gaurav Dhiman

In Wireless Sensor Networks, a large number of sensor nodes are distributed over the monitoring area to increase fault tolerance, coverage and communication range. In a highly dense network, many nodes belong to a common sensing region and record almost identical data about an event. The base station, however, can also identify the event features from the data of a few representative nodes of the sensing region, so the battery power of the remaining nodes may be saved by not sending multiple copies of the sensed information. To reduce the number of transmitting nodes in the sensing region, an analytical model is presented that segregates the whole network into groups of correlated regions. The minimum number of transmitting nodes is selected from a probability-based deployment of sensor nodes in a 3-D scenario, and the rest of the nodes are operated in sleep mode to save battery power. The effectiveness of the proposed models is demonstrated with the established CHEF technique, i.e. Cluster Head Election using Fuzzy logic. Results show that the number of nodes transmitting data from the sensing region can be reduced considerably with respect to the threshold correlation value (ξ), which saves the energy of the additional nodes and extends network life. With the proposed models, at ξ ≤ 0.5, at most 87% of the nodes transmit, saving the battery power of at least 13% of the nodes.
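The representative-node idea can be sketched as follows, assuming an exponential distance-decay spatial correlation model and a greedy grouping rule; both are illustrative assumptions, not the paper's exact analytical model, and the `theta` decay constant is arbitrary.

```python
import math


def correlation(a, b, theta=10.0):
    """Model spatial correlation between two 3-D nodes as exp(-d / theta),
    an assumed distance-decay form (closer nodes -> higher correlation)."""
    return math.exp(-math.dist(a, b) / theta)


def select_transmitters(nodes, xi=0.5, theta=10.0):
    """Greedily keep one representative per correlated region.

    A node whose correlation with an already-chosen representative
    exceeds the threshold xi is put to sleep (not selected)."""
    reps = []
    for n in nodes:
        if all(correlation(n, r, theta) <= xi for r in reps):
            reps.append(n)
    return reps
```

With three clustered nodes and one distant node, only one representative per cluster transmits, so three quarters of a dense cluster can sleep.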

Citations: 0
Echo state network implementation for chaotic time series prediction
IF 2.6 · CAS Tier 4, Computer Science · Q2 Computer Science · Pub Date: 2023-10-14 · DOI: 10.1016/j.micpro.2023.104950
Luis Gerardo de la Fraga , Brisbane Ovilla-Martínez , Esteban Tlelo-Cuautle

The implementation of an Echo State Neural Network (ESNN) for chaotic time series prediction is introduced. First, the ESNN is simulated using floating-point arithmetic and afterwards fixed-point arithmetic. The ESNN is synthesized on a field-programmable gate array (FPGA), in which the activation function of the neurons' outputs is a hyperbolic tangent, approximated with a new design of quadratic-order b-splines and four integer multipliers. The FPGA implementation of the ESNN is applied to predict four chaotic time series associated with the Lorenz, Chua, Lü, and Rössler chaotic oscillators. The experimental results show that with 50 hidden neurons, fixed-point arithmetic is good enough when using 15 or 16 bits in the fractional part: using more bits does not reduce the mean-squared prediction error. The neurons are limited to four inputs in the hidden layer to achieve a more efficient hardware implementation, guaranteeing a prediction of more than 10 steps ahead.
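The core reservoir update of an echo state network, with each neuron restricted to four recurrent inputs as in the paper's hardware constraint, can be sketched as below. This uses plain floating-point `tanh`; the FPGA design instead uses a b-spline approximation and fixed-point arithmetic, and the weight scale and seed here are arbitrary assumptions.

```python
import math
import random


def make_reservoir(n, fan_in=4, scale=0.4, seed=1):
    """Sparse recurrent weight matrix: each neuron listens to only
    `fan_in` other neurons (mirroring the 4-input hardware limit)."""
    rng = random.Random(seed)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in rng.sample(range(n), fan_in):
            w[i][j] = rng.uniform(-scale, scale)
    return w


def step(w, w_in, x, u):
    """One reservoir state update: x' = tanh(W x + W_in * u)."""
    n = len(x)
    return [math.tanh(sum(w[i][j] * x[j] for j in range(n)) + w_in[i] * u)
            for i in range(n)]
```

In a full ESN only the linear readout on top of these states is trained; the sketch shows just the recurrent dynamics, whose outputs stay within the tanh range (-1, 1).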

Citations: 0
Low-area architecture design of multi-mode activation functions with controllable maximum absolute error for neural network applications
IF 2.6 · CAS Tier 4, Computer Science · Q2 Computer Science · Pub Date: 2023-10-11 · DOI: 10.1016/j.micpro.2023.104952
Shu-Yen Lin, Jung-Chuan Chiang

In the development of neural networks (NNs), the activation function has become more and more important, as its selection indirectly affects convergence speed and accuracy. This study proposes a multi-mode activation function design (MMAFD) based on the least squares method (LSM) with a controllable maximum absolute error (MAE) to support multiple activation functions. MMAFD selects the activation function to maintain accuracy across different deep learning applications. MMAFD is implemented in TSMC 90 nm CMOS technology: the power consumption is 0.98 mW, the operating frequency is 250 MHz, and the area is 0.416 mm². MMAFD is also verified on a Xilinx Spartan-6 XC6SLX45 development board. Compared to related work verified on FPGA boards, the LUTs and slice registers are reduced by up to 62.96% and 73.90%, respectively.
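The idea of a least-squares fit with a measurable maximum absolute error can be illustrated by fitting a single quadratic segment to tanh and checking its MAE. This is a deliberate simplification of the paper's multi-mode, hardware-oriented scheme; the interval [0, 1] and the sample counts are arbitrary assumptions.

```python
import math


def lsq_quadratic(f, lo, hi, samples=64):
    """Least-squares fit of a + b*x + c*x^2 to f on [lo, hi],
    solved via the 3x3 normal equations with Gauss-Jordan elimination."""
    xs = [lo + (hi - lo) * i / (samples - 1) for i in range(samples)]
    # Normal equations A^T A p = A^T y for the basis [1, x, x^2].
    ata = [[sum(x ** (r + c) for x in xs) for c in range(3)] for r in range(3)]
    aty = [sum(f(x) * x ** r for x in xs) for r in range(3)]
    m = [row + [rhs] for row, rhs in zip(ata, aty)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                k = m[r][col] / m[col][col]
                m[r] = [a - k * b for a, b in zip(m[r], m[col])]
    return [m[r][3] / m[r][r] for r in range(3)]


def max_abs_error(f, coeffs, lo, hi, samples=1000):
    """Maximum absolute error of the fitted quadratic over a dense grid."""
    a, b, c = coeffs
    return max(abs(f(x) - (a + b * x + c * x * x))
               for x in (lo + (hi - lo) * i / (samples - 1)
                         for i in range(samples)))
```

A single quadratic already approximates tanh on [0, 1] to within a few hundredths; a piecewise scheme like the paper's b-spline design drives the MAE down further by splitting the domain into segments.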

Citations: 0
Notification Oriented Paradigm to Digital Hardware — A benchmark evaluation with Random Forest algorithm
IF 2.6 · CAS Tier 4, Computer Science · Q2 Computer Science · Pub Date: 2023-10-09 · DOI: 10.1016/j.micpro.2023.104951
Leonardo Faix Pordeus , André Eugenio Lazzaretti , Robson Ribeiro Linhares , Jean Marcelo Simão

The Notification Oriented Paradigm (NOP) emerges as an alternative way to develop and execute applications. NOP brings a new inference concept based on precisely notifying, collaborative, minimal entities. This inference implicitly allows decoupled solutions, thereby enabling parallelism at as fine a granularity as possible on the envisaged computational platform. Previous research proposed a digital circuit solution based on the NOP model, called NOP to Digital Hardware (NOP-DH), as a kind of High-Level Synthesis (HLS) prototype tool. The results with NOP-DH were encouraging. However, previous NOP-DH work lacks benchmarks that pit well-known algorithms against established HLS tools, such as Vivado HLS, one of the suitable commercial HLS solutions. This work evaluates NOP-DH applied to the well-known Random Forest algorithm. Random Forest is a popular machine learning algorithm used in several classification and regression applications. Due to the high number of logic-causal evaluations in the Random Forest algorithm and the possibility of running them in parallel, it is well suited to the envisaged benchmarking purpose. Experiments compared NOP-DH and two Vivado HLS approaches (an ad hoc code and an hls4ml tool-based code) in terms of performance, number of logic elements, maximum frequency, and the number of predictions per second. These experiments demonstrated that NOP-DH circuits achieve better results with respect to the number of logic elements and prediction rates, with some scalability limitations as a drawback. On average, NOP-DH uses 52.5% fewer resources, and the number of predictions per second is 4.7 times higher than Vivado HLS. Finally, our code is made publicly available at https://nop.dainf.ct.utfpr.edu.br/nop-public/nop-dh-random-forest-algorithm.
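A minimal software view of Random Forest inference shows why it maps well to fine-grained parallel hardware: each tree is an independent chain of threshold comparisons (the logic-causal evaluations mentioned above), combined only at the final vote. The trees below are made-up examples, not the paper's benchmark model.

```python
def tree_predict(tree, x):
    """Walk one decision tree: a node is {"feat", "thr", "left", "right"},
    a leaf is a bare class label (int)."""
    while isinstance(tree, dict):
        tree = tree["left"] if x[tree["feat"]] <= tree["thr"] else tree["right"]
    return tree


def forest_predict(trees, x):
    """Majority vote over all trees. Each tree_predict call is independent,
    so in hardware all trees can be evaluated in parallel."""
    votes = [tree_predict(t, x) for t in trees]
    return max(set(votes), key=votes.count)
```

In a circuit realization, the per-node comparisons become comparators and the vote becomes a small adder tree, which is the granularity NOP-DH exploits.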

Citations: 1
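As a side note on the Random Forest benchmark above: a Random Forest prediction is just many independent logic-causal evaluations (threshold tests) combined by a majority vote, which is why it maps so naturally to parallel hardware. A minimal illustrative sketch follows — the trees and thresholds are invented for illustration and are not the paper's model:

```python
# Toy Random Forest inference: each "tree" is a chain of logic-causal
# evaluations (threshold tests), and the forest takes a majority vote.
# Trees and thresholds are made up for illustration only.

def tree_a(x):
    # x is a feature vector [f0, f1]
    return 1 if x[0] > 0.5 else 0

def tree_b(x):
    if x[1] > 0.3:
        return 1 if x[0] > 0.2 else 0
    return 0

def tree_c(x):
    return 1 if x[0] + x[1] > 0.9 else 0

FOREST = [tree_a, tree_b, tree_c]

def predict(x):
    votes = sum(t(x) for t in FOREST)           # independent tree evaluations
    return 1 if votes > len(FOREST) / 2 else 0  # majority vote

print(predict([0.6, 0.4]))  # every tree votes 1, so the forest outputs 1
print(predict([0.1, 0.1]))  # every tree votes 0, so the forest outputs 0
```

In NOP-DH each of these comparisons becomes a small hardware entity fired by notifications, so all trees evaluate concurrently; here they are simply sequential Python calls.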
Convolutional Tsetlin Machine-based Training and Inference Accelerator for 2-D Pattern Classification
IF 2.6 CAS Zone 4 (Computer Science) Q2 Computer Science Pub Date: 2023-10-07 DOI: 10.1016/j.micpro.2023.104949
Svein Anders Tunheim , Lei Jiao , Rishad Shafik , Alex Yakovlev , Ole-Christoffer Granmo

The Tsetlin Machine (TM) is a machine learning algorithm based on an ensemble of Tsetlin Automata (TAs) that learns propositional logic expressions from Boolean input features. In this paper, the design and implementation of a Field Programmable Gate Array (FPGA) accelerator based on the Convolutional Tsetlin Machine (CTM) is presented. The accelerator performs classification of two pattern classes in 4 × 4 Boolean images with a 2 × 2 convolution window. Specifically, there are two separate TMs, one per class. Each TM comprises 40 propositional logic formulas, denoted as clauses, which are conjunctions of literals. Include/exclude actions from the TAs determine which literals are included in each clause. The accelerator supports full training, including random patch selection during convolution based on parallel reservoir sampling across all clauses. The design is implemented on a Xilinx Zynq XC7Z020 FPGA platform. With an operating clock speed of 40 MHz, the accelerator achieves a classification rate of 4.4 million images per second with an energy per classification of 0.6 μJ. The mean test accuracy is 99.9% when trained on the 2-dimensional Noisy XOR dataset with 40% noise in the training labels. To achieve this performance, which is on par with the original software implementation, Linear Feedback Shift Register (LFSR) random number generators of minimum 16 bits are required. The solution demonstrates the core principles of a CTM and can be scaled to operate on multi-class systems for larger images.
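The clause structure described above — a conjunction of literals, where each Tsetlin Automaton's include/exclude action decides whether a literal participates — can be sketched in a few lines. This is an illustrative software model, not the accelerator's RTL; the include masks below are hypothetical, and the even/odd clause polarity is the standard TM voting scheme:

```python
# Sketch of Tsetlin Machine clause evaluation (illustrative, not the
# accelerator's hardware): a clause is the AND of the literals its
# Tsetlin Automata chose to include. The literal set is the input bits
# plus their negations.

def literals(features):
    return features + [1 - f for f in features]  # [x0..xn, ~x0..~xn]

def clause(include_mask, features):
    lits = literals(features)
    # conjunction over the included literals; an empty clause outputs 1
    return int(all(l for l, inc in zip(lits, include_mask) if inc))

def tm_score(clauses, features):
    # standard TM voting: even-indexed clauses vote positively,
    # odd-indexed clauses vote negatively
    return sum((1 if i % 2 == 0 else -1) * clause(m, features)
               for i, m in enumerate(clauses))

x = [1, 0]                      # two Boolean features -> literals [1, 0, 0, 1]
pos = [1, 0, 0, 1]              # includes x0 and ~x1, so it fires on x = [1, 0]
neg = [0, 1, 0, 0]              # includes x1, so it does not fire here
print(tm_score([pos, neg], x))  # +1 from pos, 0 from neg
```

In the accelerator, 40 such clauses per class are evaluated in parallel and their votes summed to pick the winning class.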

{"title":"Convolutional Tsetlin Machine-based Training and Inference Accelerator for 2-D Pattern Classification","authors":"Svein Anders Tunheim ,&nbsp;Lei Jiao ,&nbsp;Rishad Shafik ,&nbsp;Alex Yakovlev ,&nbsp;Ole-Christoffer Granmo","doi":"10.1016/j.micpro.2023.104949","DOIUrl":"https://doi.org/10.1016/j.micpro.2023.104949","url":null,"abstract":"<div><p>The Tsetlin Machine (TM) is a machine learning algorithm based on an ensemble of Tsetlin Automata (TAs) that learns propositional logic expressions from Boolean input features. In this paper, the design and implementation of a Field Programmable Gate Array (FPGA) accelerator based on the Convolutional Tsetlin Machine (CTM) is presented. The accelerator performs classification of two pattern classes in 4 × 4 Boolean images with a 2 × 2 convolution window. Specifically, there are two separate TMs, one per class. Each TM comprises 40 propositional logic formulas, denoted as clauses, which are conjunctions of literals. Include/exclude actions from the TAs determine which literals are included in each clause. The accelerator supports full training, including random patch selection during convolution based on parallel reservoir sampling across all clauses. The design is implemented on a Xilinx Zynq XC7Z020 FPGA platform. With an operating clock speed of 40 MHz, the accelerator achieves a classification rate of 4.4 million images per second with an energy per classification of 0.6 <span><math><mi>μ</mi></math></span>J. The mean test accuracy is 99.9% when trained on the 2-dimensional Noisy XOR dataset with 40% noise in the training labels. To achieve this performance, which is on par with the original software implementation, Linear Feedback Shift Register (LFSR) random number generators of minimum 16 bits are required. 
The solution demonstrates the core principles of a CTM and can be scaled to operate on multi-class systems for larger images.</p></div>","PeriodicalId":49815,"journal":{"name":"Microprocessors and Microsystems","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2023-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49725134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Accelerating AI and Computer Vision for Satellite Pose Estimation on the Intel Myriad X Embedded SoC
IF 2.6 CAS Zone 4 (Computer Science) Q2 Computer Science Pub Date: 2023-10-07 DOI: 10.1016/j.micpro.2023.104947
Vasileios Leon , Panagiotis Minaidis , George Lentaris , Dimitrios Soudris

The challenging deployment of Artificial Intelligence (AI) and Computer Vision (CV) algorithms at the edge pushes the embedded computing community to examine heterogeneous System-on-Chips (SoCs). Such novel computing platforms provide increased diversity in interfaces, processors and storage; however, the efficient partitioning and mapping of AI/CV workloads remains an open issue. In this context, the current paper develops a hybrid AI/CV system on Intel’s Movidius Myriad X, a heterogeneous Vision Processing Unit (VPU), for initializing and tracking the satellite’s pose in space missions. The space industry is among the communities examining alternative computing platforms to comply with the tight constraints of on-board data processing, while also striving to adopt functionalities from the AI domain. At the algorithmic level, we rely on the ResNet-50-based UrsoNet network along with a custom classical CV pipeline. For efficient acceleration, we exploit the SoC’s neural compute engine and 16 vector processors by combining multiple parallelization and low-level optimization techniques. The proposed single-chip, robust-estimation, real-time solution delivers a throughput of up to 5 FPS for 1-MegaPixel RGB images within a limited power envelope of 2 W.
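The reported figures imply a simple energy budget per inference, which is worth making explicit — at a 2 W envelope and a peak of 5 FPS, each frame costs at most 2 / 5 = 0.4 J:

```python
# Back-of-the-envelope energy per inference from the reported figures
# (2 W power envelope, up to 5 FPS on 1-MegaPixel RGB frames).
power_w = 2.0
throughput_fps = 5.0
energy_per_frame_j = power_w / throughput_fps
print(energy_per_frame_j)  # joules per frame at peak throughput
```

This is an upper bound derived only from the abstract's numbers; actual per-frame energy depends on how much of the 2 W envelope the workload draws.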

{"title":"Accelerating AI and Computer Vision for Satellite Pose Estimation on the Intel Myriad X Embedded SoC","authors":"Vasileios Leon ,&nbsp;Panagiotis Minaidis ,&nbsp;George Lentaris ,&nbsp;Dimitrios Soudris","doi":"10.1016/j.micpro.2023.104947","DOIUrl":"https://doi.org/10.1016/j.micpro.2023.104947","url":null,"abstract":"<div><p><span><span>The challenging deployment of Artificial Intelligence (AI) and </span>Computer Vision<span> (CV) algorithms at the edge pushes the community of embedded computing to examine heterogeneous System-on-Chips (SoCs). Such novel computing platforms provide increased diversity in interfaces, processors and storage, however, the efficient partitioning and mapping of AI/CV workloads still remains an open issue. In this context, the current paper develops a hybrid AI/CV system on Intel’s Movidius Myriad X, which is an heterogeneous Vision Processing Unit (VPU), for initializing and tracking the satellite’s pose in space missions. The space industry is among the communities examining alternative computing platforms to comply with the tight constraints of on-board data processing<span>, while it is also striving to adopt functionalities from the AI domain. At algorithmic level, we rely on the ResNet-50-based UrsoNet network along with a custom classical CV pipeline. For efficient acceleration, we exploit the SoC’s neural compute engine and 16 vector processors by combining multiple </span></span></span>parallelization<span> and low-level optimization techniques. 
The proposed single-chip, robust-estimation, and real-time solution delivers a throughput of up to 5 FPS for 1-MegaPixel RGB images within a limited power envelope of 2 W.</span></p></div>","PeriodicalId":49815,"journal":{"name":"Microprocessors and Microsystems","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2023-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49724943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A general jitter analysis of DLL considering the jitter accumulation effect of loop capacitor
IF 2.6 CAS Zone 4 (Computer Science) Q2 Computer Science Pub Date: 2023-10-07 DOI: 10.1016/j.micpro.2023.104943
Shahram Modanlou , Gholamreza Ardeshir , Mohammad Gholami

This paper presents a time-domain model for general jitter analysis of delay-locked loops (DLLs). According to this model, the noise contribution of each part of the circuit is specified at the output of DLL, and a closed-form relationship is extracted. By this closed-form relationship, we show that accumulated jitter, known as jitter peaking, always exists due to the loop filter capacitor in a widely used DLL configuration. The effect of jitter accumulation can cause an unstable lock state. A conventional DLL is simulated in 0.18 µm CMOS technology to verify the closed-form relationship.
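The accumulation effect can be illustrated with a toy first-order phase-domain model — a minimal sketch only, not the paper's closed-form derivation: each update removes a fraction K of the phase error while fresh noise is injected, so a residue of earlier noise persists in the loop state and the jitter peaks higher when the loop is weak:

```python
# Toy discrete-time DLL phase model (illustrative only; the paper derives
# a closed-form jitter relationship that this sketch does not reproduce).
# Update: e[n+1] = (1 - K) * e[n] + w[n], where K is the loop gain and
# w[n] is injected noise. Because each step keeps a residue of the
# previous error, noise accumulates in the loop state rather than being
# corrected immediately.
import random

def simulate(K, sigma_w, steps, seed=0):
    rng = random.Random(seed)
    e, peak = 0.0, 0.0
    for _ in range(steps):
        e = (1 - K) * e + rng.gauss(0, sigma_w)
        peak = max(peak, abs(e))
    return peak

# A weaker loop (small K) lets jitter accumulate to a higher peak
# than a stronger loop (large K) driven by the same noise sequence.
print(simulate(K=0.05, sigma_w=0.01, steps=5000))
print(simulate(K=0.5,  sigma_w=0.01, steps=5000))
```

The gain K here is a hypothetical stand-in for the combined charge-pump/loop-filter dynamics; in the paper the loop-filter capacitor is precisely the state element in which this accumulation occurs.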

{"title":"A general jitter analysis of DLL considering the jitter accumulation effect of loop capacitor","authors":"Shahram Modanlou ,&nbsp;Gholamreza Ardeshir ,&nbsp;Mohammad Gholami","doi":"10.1016/j.micpro.2023.104943","DOIUrl":"https://doi.org/10.1016/j.micpro.2023.104943","url":null,"abstract":"<div><p>This paper presents a time-domain model for general jitter analysis of delay-locked loops (DLLs). According to this model, the noise contribution of each part of the circuit is specified at the output of DLL, and a closed-form relationship is extracted. By this closed-form relationship, we show that accumulated jitter, known as jitter peaking, always exists due to the loop filter capacitor in a widely used DLL configuration. The effect of jitter accumulation can cause an unstable lock state. A conventional DLL is simulated in 0.18 µm CMOS technology to verify the closed-form relationship.</p></div>","PeriodicalId":49815,"journal":{"name":"Microprocessors and Microsystems","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2023-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49724944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0