
Latest publications from IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)

Neural networks and the traveling salesman problem
Bryan Bemley
Gives a brief definition of a neural network, discusses two different neural networks, names some notable researchers in the field, gives a brief history of neural networks, explains the traveling salesman problem, and offers a brief conclusion.
Citations: 2
A new adaptive learning algorithm using magnified gradient function
S. Ng, C. Cheung, S. Leung, A. Luk
An algorithm is proposed to solve the "flat spot" problem in backpropagation networks by magnifying the gradient function. The idea is to vary the gradient of the activation function so as to magnify the backward-propagated error signal, especially when the output approaches a wrong value; this accelerates the convergence rate and eliminates the flat-spot problem. Simulation results show that, in terms of convergence rate and global search capability, the new algorithm consistently outperforms the traditional methods.
Citations: 1
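The abstract's update equations are not reproduced here, but the idea translates directly: raise the sigmoid derivative term to a power 1/S (S ≥ 1) so the error signal is not extinguished when a unit saturates. Below is a minimal sketch in Python, assuming a one-hidden-layer sigmoid network; the magnification exponent S and all hyperparameters are illustrative and may differ from the paper's exact scheme.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_mgf(X, y, hidden=8, lr=0.5, S=2.0, epochs=2000, seed=0):
    """Backprop where the sigmoid derivative term o*(1-o) is magnified
    by raising it to the power 1/S (S >= 1). Near saturation
    o*(1-o) -> 0 (the flat spot); (o*(1-o))**(1/S) stays larger there,
    so the backward-propagated error signal is not extinguished."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    for _ in range(epochs):
        h = sigmoid(X @ W1)
        o = sigmoid(h @ W2)
        err = y - o
        # magnified gradient: exponent 1/S lifts the derivative's dip
        d_o = err * (o * (1 - o)) ** (1.0 / S)
        d_h = (d_o @ W2.T) * (h * (1 - h)) ** (1.0 / S)
        W2 += lr * h.T @ d_o
        W1 += lr * X.T @ d_h
    return W1, W2

# XOR demo: a classic case where standard backprop can stall on flat spots
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)
W1, W2 = train_mgf(X, y)
```

Setting S=1 recovers standard backprop, which makes the magnification easy to ablate.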
Italian Lira classification by LVQ
S. Omatu, T. Fujinaka, T. Kosaka, H. Yanagimoto, M. Yoshioka
This paper proposes a new method for classifying Italian Lira banknotes using learning vector quantization (LVQ). Eight denominations are used (1000, 2000, 5000, 10000, 50000 (new), 50000 (old), 100000 (new), and 100000 (old) Liras), each scanned in four directions A, B, C, and D, where A and B denote the normal and upside-down orientations and C and D denote the reverse sides of A and B. The original 128-by-64-pixel image is captured at the transaction machine and may include rotation and shift. After correcting these effects, we select a suitable area showing the bill and feed a 64-by-15-pixel image to the neural network. Although an LVQ network can process input data of any dimension, a smaller input size yields faster convergence.
Citations: 11
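For readers unfamiliar with LVQ, the classifier at the core of this method is the standard LVQ1 rule: pull the nearest prototype toward a same-class sample and push it away from a differently labeled one. A minimal sketch follows, assuming the LVQ1 variant; the prototype counts, learning rate, and schedules are illustrative, not the paper's settings.

```python
import numpy as np

def train_lvq1(X, labels, n_proto_per_class=2, lr=0.05, epochs=30, seed=0):
    """LVQ1: for each sample, find the nearest prototype; move it toward
    the sample if the labels match, away from it if they differ."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    protos, proto_lab = [], []
    for c in classes:
        # initialize each class's prototypes from its own samples
        idx = rng.choice(np.where(labels == c)[0], n_proto_per_class, replace=False)
        protos.append(X[idx])
        proto_lab += [c] * n_proto_per_class
    P = np.vstack(protos).astype(float)
    proto_lab = np.array(proto_lab)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            w = np.argmin(((P - X[i]) ** 2).sum(axis=1))  # winning prototype
            sign = 1.0 if proto_lab[w] == labels[i] else -1.0
            P[w] += sign * lr * (X[i] - P[w])
    return P, proto_lab

def classify(P, proto_lab, x):
    """Assign the label of the nearest prototype."""
    return proto_lab[np.argmin(((P - x) ** 2).sum(axis=1))]
```

In the banknote setting, each 64-by-15 bill image would be flattened into a 960-dimensional input vector, with one class per denomination-direction pair.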
A comparison of BSS algorithms
Y. Singh, C. Rai
Several gradient-based algorithms exist for performing blind source separation (BSS). In this paper we compare three of the most popular neural algorithms: EASI, the natural gradient algorithm, and the Bell-Sejnowski algorithm. The effectiveness of these algorithms depends on the nonlinear activation function, so they are evaluated with different nonlinear functions on sub-Gaussian and super-Gaussian sources.
Citations: 3
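Of the three algorithms compared, the natural-gradient update is the easiest to state compactly: W ← W + η(I − g(y)yᵀ)W with y = Wx, where the nonlinearity g encodes the assumed source kurtosis. A minimal sketch, assuming tanh for super-Gaussian sources (a cubic nonlinearity is the usual choice for sub-Gaussian ones); step size and iteration count are illustrative.

```python
import numpy as np

def natural_gradient_bss(X, lr=0.01, iters=200, seed=0):
    """Natural-gradient ICA (Amari): W <- W + lr * (I - g(y) y^T) W.
    X is an (n_sources, n_samples) matrix of mixed signals."""
    rng = np.random.default_rng(seed)
    n, T = X.shape
    W = np.eye(n) + 0.01 * rng.standard_normal((n, n))
    for _ in range(iters):
        Y = W @ X
        g = np.tanh(Y)  # super-Gaussian nonlinearity; use Y**3 for sub-Gaussian
        W += lr * (np.eye(n) - (g @ Y.T) / T) @ W
    return W

# demo: unmix two super-Gaussian (Laplacian) sources
rng = np.random.default_rng(1)
S = rng.laplace(size=(2, 5000))
A = np.array([[1.0, 0.6], [0.4, 1.0]])  # unknown mixing matrix
W = natural_gradient_bss(A @ S)          # W @ A should approach a scaled permutation
```

Swapping the `g` line is exactly the kind of nonlinearity substitution the paper evaluates across source types.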
A modified PI control action with a robust adaptive fuzzy controller applied to DC motor
N. Almutairi, M. Chow
In this paper, a robust adaptive fuzzy controller that modifies a well-known PI controller in order to improve system performance is presented. Robustness is established by showing that the sensitivity function of the output is bounded. Based on this robust adaptive fuzzy controller, several performance measures of the controlled plant, including the rise time, delay time, settling time, and percentage of maximum overshoot, are shown to be highly related to the parameters in certain fuzzy rules of the controller. The sensitivity functions of the relative errors in the performance measures are used to derive these relations. A DC motor with a nonlinear friction model is used to illustrate the method.
Citations: 16
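The paper's controller structure is not reproduced here; the sketch below only illustrates the general shape of the idea — a PI action whose gain is scheduled by a fuzzy membership of the tracking error, applied to a first-order DC motor speed model. The membership function, gain multiplier, and motor constants are all illustrative assumptions, not the authors' design.

```python
import numpy as np

def fuzzy_gain(e, e_max=1.0):
    """One-rule fuzzy scheduling: the larger the membership of
    'error is large' (triangular on |e|), the more the P gain is boosted."""
    mu = min(abs(e) / e_max, 1.0)  # membership in [0, 1]
    return 1.0 + 0.5 * mu          # gain multiplier in [1.0, 1.5]

def simulate(setpoint=1.0, Kp=2.0, Ki=1.0, dt=0.01, T=5.0):
    """Fuzzy-scaled PI speed control of a first-order DC motor model:
    dw/dt = (-w + K*u) / tau."""
    tau, K = 0.5, 1.0
    w, integ = 0.0, 0.0
    trace = []
    for _ in range(int(T / dt)):
        e = setpoint - w
        integ += e * dt
        u = fuzzy_gain(e) * Kp * e + Ki * integ  # modified PI action
        w += dt * (-w + K * u) / tau
        trace.append(w)
    return np.array(trace)

response = simulate()  # step response; boosted gain shortens the rise time
```

The abstract's point — that rise time, settling time, and overshoot depend systematically on the fuzzy-rule parameters — corresponds here to how the response changes as the gain multiplier in `fuzzy_gain` is varied.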
Rosenblatt perceptrons for handwritten digit recognition
E. Kussul
The Rosenblatt perceptron was used for handwritten digit recognition, with the MNIST database used to test its performance: 60,000 handwritten digit samples for perceptron training and 10,000 for testing. A recognition rate of 99.2% was obtained. The critical parameter of Rosenblatt perceptrons is the number of neurons N in the associative neuron layer; we varied N from 1,000 to 512,000 and investigated its influence on performance. Increasing N from 1,000 to 512,000 reduces the test error by a factor of 5 to 8. A large-scale Rosenblatt perceptron is thus comparable with the best classifiers evaluated on the MNIST database (98.9%-99.3%).
Citations: 39
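A Rosenblatt perceptron in this sense is a three-layer S-A-R network: a fixed random binary associative layer of N neurons followed by an output layer trained with the classic perceptron rule. A minimal binary-classification sketch, assuming dense Gaussian random wiring for the associative layer (the paper's MNIST setup is ten-class, typically handled one-vs-rest, and its associative-layer wiring may differ):

```python
import numpy as np

def fit_rosenblatt(X, y, N=1000, lr=1.0, epochs=20, seed=0):
    """Rosenblatt perceptron: fixed random S->A projection with binary
    threshold units, then the perceptron learning rule on the A->R weights.
    Labels y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(X.shape[1], N))   # fixed random associative weights
    b = rng.normal(size=N)
    H = (X @ A + b > 0).astype(float)       # binary associative features
    w = np.zeros(N)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            if y[i] * (H[i] @ w) <= 0:      # misclassified -> perceptron update
                w += lr * y[i] * H[i]
    return A, b, w

def predict(A, b, w, X):
    return np.sign(((X @ A + b > 0).astype(float)) @ w)
```

The parameter N here plays the same role as in the paper: a larger associative layer gives the linear output layer a richer feature space, which is what drives the reported error reduction as N grows from 1,000 to 512,000.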
Comparative analysis of backpropagation and extended Kalman filter in pattern and batch forms for training neural networks
Shuhui Li
The extended Kalman filter (EKF) algorithm has been used for training neural networks. Like the backpropagation (BP) algorithm, the EKF algorithm can be run in pattern or batch form, but batch-form EKF differs from the gradient averaging of standard batch-mode BP. The paper compares backpropagation and the extended Kalman filter in pattern and batch forms for neural network training; for each comparison between batch-mode EKF and BP, the same batch size is used. An overall RMS error computed over all training examples is adopted for the comparison, which is found to be especially useful for pattern-mode EKF and BP training. Simulations with different batch sizes show that EKF and BP in batch form are usually more stable and attain smaller RMS error than in pattern form. However, too large a batch size can trap BP in a "local minimum" and can also degrade the training effectiveness of the EKF algorithm.
Citations: 21
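For reference, a pattern-mode EKF step treats the network weights as the state of a nonlinear observation model and updates them with a Kalman gain rather than a fixed learning rate. A minimal sketch for a scalar-output network; the measurement-noise variance R and the linear demo model are chosen purely for illustration and are not from the paper.

```python
import numpy as np

def ekf_step(w, P, x, y_target, f, jac, R=0.1):
    """One EKF update of the weight vector w for a scalar-output model.
    f(w, x): network output; jac(w, x): gradient of f w.r.t. w (Jacobian H);
    P: weight covariance; R: assumed measurement-noise variance."""
    H = jac(w, x).reshape(1, -1)   # 1 x n Jacobian
    S = H @ P @ H.T + R            # innovation variance
    K = (P @ H.T) / S              # n x 1 Kalman gain
    w = w + (K * (y_target - f(w, x))).ravel()
    P = P - K @ H @ P              # covariance update
    return w, P

# demo on a linear 'network' f(w, x) = w . x, whose Jacobian is just x
f = lambda w, x: w @ x
jac = lambda w, x: x
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
w, P = np.zeros(2), np.eye(2)
for _ in range(200):               # pattern mode: one sample per update
    x = rng.standard_normal(2)
    w, P = ekf_step(w, P, x, w_true @ x, f, jac)
```

A batch-form variant would stack several samples into H and the innovation vector before computing the gain, which is precisely where it departs from the gradient averaging of batch-mode BP.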
Predictability analysis of the heart rate variability
Zhijie Cai, Liping Tang, J. Ruan, Shixiong Xu, Fanji Gu
Sixteen acute myocardial infarction (AMI) inpatients were selected at random, along with sixteen age- and sex-matched normal subjects as a control group. Each patient's HRV was recorded nine times during the six months following AMI. Several predictability measures based on neural network learning were used to analyze these data. For normal subjects the HRV was chaotic, whereas for AMI patients it could be either periodic or stochastic. For all sixteen AMI patients, some of the predictability measures remained an order of magnitude lower than those of normal subjects for at least six months after AMI. These measures can therefore serve as a sensitive index of the cardiac damage caused by an AMI attack.
Citations: 0
Memory-guided exploration in reinforcement learning
J. Carroll, T. Peterson, N. Owens
We focus on task transfer in reinforcement learning, and specifically in Q-learning. There are three main model-free methods for performing task transfer in Q-learning: direct transfer, soft transfer, and memory-guided exploration. In direct transfer, the Q-values from a previous task are used to initialize the Q-values of the next task. Soft transfer initializes the Q-values of the new task with a weighted average of the standard initialization value and the Q-values of the previous task. In memory-guided exploration, the Q-values of previous tasks are used as a guide in the agent's initial exploration; the weight the agent gives to its past experience decreases over time. We explore stability issues related to the off-policy nature of memory-guided exploration and compare it to soft transfer and direct transfer in three different environments.
Citations: 21
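The three transfer schemes can each be stated in a few lines. A minimal sketch, assuming a tabular Q-learner; the mixing weight w, the default value q0, and the decay schedule on the guide-following probability are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def init_q(n_states, n_actions, q_old=None, mode="direct", w=0.5, q0=0.0):
    """Initialize a Q-table for a new task given the old task's table.
    direct: copy the old Q-table.
    soft:   weighted average of the old table and the default value q0.
    memory: start fresh, but return the old table as an exploration guide."""
    fresh = np.full((n_states, n_actions), q0)
    if mode == "direct":
        return q_old.copy(), None
    if mode == "soft":
        return w * q_old + (1 - w) * fresh, None
    return fresh, q_old  # memory-guided

def act(Q, guide, s, t, eps=0.1, decay=0.995, rng=np.random.default_rng(0)):
    """Epsilon-greedy action selection with memory-guided exploration:
    on exploration steps, follow the old policy with a probability that
    decays over time t, so trust in past experience fades."""
    if rng.random() < eps:
        if guide is not None and rng.random() < decay ** t:
            return int(np.argmax(guide[s]))  # guided exploration
        return int(rng.integers(Q.shape[1]))  # random exploration
    return int(np.argmax(Q[s]))
```

The stability issue the abstract raises shows up here: early guided steps are drawn from the old policy, not the one being learned, which is what makes the update off-policy.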
On-chip learning of FPGA-inspired neural nets
B. Girau
Neural networks are usually considered naturally parallel computing models, but the number of operators and the complex connection graphs of standard neural models cannot be handled by digital hardware devices. A new theoretical and practical framework reconciles simple hardware topologies with complex neural architectures: field programmable neural arrays (FPNAs) yield powerful neural architectures that are easy to map onto digital hardware, thanks to a simplified topology and an original data exchange scheme. The paper focuses on a class of synchronous FPNAs for which an efficient implementation with on-chip learning is described. Application and implementation results are discussed.
Citations: 8