
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks: Latest Publications

Modeling neural network dynamics using iterative image reconstruction algorithms
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227312
R. Steriti, M. Fiddy
Image reconstruction problems can be viewed as energy minimization problems and can be mapped onto a Hopfield neural network. For image reconstruction problems the authors describe the Gerchberg-Papoulis iterative method and the priorized discrete Fourier transform (PDFT) algorithm (C.L. Byrne et al., 1983). Both of these can be mapped onto a Hopfield neural network architecture, with the PDFT incorporating an iterative matrix inversion. The equations describing the operation of the Hopfield neural network are formally equivalent to those used in these iterative reconstruction methods, and these iterative reconstruction algorithms are regularized. The PDFT algorithm is a closed-form solution to the Gerchberg-Papoulis algorithm when image support information is used. The regularized Gerchberg-Papoulis algorithm can be implemented synchronously, from which it follows that the Hopfield neural network implementation can also converge.
Cited by: 0
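The Gerchberg-Papoulis method named in the abstract is, at heart, an alternating-projection loop: enforce the measured Fourier samples, then enforce the known image support. A minimal 1-D sketch of that loop only (the Hopfield mapping and the PDFT's regularized matrix inversion are not shown; function and variable names are illustrative):

```python
import numpy as np

def gerchberg_papoulis(measured_spectrum, known_mask, support, n_iter=200):
    """Reconstruct a signal from partial Fourier data plus support information.

    measured_spectrum : full-length spectrum, valid only where known_mask is True
    known_mask        : boolean array marking the measured Fourier bins
    support           : 0/1 array marking where the signal may be nonzero
    """
    x = np.real(np.fft.ifft(measured_spectrum))
    for _ in range(n_iter):
        x = x * support                                # spatial-support projection
        X = np.fft.fft(x)
        X[known_mask] = measured_spectrum[known_mask]  # data-consistency projection
        x = np.real(np.fft.ifft(X))
    return x * support
```

Both constraint sets are affine and contain the true signal, so the reconstruction error is non-increasing across iterations, which mirrors the convergence remark in the abstract.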
Learning fuzzy rule-based neural networks for function approximation
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.287127
C. Higgins, R. M. Goodman
The authors present a method for the induction of fuzzy logic rules to predict a numerical function from samples of the function and its dependent variables. This method uses an information-theoretic approach based on the authors' previous work with discrete-valued data (see Proc. Int. Joint. Conf. on Neur. Net., vol.1, p.875-80, 1991). The rules learned can then be used in a neural network to predict the function value based on its dependent variables. An example is shown of learning a control system function.
Cited by: 33
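The inference step of such a rule-based predictor (not the authors' information-theoretic induction procedure, which is the paper's actual contribution) can be sketched with triangular memberships and a firing-strength-weighted average. The three rules below are invented for illustration and roughly approximate y = x^2 on [0, 2]:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def fuzzy_predict(x, rules):
    """Sugeno-style inference: output = firing-strength-weighted mean of rule outputs."""
    strengths = np.array([tri(x, a, b, c) for (a, b, c), _ in rules])
    outputs = np.array([out for _, out in rules])
    return float(strengths @ outputs / strengths.sum())

# "If x is LOW then y = 0", "if x is MEDIUM then y = 1", "if x is HIGH then y = 4"
rules = [((-1.0, 0.0, 1.0), 0.0),
         (( 0.0, 1.0, 2.0), 1.0),
         (( 1.0, 2.0, 3.0), 4.0)]
```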
A neural network architecture for load forecasting
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.226948
H. Bacha, W. Meyer
Neural networks offer superior performance for predicting the future behaviour of pseudo-random time series. The authors present a neural network architecture for load forecasting which is capable of capturing the relevant relationships and weather trends. The proposed architecture is tested by training three neural networks, which in turn are tested with weather data from the same four-day period. The network is made up of a series of subnetworks, each connected to its immediate neighbors in a way that takes into consideration not only current weather conditions but also the weather trend around the hour for which the forecast is being made. The neural network forecasts were very close to the actual values despite the fact that only a small sample was used and there were errors in the data. A more comprehensive study is being contemplated for the next phase. One of the issues to be addressed is the expansion of the scope of the research to include data from a complete season (three consecutive months) over several years.
Cited by: 30
Design and evaluation of a robust dynamic neurocontroller for a multivariable aircraft control problem
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.287193
T. Troudet, Sanjay Garg, Walter C. Merrill
The design of a dynamic neurocontroller with good robustness properties is presented for a multivariable aircraft control problem. The internal dynamics of the neurocontroller are synthesized by a state estimator feedback loop. The neurocontrol is generated by a multilayer feedforward neural network which is trained through backpropagation to minimize an objective function that is a weighted sum of tracking errors, control input commands, and control input rates. The neurocontroller exhibits good robustness through stability margins in phase and in vehicle output gains. By maintaining performance and stability in the presence of sensor failures in the error loops, the structure of the neurocontroller is also consistent with the classical approach of flight control design.
Cited by: 11
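The training objective described above, a weighted sum of squared tracking errors, control commands, and control rates, might be written as follows. The quadratic form and the weight values are assumptions; the paper's exact weighting is not given here:

```python
import numpy as np

def control_objective(track_err, u, du, q=1.0, r=0.1, s=0.01):
    """Weighted sum of squared tracking errors, control inputs, and input rates.

    Penalizing u and du alongside the tracking error trades accuracy
    against actuator effort and command smoothness.
    """
    return float(q * np.sum(track_err ** 2)
                 + r * np.sum(u ** 2)
                 + s * np.sum(du ** 2))
```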
Handwritten alpha-numeric recognition by a self-growing neural network 'CombNET-II'
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227337
A. Iwata, Y. Suwa, Y. Ino, N. Suzumura
CombNET-II is a self-growing four-layer neural network model which has a comb structure. The first layer constitutes a stem network which quantizes an input feature vector space into several subspaces, and the following layers (2-4) constitute branch network modules which classify input data in each subspace into specified categories. CombNET-II uses a self-growing neural network learning procedure for training the stem network, and backpropagation to train the branch networks. Each branch module, a three-layer hierarchical network, has a restricted number of output neurons and interconnections, so it is easy to train. CombNET-II therefore avoids poor local minima, since the complexity of the problem each branch module must solve is restricted by the stem network. CombNET-II correctly classified 99.0% of previously unseen handwritten alpha-numeric characters.
Cited by: 12
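The stem/branch division of labor can be sketched as a nearest-centroid stem that routes each input to one branch classifier. This toy stand-in omits CombNET-II's self-growing training and its backpropagation-trained branches; all names are illustrative:

```python
import numpy as np

class StemBranchClassifier:
    """Route an input to the branch whose stem centroid is nearest,
    then let that branch produce the final class label."""

    def __init__(self, stem_centroids, branch_classifiers):
        self.stem = np.asarray(stem_centroids, dtype=float)
        self.branches = branch_classifiers  # one callable per centroid

    def classify(self, x):
        x = np.asarray(x, dtype=float)
        branch = int(np.argmin(np.linalg.norm(self.stem - x, axis=1)))
        return self.branches[branch](x)
```

Because each branch only ever sees inputs from its own subspace, each branch network stays small, which is the property the abstract credits for easy training.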
Nonlinear estimation of torque in switched reluctance motors using grid locking and preferential training techniques on self-organizing neural networks
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.226887
J.J. Garside, R. Brown, T.L. Ruchti, X. Feng
The torque of a switched reluctance motor (SRM) can be estimated using a topology-preserving self-organizing neural network map. Since self-organizing maps tend to contract at region boundaries, a procedure for locking neuron weights at specific locations in a region is presented. A strategy for preferentially training neuron weights on the region boundaries is introduced. As an example of these training techniques, a one-dimensional neural network approximates a nonlinear function. In general, an n-dimensional mapping can be used to approximate an m-dimensional system for n ≤ m. As a practical implementation of this technique, the modeling of the theoretical torque of an SRM as a function of position and current is presented. A two-dimensional neural network estimates a three-dimensional highly nonlinear surface.
Cited by: 6
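The grid-locking idea, freezing selected neuron weights while the rest self-organize, can be sketched on a 1-D Kohonen map. The update rule is the standard winner-plus-neighbors one; the locking mechanism here is a simplification of the paper's procedure:

```python
import numpy as np

def train_som_1d(data, weights, locked, epochs=50, lr=0.3, radius=1, rng=None):
    """1-D self-organizing map whose `locked` neurons never move."""
    if rng is None:
        rng = np.random.default_rng(0)
    w = np.array(weights, dtype=float)
    for _ in range(epochs):
        for x in rng.permutation(data):
            winner = int(np.argmin(np.abs(w - x)))
            for j in range(len(w)):
                if not locked[j] and abs(j - winner) <= radius:
                    w[j] += lr * (x - w[j])   # pull toward the sample
    return w
```

Locking the end neurons at the region boundary keeps the map from contracting there, which is the failure mode the abstract is addressing.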
A dynamic approach to improve sparsely encoded associative memory capability
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.287169
Y.-P. Huang, D. Gustafson
A method for improving the storage capacity of sparsely encoded associative memories based on dynamic thresholding is presented. Under the dynamic thresholding scheme, the sparse encoding method is shown to have greater storage capacity than the ordinary associative memory. The results are also considered from the storage-sensitivity point of view. Simulation results are consistent with the quantitative analysis. System capacity is found to depend strongly on the selected threshold. Threshold selection is based on the assumption that each neuron operates close to its threshold, which makes it possible to find a more reasonable storage capacity using only the signal part and the mean noise.
Cited by: 4
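One simple way to realize a dynamic threshold for sparse patterns (an illustrative k-winners rule, not necessarily the paper's exact scheme) is to store patterns Hebbian-style and, at recall, keep the k most strongly driven neurons active instead of applying a fixed cutoff:

```python
import numpy as np

def store(patterns):
    """Hebbian weight matrix for sparse binary patterns, zero self-connections."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, k):
    """One synchronous update with a dynamic threshold: the threshold is
    effectively set each step so that exactly k neurons stay active."""
    drive = W @ probe
    out = np.zeros_like(probe)
    out[np.argsort(drive)[-k:]] = 1.0
    return out
```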
A genetic approach to the truck backer upper problem and the inter-twined spiral problem
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227324
J. Koza
The author describes a biologically motivated paradigm, genetic programming, which can solve a variety of problems. When genetic programming solves a problem, it produces a computer program that takes the state variables of the system as input and produces the actions required to solve the problem as output. Genetic programming is explained and applied to two well-known benchmark problems from the field of neural networks. The truck backer upper problem is a multidimensional control problem and the inter-twined spirals problem is a challenging classification problem.
Cited by: 82
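A minimal genetic-programming loop in the spirit described, evolving arithmetic expression trees by subtree crossover with elitism, can be sketched as below. This is a toy symbolic-regression setup, not Koza's actual experiments; the function set, depth cap, and selection scheme are all assumptions:

```python
import operator, random

FUNCS = [(operator.add, 2), (operator.sub, 2), (operator.mul, 2)]
TERMS = ['x', 1.0, 2.0]

def random_tree(max_depth, rng):
    if max_depth == 0 or rng.random() < 0.3:
        return rng.choice(TERMS)
    f, arity = rng.choice(FUNCS)
    return (f, [random_tree(max_depth - 1, rng) for _ in range(arity)])

def evaluate(tree, x):
    if isinstance(tree, tuple):
        f, kids = tree
        return f(*(evaluate(k, x) for k in kids))
    return x if tree == 'x' else tree

def fitness(tree, xs, target):            # sum of squared errors, lower is better
    return sum((evaluate(tree, v) - target(v)) ** 2 for v in xs)

def subtrees(tree, path=()):
    yield path, tree
    if isinstance(tree, tuple):
        for i, k in enumerate(tree[1]):
            yield from subtrees(k, path + (i,))

def graft(tree, path, new):
    if not path:
        return new
    f, kids = tree
    kids = list(kids)
    kids[path[0]] = graft(kids[path[0]], path[1:], new)
    return (f, kids)

def depth(tree):
    return 1 + max(map(depth, tree[1])) if isinstance(tree, tuple) else 0

def crossover(a, b, rng, max_depth=8):
    """Replace a random subtree of a with a random subtree of b."""
    pa, _ = rng.choice(list(subtrees(a)))
    _, sb = rng.choice(list(subtrees(b)))
    child = graft(a, pa, sb)
    return child if depth(child) <= max_depth else a   # reject bloated children

def evolve(target, xs, pop_size=60, gens=20, seed=1):
    rng = random.Random(seed)
    pop = [random_tree(3, rng) for _ in range(pop_size)]
    history = []                                      # best fitness per generation
    for _ in range(gens):
        pop.sort(key=lambda t: fitness(t, xs, target))
        history.append(fitness(pop[0], xs, target))
        elite = pop[: pop_size // 4]                  # elitism: best quarter survives
        pop = elite + [crossover(rng.choice(elite), rng.choice(elite), rng)
                       for _ in range(pop_size - len(elite))]
    best = min(pop, key=lambda t: fitness(t, xs, target))
    return best, history
```

With elitism the best-of-generation fitness can never get worse, so `history` is non-increasing even though crossover is random.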
A fuzzy neural networks technique with fast backpropagation learning
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.287133
H. Y. Xu, G.Z. Wang, C.B. Baird
A fuzzy neural network (FNN) technique is presented based on fuzzy systems and neural network technologies. Utilizing human knowledge and expertise, the FNN technique is applied to accelerate the learning process of a novel backpropagation algorithm in which both self-adjusting activation and learning-rate functions are designated. The learning speed and quality of the fuzzy neural networks are shown to be superior to those of standard backpropagation and of other methods using changeable learning rates or activation functions. The proposed networks have been developed and implemented in a C language environment. Experimental and analytical results demonstrate that the FNN technique is a novel and potentially powerful approach to intelligent neural networks.
Cited by: 16
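The self-adjusting learning-rate idea can be illustrated with a Rprop-like rule: grow the per-weight step while the gradient keeps its sign, halve it on a sign flip. This is a stand-in for illustration only; the paper's actual activation and learning-rate functions are not reproduced here:

```python
import numpy as np

def adaptive_gd(grad, w0, lr0=0.1, up=1.2, down=0.5, steps=200):
    """Sign-based gradient descent with per-weight adaptive step sizes."""
    w = np.array(w0, dtype=float)
    lr = np.full(w.shape, lr0)
    prev_g = np.zeros_like(w)
    for _ in range(steps):
        g = grad(w)
        # grow the step while the gradient sign is stable, shrink on a flip
        lr = np.where(g * prev_g >= 0, lr * up, lr * down)
        w -= lr * np.sign(g)
        prev_g = g
    return w
```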
A software reconfigurable multi-networks simulator using a custom associative chip
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.226991
J. Gascuel, E. Delaunay, L. Montoliu, B. Moobed, M. Weinfeld
A special-purpose simulator is described. It has been designed to try various interconnection schemes between several similar associative chips, in order to assess hierarchical assemblies of neural networks. These chips are digital feedback networks with 64 fully interconnected binary neurons, capable of on-chip learning and automatic detection of spurious attractors. This simulator is based on the MCP development board. Each such board can house four associative chips. The simulator is designed to transparently address chips not only inside the machine in which it resides, but also chips in other machines. All the virtual interconnections between chips are made at the neuron level, which means that the individual components of binary vectors processed by each chip can be routed to the input or from the output of any other chip. Simulator scheduling allows sequentiality in information processing.
Cited by: 2