
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks: Latest Publications

Using the Kohonen topology preserving mapping network for learning the minimal environment representation
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.226979
S. Najand, Z. Lo, B. Bavarian
The authors present the application of the Kohonen self-organizing topology-preserving neural network for learning and developing a minimal representation of the open environment in mobile robot navigation. The input to the algorithm consists of the coordinates of randomly selected points in the open environment. No specific knowledge of the size, number, and shape of the obstacles is needed by the network. The parameter selection for the network is discussed. The neighborhood function, adaptation gain, and the number of training sample points have a direct effect on the convergence and usefulness of the final representation. The environment dimensions and a measure of environment complexity are used to find approximate bounds and requirements on these parameters.
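The environment-learning scheme above can be sketched with a minimal self-organizing map: units on a 2-D grid are pulled toward randomly sampled free-space points under a shrinking neighborhood and gain. The grid size, Gaussian neighborhood, and linear decay schedules here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def train_som(points, grid=(8, 8), iters=2000, gain0=0.5, sigma0=3.0, seed=0):
    """Fit a 2-D grid of units to 2-D sample points (minimal SOM sketch)."""
    rng = np.random.default_rng(seed)
    # initialize unit weights inside the bounding box of the data
    w = rng.uniform(points.min(0), points.max(0), size=(grid[0], grid[1], 2))
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    for t in range(iters):
        x = points[rng.integers(len(points))]       # random free-space point
        d = ((w - x) ** 2).sum(axis=2)
        by, bx = np.unravel_index(d.argmin(), d.shape)  # best-matching unit
        frac = t / iters
        gain = gain0 * (1 - frac)                   # decaying adaptation gain
        sigma = sigma0 * (1 - frac) + 0.5           # shrinking neighborhood
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
        w += gain * h[:, :, None] * (x - w)         # neighborhood update
    return w
```

Each update is a convex pull of nearby units toward the sampled point, so the grid spreads out to cover the sampled open space.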
Citations: 2
Feature maps for input normalization and feature integration in a speaker independent isolated digit recognition system
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227096
G.R. De Haan, O. Ececioglu
The use of the topology preserving properties of feature maps for speaker-independent isolated digit recognition is discussed. The results of recognition experiments indicate that feature maps can be effectively used for input normalization, which is important for practical implementations of neural-network-based classifiers. Recognition rates can be increased when a third feature map is trained to integrate the responses of two feature maps, each trained with different transducer-level features. Despite the use of a rudimentary classification scheme, recognition rates exceeded 97% for integrated, feature-map-normalized, transducer-level features.
Citations: 3
Nonlinear system identification using diagonal recurrent neural networks
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.227048
C. Ku, K.Y. Lee
A recurrent neural network is proposed for the system identification of nonlinear dynamic systems. When system identification is coupled with control problems, real-time performance is very important, and a neuro-identifier must be designed so that it converges and its training time is not too long. The neural network should also be simple and easy to implement. A novel neuro-identifier, the diagonal recurrent neural network (DRNN), that fulfils these requirements is proposed. A generalized algorithm, dynamic backpropagation, is developed to train the DRNN. The DRNN was used to identify nonlinear systems, and simulations showed promising results.
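A minimal sketch of the diagonal recurrent forward pass, assuming "diagonal" means each hidden unit feeds back only to itself, so the recurrent weight matrix reduces to a vector (the training algorithm, dynamic backpropagation, is not reproduced here):

```python
import numpy as np

class DRNN:
    """Diagonal recurrent network: one self-recurrent weight per hidden unit."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.5, (n_hidden, n_in))
        self.w_rec = rng.normal(0, 0.5, n_hidden)   # diagonal recurrent weights
        self.W_out = rng.normal(0, 0.5, n_hidden)

    def forward(self, inputs):
        h = np.zeros(len(self.w_rec))
        outputs = []
        for x in inputs:
            # each unit sees only its own previous activation, not the others'
            h = np.tanh(self.W_in @ x + self.w_rec * h)
            outputs.append(self.W_out @ h)
        return np.array(outputs)
```

The diagonal restriction keeps the recurrent parameter count linear in the number of hidden units, which is what makes the structure simple to implement and fast to train.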
Citations: 15
ANN bandpass filters for electro-optical implementation
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.287215
M. E. Ulug
The design and simulation of a bandpass filter are described, and an electro-optical implementation is proposed. The neural network used in this filter has an architecture similar to the one suggested by Kolmogorov's existence theorem and a data processing method based on Fourier series. The resulting system, called the orthonormal neural network, can approximate any L_2 mapping function between the input and output vectors without using the backpropagation rule or hidden layers. Because the transfer functions of the middle nodes are the terms of the Fourier series, the synaptic link values between the middle and output layers represent the frequency spectrum of the signals of the output nodes. As a result, by autoassociatively training the network with all the middle nodes and testing it with certain selected ones, it is easy to build a nonlinear bandpass filter. The system is basically a two-layer network consisting of virtual input nodes and output nodes. The transfer functions of the output nodes are linear. As a result, the network is free from the problems of local minima and has a bowl-shaped error surface. The sharp slopes of this surface make the system tolerant to loss of computational accuracy and suitable for electro-optical implementation.
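The bowl-shaped error surface claimed above follows because, with Fourier-series transfer functions in the middle layer and linear output nodes, training reduces to a linear least-squares fit of Fourier coefficients. A sketch of that reduction on a hypothetical square-wave target (the electro-optical details are not modeled):

```python
import numpy as np

def fourier_features(x, n_terms):
    """Middle-layer activations: constant, cos(kx), sin(kx) up to n_terms."""
    cols = [np.ones_like(x)]
    for k in range(1, n_terms + 1):
        cols += [np.cos(k * x), np.sin(k * x)]
    return np.stack(cols, axis=1)

rng = np.random.default_rng(0)
x = rng.uniform(0, 2 * np.pi, 200)
y = np.sign(np.sin(x))                     # hypothetical square-wave target
Phi = fourier_features(x, 10)
# linear output layer -> convex quadratic error -> closed-form least squares
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

Because the error is quadratic in `coef`, there is a single global minimum and no local minima, matching the abstract's claim.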
Citations: 4
Fuzzy neural-logic system
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.287128
L. Hsu, H. H. Teh, P. Wang, S. Chan, K. Loe
A realization of fuzzy logic by a neural network is described. Each node in the network represents a premise or a conclusion. Let x be a member of the universal set, and let A be a node in the network. The value of activation of node A is taken to be the value of the membership function at point x, m_A(x). A logical operation is defined by a set of weights which are independent of x. Given any value of x, a preprocessor will determine the values of the membership function for all the premises that correspond to the input nodes. These are treated as input to the network. A propagation algorithm is used to emulate the inference process. When the network stabilizes, the value of activation at an output node represents the value of the membership function that indicates the degree to which the given conclusion is true. Weight assignment for the standard logical operations is discussed. It is also shown that the scheme makes it possible to define more general logical operations.
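The node activations described above can be illustrated with a hypothetical triangular membership function; taking min/max as fuzzy AND/OR is one common choice for the standard operations, not the paper's weight-based definitions:

```python
def triangular(x, a, b, c):
    """Membership m_A(x): rises from a, peaks at b, falls to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# a premise node's activation is m_A(x); conclusion nodes combine activations
def fuzzy_and(*m):
    return min(m)

def fuzzy_or(*m):
    return max(m)
```

For example, with premises "x is near 5" and "x is near 8", the conclusion node's activation is `fuzzy_and(triangular(x, 0, 5, 10), triangular(x, 3, 8, 13))`, a degree of truth in [0, 1].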
Citations: 3
Lateral inhibition neural networks for classification of simulated radar imagery
Pub Date : 1992-06-07 DOI: 10.1109/ijcnn.1992.226975
C. Bachmann, S. Musman, A. Schultz
The use of neural networks for the classification of simulated inverse synthetic aperture radar (ISAR) imagery is investigated. Certain symmetries of the artificial imagery make the use of localized moments a convenient preprocessing tool for the inputs to a neural network. A database of simulated targets is obtained by warping dynamical models to representative angles and generating images with different target motions. Ordinary backward propagation (BP) and some variants of BP which incorporate lateral inhibition obtain a generalization rate of up to approximately 78% for novel data not used during training, a rate which is comparable to the level of classification accuracy that trained human observers obtained from the unprocessed simulated imagery.
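The moment preprocessing mentioned above can be sketched with ordinary central moments of an image patch; the paper's exact "localized moments" definition is not reproduced, so this is an assumed stand-in:

```python
import numpy as np

def central_moments(img, max_order=2):
    """Central moments mu_pq of a 2-D intensity image, p+q orders mixed."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    cx, cy = (xs * img).sum() / m00, (ys * img).sum() / m00  # centroid
    feats = []
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            feats.append((((xs - cx) ** p) * ((ys - cy) ** q) * img).sum())
    return np.array(feats)
```

Centering at the centroid makes the features translation-invariant, which is why moments are a natural compact input for a classifier of targets imaged at varying positions.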
Citations: 14
The synthesis of arbitrary stable dynamics in non-linear neural networks. II. Feedback and universality
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.287224
M. A. Cohen
A parameterized family of higher-order, gradient-like neural networks that have known arbitrary equilibria with unstable manifolds of known specified dimension is described. Any system with hyperbolic dynamics is conjugate to one of the systems in a neighborhood of the equilibrium points. Prior work on how to synthesize attractors using dynamical systems theory, optimization, or direct parametric fits to known stable systems is nonconstructive, lacks generality, or has unspecified attracting equilibria. More specifically, a parameterized family of gradient-like neural networks is constructed with a simple feedback rule that will generate equilibrium points with a set of unstable manifolds of specified dimension. Strict Lyapunov functions and nested periodic orbits are obtained for these systems and used as a method of synthesis to generate a large family of systems with the same local dynamics. This work is applied to show how one can interpolate finite sets of data on nested periodic orbits.
Citations: 1
Activated hidden connections to accelerate the learning in recurrent neural networks
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.287106
R. Kamimura
A method of accelerating the learning in recurrent neural networks is considered. Owing to their potentially large number of connections, recurrent neural networks have been expected to converge faster. To activate hidden connections and use hidden units efficiently, a complexity term proposed by D.E. Rumelhart is added to the standard quadratic error function. The complexity term method is modified with a parameter so that it acts mainly on positive values, while negative values are pushed toward larger absolute values. Thus, some hidden connections are expected to become large enough to make use of the hidden units and speed up the learning. The author's experiments confirmed that the complexity term was effective in increasing the variance of connections, especially hidden connections, and that eventually some hidden connections were activated and became large enough for the hidden units to be used in speeding up the learning.
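Rumelhart's complexity term referred to above is commonly written as the sum over weights of w^2/(1 + w^2); a sketch of the term and its gradient follows (the sign-dependent modification described in the abstract is not reproduced, only the standard form):

```python
import numpy as np

def complexity(w):
    """Rumelhart's complexity penalty: saturates near 1 for large |w|."""
    return np.sum(w**2 / (1 + w**2))

def complexity_grad(w):
    # d/dw [w^2 / (1 + w^2)] = 2w / (1 + w^2)^2
    return 2 * w / (1 + w**2) ** 2
```

Because the penalty saturates, it shrinks small weights strongly while leaving large weights nearly untouched, so useful hidden connections can still grow, which matches the effect described in the abstract.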
Citations: 2
Discrete wave machine and Fourier transform
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.287187
L. Chang
In the biological neural network, the interaction and communication among neurons can be thought of as a kind of wave correlation, which is the basic idea of the discrete wave machine. A discrete wave machine is described by a complex state space. An energy function of a discrete wave machine with the Hermitian connection determines the convergence of the state evolution and the points of memory. The discrete Fourier transform is directly described by a discrete wave machine with a special connection.
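The claim that the DFT is realized by a single complex-weighted connection can be checked directly: the connection matrix W[j,k] = exp(-2*pi*i*j*k/N) applied to an input vector reproduces the standard DFT.

```python
import numpy as np

def dft_matrix(n):
    """Complex connection matrix whose action on a state is the DFT."""
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-2j * np.pi * j * k / n)

# one matrix-vector product equals numpy's FFT of the same input
x = np.random.default_rng(0).random(8)
assert np.allclose(dft_matrix(8) @ x, np.fft.fft(x))
```

The matrix is symmetric and, up to a 1/sqrt(n) scale, unitary, which is consistent with describing it as a special fixed connection rather than a learned one.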
Citations: 0
A line and edge orientation sensor
Pub Date : 1992-06-07 DOI: 10.1109/IJCNN.1992.287141
W. O. Camp, J. van der Spiegel, M. Xiao
The authors show an integrated circuit implementation of a higher-order vision function, that of determining the orientation of line segments and edges across an image projected onto the chip. The IC includes an array of photoreceptors and analog processing elements consisting of weights arranged in a network. A primary objective of the implementation was a compact and simple design, as it would be a prelude to including even higher levels of visual processing on the chip in the kerf areas between many such arrays.
Citations: 1