
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks: Latest Publications

Fault tolerance of lateral interaction networks
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170654
G. Bolt
An examination of the fault tolerance properties of lateral interaction networks is presented. The general concept of a soft problem is discussed along with the resulting implications for reliability. Fault injection experiments were performed using several input datasets with differing characteristics in conjunction with various combinations of network parameters. It was found that a high degree of tolerance to faults existed and that the reliability of operation degraded smoothly. This result was independent of the nature of the input dataset and, to a lesser extent, of the choice of network parameters.
Citations: 1
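The abstract above does not give the authors' experimental setup; as a rough illustration of fault injection in a network layer, here is a minimal sketch (all names hypothetical) that applies stuck-at-zero faults to a weight matrix and measures how far the output drifts from the healthy response:

```python
import random

def inject_faults(weights, fault_rate, rng):
    """Copy of the weight matrix with a random fraction of weights stuck at zero."""
    return [[0.0 if rng.random() < fault_rate else w for w in row]
            for row in weights]

def layer_output(weights, x):
    """Plain linear layer: y_i = sum_j w_ij * x_j."""
    return [sum(w * xj for w, xj in zip(row, x)) for row in weights]

rng = random.Random(0)
W = [[0.5, -0.2], [0.1, 0.9]]        # toy trained weights
x = [1.0, 1.0]
healthy = layer_output(W, x)
faulty = layer_output(inject_faults(W, 0.25, rng), x)
# A graceful-degradation study would sweep fault_rate and track this error.
drift = sum(abs(h - f) for h, f in zip(healthy, faulty))
```

Sweeping `fault_rate` from 0 to 1 and plotting `drift` is the usual way to check whether reliability degrades smoothly rather than catastrophically.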
Inherent structure detection by neural sequential associator
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170704
I. Matsuba
A sequential associator based on a feedback multilayer neural network is proposed to analyze inherent structures in a sequence generated by a nonlinear dynamical system and to predict a future sequence based on these structures. The network represents time correlations in the connection weights during learning. It is capable of detecting the inherent structure and explaining the behavior of systems. The structure of the neural sequential associator, inherent structure detection, and the optimal network size based on the use of an information criterion are discussed.
Citations: 0
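The paper's feedback multilayer network is not specified in the abstract; as a stand-in for the core idea (learning time correlations from a sequence in order to predict its next element), here is a deliberately simplified one-step linear predictor trained by LMS updates. It is not the authors' architecture, only a sketch of the prediction task:

```python
def train_predictor(seq, epochs=200, lr=0.1):
    """Fit a next-step model y[t+1] ~ a*y[t] + b by stochastic LMS updates."""
    a, b = 0.0, 0.0
    for _ in range(epochs):
        for t in range(len(seq) - 1):
            err = (a * seq[t] + b) - seq[t + 1]
            a -= lr * err * seq[t]
            b -= lr * err
    return a, b

seq = [0.1 * t for t in range(10)]   # toy sequence: y[t+1] = y[t] + 0.1
a, b = train_predictor(seq)
next_value = a * seq[-1] + b         # prediction for the unseen next element
```

The multilayer, nonlinear version in the paper plays the same role but can capture structure that a linear map cannot.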
Adjustment of the basin size in autoassociative memories by use of the BPTT technique
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170674
T. Hatanaka, Y. Nishikawa
An auto-associative memory is constructed in a recurrent network whose connection matrix is determined by use of backpropagation through time (BPTT). Through several computer simulations, basins of the memory generated by this method are compared with those generated by the conventional methods. In particular, the ability of the BPTT to adjust the basin size is investigated in detail.
Citations: 0
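BPTT itself is beyond a short sketch, but the quantity being adjusted, the basin of attraction of a stored pattern, can be illustrated with one of the conventional methods the paper compares against: a Hebbian autoassociative memory, exhaustively counting which states of the full state space converge to the pattern (hypothetical toy sizes):

```python
import itertools

def hebbian_weights(patterns):
    """Conventional outer-product (Hebbian) connection matrix, zero diagonal."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=5):
    """Synchronous sign-threshold dynamics for a few steps."""
    n = len(state)
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

def basin_size(W, target):
    """Count the states of the full bipolar state space that converge to target."""
    n = len(target)
    return sum(recall(W, list(s)) == target
               for s in itertools.product([-1, 1], repeat=n))

p = [1, -1, 1, -1]
W = hebbian_weights([p])
size = basin_size(W, p)   # the pattern plus its one-bit-flipped neighbors
```

Training the connection matrix with BPTT, as in the paper, replaces the fixed Hebbian rule so that `basin_size` can be steered toward a desired value.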
Synaptic and somatic learning and adaptation in fuzzy neural systems
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170510
M. Gupta, J. Qi
An attempt is made to establish some basic models for fuzzy neurons. Three types of fuzzy neural models are proposed. The neuron I is described by logical equations or if-then rules; its inputs are either fuzzy sets or crisp values. The neuron II, with numerical inputs, and the neuron III, with fuzzy inputs, are considered to be a simple extension of nonfuzzy neurons. A few methods of how these neurons change themselves during learning to improve their performance are also given. The notion of synaptic and somatic learning and adaptation is also introduced, which seems to be a powerful approach for developing a new class of fuzzy neural networks. Such an approach may have application in the processing of fuzzy information and the design of expert systems with learning and adaptation abilities.
Citations: 0
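The three neuron models are only described qualitatively in the abstract. A common max-min formulation, offered here as a hedged sketch of a logic-rule fuzzy neuron (not necessarily the authors' exact equations), replaces the usual sum and product with fuzzy OR (max) and fuzzy AND (min) over membership values in [0, 1]:

```python
def fuzzy_and(*xs):
    return min(xs)

def fuzzy_or(*xs):
    return max(xs)

def fuzzy_neuron(inputs, weights):
    """Max-min fuzzy neuron: OR over the AND of each weight/input pair.
    Inputs and weights are membership degrees in [0, 1]."""
    return fuzzy_or(*[fuzzy_and(w, x) for w, x in zip(weights, inputs)])

y = fuzzy_neuron([0.8, 0.3], [0.6, 0.9])
# min(0.6, 0.8) = 0.6 and min(0.9, 0.3) = 0.3, so the OR gives 0.6
```

Synaptic learning would adjust the `weights`, while somatic adaptation would modify the neuron's aggregation function itself.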
Efficient question answering in a hybrid system
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170447
J. Diederich, D. Long
A connectionist model for answering open-class questions in the context of text processing is presented. The system answers questions from different question categories, such as how, why, and consequence questions. The system responds to a question by generating a set of possible answers that are weighted according to their plausibility. Search is performed by means of a massively parallel directed spreading activation process. The search process operates on several knowledge sources (i.e., connectionist networks) that are learned or explicitly built-in. Spreading activation involves the use of signature messages, which are numeric values that are propagated throughout the networks and identify a particular question category (this makes the system hybrid). Binder units that gate the flow of activation between textual units receive these signatures and change their states. That is, the binder units either block the spread of activation or allow the flow of activation in a certain direction. The process results in a pattern of activation that represents a set of candidate answers based on available knowledge sources.
Citations: 5
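The core search mechanism, directed spreading activation over a network of knowledge units, can be sketched in a few lines. This is a generic illustration, not the paper's system: the graph, decay factor, and node names are hypothetical, and the signature-message gating by binder units is omitted for brevity:

```python
def spread_activation(graph, sources, decay=0.5, steps=3):
    """Propagate activation from source nodes over a weighted directed graph.
    graph maps each node to a list of (neighbor, weight) edges."""
    act = {node: 0.0 for node in graph}
    for s in sources:
        act[s] = 1.0
    for _ in range(steps):
        new = dict(act)
        for node, edges in graph.items():
            for nbr, w in edges:
                new[nbr] += decay * w * act[node]   # decayed flow along the edge
        act = new
    return act

# Toy chain: a question node activates a cause, which activates an answer.
graph = {"question": [("cause", 1.0)],
         "cause": [("answer", 1.0)],
         "answer": []}
act = spread_activation(graph, ["question"])
```

In the paper's hybrid scheme, binder units would inspect the question-category signature riding on the activation and block or redirect these flows.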
A cognitive framework for hybrid systems
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170449
J. Wallace, K. Bluff
The authors explore the potential of a specific cognitive architecture to provide the relational mechanism needed to capitalize on the respective strengths of symbolic and nonsymbolic modes of representation, and on the benefits of their interaction in achieving machine intelligence. This architecture is strongly influenced by the BAIRN system of I. Wallace et al. (1987), which provides a general theory of human cognition with a particular emphasis on the function of learning. This cognitive architecture is being used in a generic approach to the aspects of human performance designated by the term situation awareness.
Citations: 1
The handling of don't care attributes
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170539
Hahn-Ming Lee, Ching-Chi Hsu
A critical factor that affects the performance of neural network training algorithms and the generalization of trained networks is the set of training instances. The authors consider the handling of don't care attributes in training instances. Several approaches are discussed and their experimental results are presented. The following approaches are considered: (1) replace don't care attributes with a fixed value; (2) replace don't care attributes with their maximum or minimum encoded values; (3) replace don't care attributes with their maximum and minimum encoded values; and (4) replace don't care attributes with all their possible encoded values.
Citations: 7
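The four replacement strategies enumerated in the abstract are mechanical enough to sketch directly. In this illustration (the encoding and helper names are assumptions, with attributes encoded in [0, 1] and `None` marking a don't care), strategies (3) and (4) expand one instance into several concrete training instances:

```python
import itertools

DONT_CARE = None

def expand(instance, strategy, values=(0.0, 1.0), fixed=0.5):
    """Expand a training instance under one of the four don't-care strategies."""
    if strategy == "fixed":
        choices = [fixed]                         # (1) a fixed value
    elif strategy == "min":
        choices = [min(values)]                   # (2) min (or max) encoded value
    elif strategy == "minmax":
        choices = [min(values), max(values)]      # (3) both extremes
    elif strategy == "all":
        choices = list(values)                    # (4) every possible encoded value
    else:
        raise ValueError(strategy)
    slots = [choices if v is DONT_CARE else [v] for v in instance]
    return [list(combo) for combo in itertools.product(*slots)]

pair = expand([0.2, DONT_CARE, 0.7], "minmax")
# two concrete instances, one per extreme of the missing attribute
```

Strategy (4) can blow up combinatorially when several attributes are missing, which is presumably why the paper compares the cheaper alternatives experimentally.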
Applications of the pRAM
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170348
T. Clarkson, D. Gorse, Y. Guan, J.G. Taylor
The probabilistic RAM (pRAM) neuron is highly nonlinear and stochastic, and it is hardware-realizable. The following applications of the pRAM are discussed: the processing of half-tone images, the generation of topological maps, the storage of temporal sequences, and the recognition of regular grammars.
Citations: 2
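The defining feature of a pRAM neuron is a RAM-style lookup table indexed by the binary input pattern, where each location stores a firing probability rather than a fixed bit. A minimal software sketch of that idea (class and method names are this sketch's, not the hardware's):

```python
import random

class PRAM:
    """Probabilistic RAM neuron sketch: each binary input address stores a
    firing probability; the output is a stochastic binary value."""
    def __init__(self, n_inputs, rng):
        self.probs = [0.5] * (2 ** n_inputs)   # start every location undecided
        self.rng = rng

    def address(self, bits):
        a = 0
        for b in bits:
            a = (a << 1) | b
        return a

    def fire(self, bits):
        return 1 if self.rng.random() < self.probs[self.address(bits)] else 0

rng = random.Random(42)
p = PRAM(2, rng)
p.probs[p.address([1, 0])] = 1.0   # deterministic locations at the extremes
p.probs[p.address([0, 1])] = 0.0
out_hi = p.fire([1, 0])            # fires with probability 1
out_lo = p.fire([0, 1])            # fires with probability 0
```

Intermediate stored probabilities give the highly nonlinear, stochastic behavior the abstract refers to; learning rules adjust the stored probabilities location by location.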
Optimally generalizing neural networks
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170648
H. Ogawa, E. Oja
The problem of approximating a real function f of L variables, given only in terms of its values y_1, ..., y_M at a small set of sample points x_1, ..., x_M in R^L, is studied in the context of multilayer neural networks. Using the theory of reproducing kernels of Hilbert spaces, it is shown that this problem is the inverse of a linear model relating the values y_m to the function f itself. The authors consider the least-mean-square training criterion for nonlinear multilayer neural network architectures that learn the training set completely. The generalization property of a neural network is defined in terms of function reconstruction and the concept of the optimally generalizing neural network (OGNN) is proposed. It is a network that minimizes a criterion given in terms of the true error between the original function f and the reconstruction f_1 in the function space, instead of minimizing the error at the sample points only. As an example of the OGNN, a projection filter (PF) criterion is considered and the PFGNN is introduced. The network is of the two-layer nonlinear-linear type.
Citations: 5
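The inverse-problem view can be made concrete in a tiny finite-dimensional setting. This is not the paper's reproducing-kernel machinery, only an analogy under stated assumptions: take the function space to be span{1, x, x^2}, so sampling f at the points x_m is a linear map A from coefficients to sample values, and reconstructing f_1 from the samples amounts to applying the Moore-Penrose pseudoinverse of A:

```python
import numpy as np

def reconstruct(xs, ys, degree=2):
    """Minimum-norm least-squares reconstruction of a polynomial from samples.
    A maps coefficients (c0, c1, c2) to sample values; pinv(A) inverts it."""
    A = np.vander(xs, degree + 1, increasing=True)   # columns: 1, x, x^2
    return np.linalg.pinv(A) @ ys                    # coefficients of f_1

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = xs ** 2                    # samples of f(x) = x^2
c = reconstruct(xs, ys)         # recovers coefficients close to (0, 0, 1)
```

The PF criterion in the paper plays an analogous role in an infinite-dimensional function space, where the error is measured against f itself rather than only at the sample points.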
Optical inner-product implementations for multi-layer BAM with 2-dimensional patterns
Pub Date: 1991-11-18 DOI: 10.1109/IJCNN.1991.170675
Hyuek-Jae Lee, Soo-Young Lee, C. Park, S. Shin
The authors present an optical inner-product architecture for MBAM (multi-layer bidirectional associative memory) with two-dimensional input and output patterns. The proposed architecture utilizes compact solid modules for single-layer feedforward networks, which may be cascaded for MBAM. Instead of analog interconnection weights, the inner-product scheme stores input and output patterns. For binary input and output patterns this inner-product scheme requires binary spatial light modulators only, and is scalable to very large-size implementations. Unlike optical neural networks for one-dimensional patterns, multifocus holograms and lenslet arrays become essential components in these modules. The performance of the MBAM was demonstrated by an electrooptic inner-product implementation for the exclusive-OR problem.
Citations: 0
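The key trick, storing patterns instead of an explicit weight matrix, follows from the standard BAM recall y = sgn(W x) with W = sum_k y_k x_k^T, which can be rewritten as y = sgn(sum_k (x . x_k) y_k). A software sketch of that inner-product form for bipolar patterns (toy patterns; the optics implement the inner products in hardware):

```python
def bam_recall(x, stored_x, stored_y):
    """Inner-product BAM recall: y = sgn(sum_k (x . x_k) y_k).
    No weight matrix is formed; only the stored pattern pairs are used."""
    n_out = len(stored_y[0])
    acc = [0] * n_out
    for xk, yk in zip(stored_x, stored_y):
        s = sum(a * b for a, b in zip(x, xk))   # inner product x . x_k
        for i in range(n_out):
            acc[i] += s * yk[i]
    return [1 if a >= 0 else -1 for a in acc]

X = [[1, -1, 1], [-1, 1, 1]]    # stored input patterns (bipolar)
Y = [[1, 1], [-1, 1]]           # associated output patterns
y = bam_recall([1, -1, 1], X, Y)   # probe with the first stored input
```

Because the inner products dominate the cost and reduce to bit-level correlations for binary patterns, binary spatial light modulators suffice in the optical implementation.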