
Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02. (Latest Publications)

On discrete N-layer heteroassociative memory models
R. Waivio
In this paper we investigate computational properties of a new N-layer heteroassociative memory model with respect to information encoding. We describe a technique for encoding a set of m×n matrix patterns where entering one column (row) of a pattern allows the remaining columns (rows) to be recurrently reconstructed. The main contributions of this paper are the following: - We show how to transform any given set of patterns to a standard form using a simple procedure. Then we demonstrate that, after a competitive initialization among all layers, our multilayer network converges in one step to fixed points, each of which is one of the given patterns in its standard form. Due to an increase in the domain of attraction, our architecture becomes more powerful than the previous models. - We analyze the optimal number of layers, as well as their dimensions, based on the cardinality of maximal linearly independent subspaces of the input patterns. - We prove that our proposed model is stable under mild technical assumptions using the discrete Lyapunov energy function.
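As a rough illustration of the recurrent column/row reconstruction described above, here is a minimal sketch of a classical two-layer bidirectional heteroassociative memory with Hebbian correlation weights; it is not the paper's N-layer model, its standard-form transformation, or its competitive initialization, and the toy patterns and function names are ours.

```python
import numpy as np

def train_bam(x_patterns, y_patterns):
    """Correlation (Hebbian) weights for a two-layer heteroassociative memory."""
    return sum(np.outer(y, x) for x, y in zip(x_patterns, y_patterns))

def recall(W, x, steps=10):
    """Recurrently reconstruct the associated pattern pair from one cue."""
    x = np.sign(x).astype(int)
    for _ in range(steps):
        y = np.sign(W @ x)            # forward pass: x-layer -> y-layer
        x_new = np.sign(W.T @ y)      # backward pass: y-layer -> x-layer
        if np.array_equal(x_new, x):  # fixed point reached
            break
        x = x_new
    return x, y

# toy bipolar pattern pairs (stand-ins for columns/rows of stored matrix patterns)
xs = [np.array([1, -1, 1, -1]), np.array([-1, -1, 1, 1])]
ys = [np.array([1, 1, -1]), np.array([-1, 1, 1])]
W = train_bam(xs, ys)
print(recall(W, xs[0]))
```

Recall alternates between the two layers until a fixed point is reached, the same kind of convergence behavior the paper analyzes with a discrete Lyapunov energy function.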
{"title":"On discrete N-layer heteroassociative memory models","authors":"R. Waivio","doi":"10.1109/ICONIP.2002.1202131","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1202131","url":null,"abstract":"In this paper we investigate computational properties of a new N-layer heteroassociative memory model with respect to information encoding. We describe a technique for encoding a set of m/spl times/n matrix patterns where entering one column (row) of a pattern allows the remaining columns (rows) to be recurrently reconstructed. Following are some of the main contributions of this paper: - We show how to transform any given set of patterns to a standard form using a simple procedure. Then we demonstrate that after a competitive initialization among all layers our multilayer network converges in one step to fixed points which are one of the given patterns in its standard form. Due to an increase in the domain of attraction, our architecture becomes more powerful than the previous models. - We analyze the optimal number of layers, as well as their dimensions, based on the cardinality of maximal linearly independent subspaces of the input patterns. - We prove that our proposed model is stable under mild technical assumptions using the discrete Lyapunov energy function.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125405296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Face and non-face classification by multinomial logit model and kernel feature compound vectors
S. Hasegawa, T. Kurita
This paper introduces a method for face and non-face classification. The method is based on the combined use of the multinomial logit model (MLM) and "kernel feature compound vectors". The MLM is a neural network model for multi-class pattern classification, and is expected to equal or exceed linear classification methods in classification performance. The "kernel feature compound vectors" are compound feature vectors of geometric image features and kernel features. Evaluation and comparison experiments were conducted using face and non-face images (face: 100 training, 300 cross-validation, 325 test; non-face: 200 training, 1000 cross-validation, 1000 test) gathered from the available face databases and other sources. The experimental results obtained by the proposed method were the best compared with the results of Support Vector Machines (SVM) and Kernel Fisher Discriminant Analysis (KFDA).
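To make the idea of compound vectors concrete, here is a minimal sketch, assuming synthetic stand-in data: raw feature vectors are concatenated with kernel (RBF similarity) features computed against a few reference samples, and the result is fed to a multinomial logit (softmax) classifier via scikit-learn. The anchor selection, gamma value, and data are our assumptions, not the authors' exact construction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
# synthetic stand-ins: 40-dim "image" vectors for face (1) and non-face (0) classes
X = np.vstack([rng.normal(0.5, 1.0, (100, 40)), rng.normal(-0.5, 1.0, (100, 40))])
y = np.array([1] * 100 + [0] * 100)

# kernel features: RBF similarities to a small set of reference (anchor) samples
anchors = X[rng.choice(len(X), 10, replace=False)]
kernel_feats = rbf_kernel(X, anchors, gamma=0.05)

# compound vector = raw/geometric features concatenated with kernel features
X_compound = np.hstack([X, kernel_feats])

# multinomial logit (softmax regression) on the compound vectors
clf = LogisticRegression(max_iter=1000).fit(X_compound, y)
print("training accuracy:", clf.score(X_compound, y))
```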
{"title":"Face and non-face classification by multinomial logit model and kernel feature compound vectors","authors":"S. Hasegawa, T. Kurita","doi":"10.1109/ICONIP.2002.1198210","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1198210","url":null,"abstract":"This paper introduces a method for face and non-face classification. The method is based on the combined use of the multinomial logit model (MLM) and \"kernel feature compound vectors\". The NMM is one of the neural network models for multi-class pattern classification, and is supposed to be equal or better in classification performance than linear classification methods. The \"Kernel Feature Compound Vectors\" are compound feature vectors of geometric image features and Kernel features. Evaluation and comparison experiments were conducted by using face and non-ace images (Face training 100, cross-validation 300, test 325, Non-face training 200, cross-validation 1000, test 1000) gathered from the available face databases and others. The experimental result obtained by the proposed method was the best compared with the results by the Support Vector Machines (SVM) and the Kernel Fisher Discriminant Analysis (KFDA).","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125426619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
The hybrid method for determining an adaptive step size of the unknown system identification using genetic algorithm and LMS algorithm
H. Kim, T. Lee, D. Lim, D. Jung
We describe the application of a genetic algorithm (GA) to parameter optimization for an adaptive finite impulse response (FIR) filter, combining the GA with the least mean square (LMS) algorithm. For the system identification problem, the LMS algorithm computes the filter coefficients and the GA adaptively searches for the optimal step size. Because the step size influences stability and performance, a method that can control it is necessary. The simulation results of the GA were compared with the traditional LMS algorithm. The genetic algorithm was clearly superior in accuracy in most cases.
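A minimal sketch of the hybrid idea, assuming a toy FIR identification setup: the LMS loop adapts the filter coefficients for a given step size, and a bare-bones genetic algorithm (truncation selection plus Gaussian mutation) searches over the step size using the resulting mean squared error as fitness. Signal lengths, population size, and mutation scale are our choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
true_h = np.array([0.4, -0.2, 0.7, 0.1])   # unknown FIR system to identify
x = rng.standard_normal(2000)              # input signal
d = np.convolve(x, true_h)[: len(x)] + 0.01 * rng.standard_normal(len(x))  # noisy output

def lms_mse(mu, taps=4):
    """Run LMS identification with step size mu and return the mean squared error."""
    w = np.zeros(taps)
    err2 = 0.0
    for n in range(taps, len(x)):
        u = x[n - taps + 1 : n + 1][::-1]  # most recent samples first
        e = d[n] - w @ u                   # a-priori error
        w += mu * e * u                    # LMS coefficient update
        err2 += e * e
    return err2 / (len(x) - taps)

# tiny genetic algorithm over the step size: truncation selection + Gaussian mutation
pop = rng.uniform(1e-4, 0.1, size=12)
for generation in range(15):
    fitness = np.array([lms_mse(mu) for mu in pop])
    parents = pop[np.argsort(fitness)[:4]]            # keep the 4 best step sizes
    pop = np.clip(np.repeat(parents, 3) + rng.normal(0, 0.01, 12), 1e-5, 0.1)

best_mu = pop[np.argmin([lms_mse(mu) for mu in pop])]
print("GA-selected step size:", best_mu)
```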
{"title":"The hybrid method for determining an adaptive step size of the unknown system identification using genetic algorithm and LMS algorithm","authors":"H. Kim, T. Lee, D. Lim, D. Jung","doi":"10.1109/ICONIP.2002.1198172","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1198172","url":null,"abstract":"We describe the application of a genetic algorithm (GA) to the problem of parameter optimization for an adaptive finite impulse response (FIR) filter combining genetic algorithm (GA) and least mean square (LMS) algorithm. For system identification problem, LMS algorithm computes the filter coefficients and GA search the optimal step-size adaptively. Because step-size influences on the stability and performance, so it is necessary to apply method that can control it. The simulation results of the GA were compared to the traditional LMS algorithm. We obtained that genetic algorithm was clearly superior (in accuracy) in most cases.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126226365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
An efficient learning algorithm for function approximation with radial basis function networks
Yen-Jen Oyang, Shien-Ching Hwang
This paper proposes a novel learning algorithm for constructing function approximators with radial basis function (RBF) networks. In comparison with existing learning algorithms, the proposed algorithm features lower time complexity for constructing the RBF network and is able to deliver the same level of accuracy. The time taken by the proposed algorithm to construct the RBF network is on the order of O(|S|), where S is the set of training samples. As far as the time complexity of predicting the function values of input vectors is concerned, the RBF network constructed with the proposed learning algorithm can complete the task in O(|T|), where T is the set of input vectors. Another important feature of the proposed learning algorithm is that the space complexity of the constructed RBF network is O(m|S|), where m is the dimension of the vector space in which the target function is defined.
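The paper's O(|S|) construction is not reproduced here, but the following sketch shows the generic RBF function-approximation pipeline it speeds up: pick centers, fix a width, and solve the output weights by linear least squares. The evenly spaced centers, width, and target function are illustrative assumptions of ours.

```python
import numpy as np

def fit_rbf(X, y, centers, sigma):
    """Solve the output weights of an RBF network by linear least squares."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * sigma**2))   # design matrix of basis-function activations
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_rbf(X, centers, sigma, w):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2)) @ w

# approximate f(x) = sin(3x) on [0, 2] from noisy samples
rng = np.random.default_rng(2)
X = rng.uniform(0, 2, (200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(200)
centers = np.linspace(0, 2, 15).reshape(-1, 1)   # evenly spaced centers
w = fit_rbf(X, y, centers, sigma=0.2)

X_test = np.linspace(0, 2, 5).reshape(-1, 1)
print(np.c_[np.sin(3 * X_test[:, 0]), predict_rbf(X_test, centers, 0.2, w)])
```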
{"title":"An efficient learning algorithm for function approximation with radial basis function networks","authors":"Yen-Jen Oyang, Shien-Ching Hwang","doi":"10.1109/ICONIP.2002.1198218","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1198218","url":null,"abstract":"This paper proposes a novel learning algorithm for constructing function approximators with radial basis function (RBF) networks. In comparison with the existing learning algorithms, the proposed algorithm features lower time complexity for constructing the RBF network and is able to deliver the same level of accuracy. The time taken by the proposed algorithm to construct the RBF network is in the order of O(|S|), where S is the set of training samples. As far as the time complexity for predicting the function values of input vectors is concerned, the RBF network constructed with the proposed learning algorithm can complete the task in O(|T|), where T is the set of input vectors. Another important feature of the proposed learning algorithm is that the space complexity of the RBF network constructed is O(m|S|), where m is the dimension of the vector space in which the target function is defined.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126226751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Coordination and synchronization of locomotion in a virtual robot
J. Teo, H. Abbass
This paper investigates the use of a multi-objective approach for evolving artificial neural networks that act as controllers for the legged locomotion of a 3-dimensional, artificial quadruped creature simulated in a physics-based environment. The Pareto-frontier Differential Evolution (PDE) algorithm is used to generate a Pareto optimal set of artificial neural networks that optimizes the conflicting objectives of maximizing locomotion behavior and minimizing neural network complexity. Here we provide an insight into how the controller generates the emergent walking behavior in the creature by analyzing the evolved artificial neural networks in operation. A comparison between Pareto optimal controllers showed that ANNs with varying numbers of hidden units resulted in noticeably different locomotion behaviors. We also found that a much higher level of sensory-motor coordination was present in the best evolved controller.
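The two conflicting objectives make the Pareto machinery easy to illustrate. The sketch below applies a plain non-dominated filter to hypothetical (locomotion distance, hidden-unit count) pairs; it is not the Pareto-frontier Differential Evolution algorithm itself, and the numbers are invented for illustration.

```python
import numpy as np

def pareto_front(distance, hidden_units):
    """Indices of controllers not dominated in (maximize distance, minimize complexity)."""
    keep = []
    for i in range(len(distance)):
        dominated = any(
            distance[j] >= distance[i] and hidden_units[j] <= hidden_units[i]
            and (distance[j] > distance[i] or hidden_units[j] < hidden_units[i])
            for j in range(len(distance))
        )
        if not dominated:
            keep.append(i)
    return keep

# hypothetical evolved controllers: locomotion distance vs. number of hidden units
distance = np.array([3.1, 4.0, 4.0, 2.5, 4.6, 3.9])
hidden = np.array([2, 4, 6, 1, 9, 3])
print("Pareto-optimal controllers:", pareto_front(distance, hidden))
```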
{"title":"Coordination and synchronization of locomotion in a virtual robot","authors":"J. Teo, H. Abbass","doi":"10.1109/ICONIP.2002.1199010","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1199010","url":null,"abstract":"This paper investigates the use of a multi-objective approach for evolving artificial neural networks that act as controllers for the legged locomotion of a 3-dimensional, artificial quadruped creature simulated in a physics-based environment. The Pareto-frontier Differential Evolution (PDE) algorithm is used to generate a Pareto optimal set of artificial neural networks that optimizes the conflicting objectives of maximizing locomotion behavior and minimizing neural network complexity. Here we provide an insight into how the controller generates the emergent walking behavior in the creature by analyzing the evolved artificial neural networks in operation. A comparison between Pareto optimal controllers showed that ANNs with varying numbers of hidden units resulted in noticeably different locomotion behaviors. We also found that a much higher level of sensory-motor coordination was present in the best evolved controller.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126101998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
Fuzzy neuro controller for a two-link rigid-flexible manipulator system
L. Tian, Zongyuan Mao
This paper deals with the tracking control problem of a manipulator system with unknown and changing dynamics. In this study, a fuzzy logic controller (FLC) in the feedback configuration is proposed, and an efficient dynamic recurrent neural network (DRNN) in the feedforward configuration is developed. The DRNN, which possesses the ability to approximate arbitrary nonlinear functions, is utilized to approximate the inverse dynamics of the robotic manipulator system. Based on the outputs of the FLC, parameter updating equations are derived for the adaptive DRNN model. The stability of the system is also analyzed. Finally, comparisons between fuzzy control and the proposed controller are carried out. The results demonstrate the remarkable performance of the proposed controller on the two-link flexible manipulator system.
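The feedback/feedforward split described above can be sketched on a toy first-order plant: a proportional term stands in for the FLC feedback, and the exact plant inverse stands in for the DRNN-approximated inverse dynamics used as feedforward. Plant parameters, gain, and trajectory are our assumptions; a real manipulator and the adaptive DRNN updates are well beyond this sketch.

```python
import numpy as np

# toy first-order plant x[k+1] = a*x[k] + b*u[k], a crude stand-in for manipulator dynamics
a, b = 0.9, 0.5
Kp = 1.0                                    # proportional gain standing in for the FLC feedback
k_steps = 200
xd = np.sin(0.05 * np.arange(k_steps + 1))  # desired trajectory

x = 0.0
errors = []
for k in range(k_steps):
    u_ff = (xd[k + 1] - a * xd[k]) / b      # feedforward from the (here exact) inverse dynamics
    u_fb = Kp * (xd[k] - x)                 # feedback correction on the tracking error
    x = a * x + b * (u_ff + u_fb)           # plant update
    errors.append(abs(xd[k + 1] - x))

print("final tracking error:", errors[-1])
```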
{"title":"Fuzzy neuro controller for a two-link rigid-flexible manipulator system","authors":"L. Tian, Zongyuan Mao","doi":"10.1109/ICONIP.2002.1198997","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1198997","url":null,"abstract":"This paper deals with the tracking control problem of a manipulator system with unknown and changing dynamics. In this study, a fuzzy logic controller (FLC) in the feedback configuration is proposed, and an efficient dynamic recurrent neural network (DRNN) in the feedforward configuration is developed. The DRNN, which possesses the ability of approaching arbitrary nonlinear function, is utilized to approximate the inverse dynamics of the robotic manipulator system. Based on the outputs of the FLC, parameter updating equations are derived for the adaptive DRNN model. The analysis of the stability of the system is also carried out. Finally, comparisons between fuzzy control and the proposed controller are carried out. The results demonstrate remarkable performance of the proposed controller for the two-link flexible manipulator system.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116204543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Locating support vectors via β-skeleton technique
Wan Zhang, Irwin King
Recently, the support vector machine (SVM) has become a very dynamic and popular topic in the neural network community for its ability to perform classification, estimation, and regression. One of the major tasks in the SVM algorithm is to locate the points, or rather the support vectors, from which the discriminant boundary is constructed in the classification task. In the process of studying methods for finding the decision boundary, we conceive a method, the β-skeleton algorithm, which reduces the size of the training set for the SVM. We describe their theoretical connections and practical implementation implications. In this paper, we also survey four different methods for classification: the SVM method, the k-nearest neighbor method, and each of these two methods combined with the β-skeleton algorithm. Compared with the methods that do not use the β-skeleton algorithm, prediction with the edited set obtained from the β-skeleton algorithm as the training set loses little accuracy while reducing the actual running time.
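One concrete way to realize the editing idea is the β = 1 case of the β-skeleton, the Gabriel graph: keep only training points that have at least one opposite-class Gabriel neighbor, since those are the points near the class boundary that can become support vectors. The following is a brute-force sketch under that assumption; the data and the O(n³) edge test are ours and not tuned for speed.

```python
import numpy as np

def gabriel_edge(X, i, j):
    """True if no third point lies inside the ball whose diameter is segment (i, j)."""
    d_ij = np.sum((X[i] - X[j]) ** 2)
    for k in range(len(X)):
        if k in (i, j):
            continue
        if np.sum((X[i] - X[k]) ** 2) + np.sum((X[j] - X[k]) ** 2) < d_ij:
            return False
    return True

def edit_training_set(X, y):
    """Keep points with at least one opposite-class Gabriel neighbor (candidate support vectors)."""
    keep = set()
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            if y[i] != y[j] and gabriel_edge(X, i, j):
                keep.update((i, j))
    return sorted(keep)

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(2.5, 1, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
edited = edit_training_set(X, y)
print(f"kept {len(edited)} of {len(X)} points for SVM training")
```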
{"title":"Locating support vectors via /spl beta/-skeleton technique","authors":"Wan Zhang, Irwin King","doi":"10.1109/ICONIP.2002.1202855","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1202855","url":null,"abstract":"Recently, support vector machine (SVM) has become a very dynamic and popular topic in the neural network community for its abilities to perform classification, estimation, and regression. One of the major tasks in the SVM algorithm is to locate the points, or rather support vectors, based on which we construct the discriminant boundary in classification task. In the process of studying the methods for finding the decision boundary, we conceive a method, /spl beta/-skeleton algorithm, which reduces the size of the training set for SVM. We describe their theoretical connections and practical implementation implications. In this paper, we also survey four different methods for classification: the SVM method, k-nearest neighbor method, /spl beta/-skeleton algorithm used in the above two methods. Compared with the methods without using /spl beta/-skeleton algorithm, prediction with the edited set obtained from /spl beta/-skeleton algorithm as the training set, does not lose the accuracy too much but reduces the real running time.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121169484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
On the separability of kernel functions
Tao Wu, Hangen He, D. Hu
How to select a kernel function for given data is an open problem in support vector machine (SVM) research. There is a question puzzling many people: supposing the training data are not linearly separable in the input space, how do we know that the chosen kernel function makes the training data linearly separable in the feature space? A simple method is presented to decide whether a selected kernel function can separate the given data linearly in the feature space.
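The paper's criterion is its own; as a practical stand-in, one can probe separability in the feature space by fitting a kernel SVM with a very large C (approximating a hard margin) and checking for zero training error, since a hard-margin solution exists exactly when the mapped classes are linearly separable. The sketch below applies this probe to XOR-like data; the kernels, C value, and expected outcomes are our assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def separable_in_feature_space(X, y, kernel="rbf", **kernel_params):
    """Heuristic probe: a near-hard-margin SVM reaches zero training error
    only if the kernel's feature-space images of the two classes are linearly separable."""
    clf = SVC(kernel=kernel, C=1e6, **kernel_params).fit(X, y)
    return clf.score(X, y) == 1.0

# XOR-like data: not linearly separable in the input space
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])
print("linear kernel:", separable_in_feature_space(X, y, kernel="linear"))        # expect False
print("RBF kernel:   ", separable_in_feature_space(X, y, kernel="rbf", gamma=1.0))  # expect True
```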
{"title":"On the separability of kernel functions","authors":"Tao Wu, Hangen He, D. Hu","doi":"10.1109/ICONIP.2002.1198220","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1198220","url":null,"abstract":"How to select a kernel function for the given data is an open problem in the research of support vector machine (SVM). There is a question puzzling many people: suppose the training data are separated nonlinearly in the input space, how do we know that the chosen kernel function can make the training data to be separated linearly in the feature space? A simple method is presented to decide if a selected kernel function can separate the given data linearly or not in the feature space.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116737542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Optical neural network based on the parametrical four-wave mixing process
L. Litinskii, B. Kryzhanovsky, A. Fonarev
In this paper we develop a formalism allowing us to describe the operation of a network based on the parametrical four-wave mixing process that is well known in nonlinear optics. The recognition power of a network using parametric neurons operating at q different frequencies is considered. It is shown that the storage capacity of such a network is higher than that of the Potts-glass models.
{"title":"Optical neural network based on the parametrical four-wave mixing process","authors":"L. Litinskii, B. Kryzhanovsky, A. Fonarev","doi":"10.1109/ICONIP.2002.1198966","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1198966","url":null,"abstract":"In this paper we develop a formalism allowing us to describe operating of a network based on the parametrical four-wave mixing process that is well-known in nonlinear optics. The recognition power of a network using parametric neurons operating with q different frequencies is considered. It is shown that the storage capacity of such a network is higher compared with the Potts-glass models.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121376468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Maximizing margins of multilayer neural networks
T. Nishikawa, S. Abe
According to the CARVE algorithm, any pattern classification problem can be synthesized in three layers without misclassification. In this paper, we propose to train multilayer neural network classifiers based on the CARVE algorithm. In hidden layer training, we find a hyperplane that separates a set of data belonging to one class from the remaining data. Then, we remove the separated data from the training data and repeat this procedure until only the data belonging to one class remain. In determining the hyperplane, we maximize margins heuristically so that data of one class lie on one side of the hyperplane. In output layer training, we determine the hyperplane by a quadratic optimization technique. The performance of this new algorithm is evaluated on several benchmark data sets.
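As a rough sketch of the sequential hidden-layer idea (not the CARVE algorithm or the authors' margin heuristic), the code below fits a linear SVM as the margin maximizer, shifts the resulting hyperplane until one side contains only target-class points, removes those points, and repeats. The stopping rules, C value, and data are our assumptions, and the procedure may stall on hard data, which the sketch simply accepts.

```python
import numpy as np
from sklearn.svm import LinearSVC

def carve_like_hyperplanes(X, y, target=1, max_planes=10):
    """Repeatedly fit a max-margin hyperplane, shift it so one side holds only
    target-class points, remove those points, and keep the (w, b) pairs."""
    X_cur, y_cur = X.copy(), y.copy()
    planes = []
    for _ in range(max_planes):
        if not np.any(y_cur == target):
            break                                        # target class fully carved off
        clf = LinearSVC(C=10.0, max_iter=20000).fit(X_cur, (y_cur == target).astype(int))
        f = X_cur @ clf.coef_[0] + clf.intercept_[0]     # signed distances (up to scale)
        threshold = f[y_cur != target].max()             # worst-scoring non-target point
        separated = (y_cur == target) & (f > threshold)  # pure target-class side
        if not separated.any():
            break                                        # sketch only: stall on hard data
        planes.append((clf.coef_[0], clf.intercept_[0] - threshold))
        X_cur, y_cur = X_cur[~separated], y_cur[~separated]
    return planes

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(f"carved the target class off with {len(carve_like_hyperplanes(X, y))} hyperplane(s)")
```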
{"title":"Maximizing margins of multilayer neural networks","authors":"T. Nishikawa, S. Abe","doi":"10.1109/ICONIP.2002.1202186","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1202186","url":null,"abstract":"According to the CARVE algorithm, any pattern classification problem can be synthesized in three layers without misclassification. In this paper, we propose to train multilayer neural network classifiers based on the CARVE algorithm. In hidden layer training, we find a hyperplane that separates a set of data belonging to one class from the remaining data. Then, we remove the separated data from the training data, and repeat this procedure until only the data belonging to one class remain. In determining the hyperplane, we maximize margins heuristically so that data of one class are on one side of the hyperplane. In output layer training, we determine the hyperplane by a quadratic optimization technique. The performance of this new algorithm is evaluated by some benchmark data sets.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123890696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4