Pub Date: 2013-08-01 | DOI: 10.1109/IJCNN.2013.6707136
Analog system modeling based on a double modified complex valued neural network
A. Luchetta, S. Manetti, M. C. Piccirilli
The aim of this work is to present a novel technique for the identification of lumped circuit models of general distributed apparatus and devices. It is based on the use of a double modified complex-valued neural network. The method is not restricted to a single class of electromagnetic systems; rather, it provides a procedure for the complete validation of the approximate lumped model and for the extraction of the electrical parameter values. The inputs of the system are the geometrical (and/or manufacturing) parameters of the considered structure, while the outputs are the lumped circuit parameters. The method follows the Frequency Response Analysis (FRA) approach in processing the data presented to the network.
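The abstract does not specify the network architecture, so the following is only a minimal sketch of the core ingredient: a complex-valued feedforward layer mapping geometrical parameters to lumped R/L/C estimates. The split tanh activation and all shapes are illustrative assumptions, not the authors' "double modified" design.

```python
# Minimal sketch (not the authors' implementation): a complex-valued
# feedforward layer mapping geometrical parameters to lumped R/L/C values.
# The "split" activation (tanh applied separately to real and imaginary
# parts) is one common choice in complex-valued networks; everything here
# is illustrative.
import numpy as np

rng = np.random.default_rng(0)

def split_tanh(z):
    # Activation applied independently to real and imaginary parts.
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

class ComplexLayer:
    def __init__(self, n_in, n_out):
        # Complex-valued weights and biases, small random initialization.
        self.W = (rng.standard_normal((n_out, n_in))
                  + 1j * rng.standard_normal((n_out, n_in))) * 0.1
        self.b = np.zeros(n_out, dtype=complex)

    def forward(self, z):
        return split_tanh(self.W @ z + self.b)

# Geometry (real-valued, cast to complex) -> hidden -> lumped parameters.
geometry = np.array([1.2e-3, 0.5e-3, 4.7], dtype=complex)  # e.g. width, gap, eps_r
hidden = ComplexLayer(3, 8).forward(geometry)
rlc = ComplexLayer(8, 3).forward(hidden)   # magnitudes read as R, L, C estimates
print(np.abs(rlc))
```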
{"title":"Analog system modeling based on a double modified complex valued neural network","authors":"A. Luchetta, S. Manetti, M. C. Piccirilli","doi":"10.1109/IJCNN.2013.6707136","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6707136","url":null,"abstract":"The aim of this work is to present a novel technique for the identification of lumped circuit models of general distributed apparatus and devices. It is based on the use of a double modified complex value neural network. The method is not oriented to a unique class of electromagnetic systems, but it gives a procedure for the complete validation of the approximated lumped model and the extraction of the electrical parameter values. The inputs of the system are the geometrical (and/or manufacturing) parameters of the considered structure, while the outputs are the lumped circuit parameters. The method follows the Frequency Response Analysis (FRA) approach for elaborating the data presented to the network.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134115546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-08-01 | DOI: 10.1109/IJCNN.2013.6706884
A study of transformation-invariances of deep belief networks
Zheng Shou, Yuhao Zhang, H. Cai
In order to learn transformation-invariant features, several effective deep architectures, such as hierarchical feature learning and variants of Deep Belief Networks (DBNs), have been proposed. Given the complexity of these variants, it is natural to ask whether the DBN itself possesses transformation invariances. We first test an unmodified DBN on the original data: nearly identical error rates are achieved if the weights in the bottom interlayer are changed according to the transformations occurring in the test data. This implies that the weights in the bottom interlayer can store the knowledge needed to handle transformations such as rotation, shifting, and scaling. Exploiting the continuous learning ability and storage capacity of the DBN, we present our Weight-Transformed Training Algorithm (WTTA), which adds no extra layers, units, or filters to the original DBN. Built on the original training method, WTTA transforms the weights and remains unsupervised. In MNIST handwritten-digit recognition experiments, we adopted a 784-100-100-100 DBN to compare recognition ability across weight-transformation ranges. Most error rates generated by WTTA were below 25%, while most rates generated by the original training algorithm exceeded 25%. We also ran an experiment on part of the MIT-CBCL face database, with varying illumination, where the best test accuracy achieved was 87.5%. Similar results can be achieved with datasets covering all kinds of transformations, but WTTA needs only the original training data, transforming the weights after each training loop. Consequently, WTTA can mine the inherent transformation invariances of a DBN, and the DBN itself can recognize transformed data at satisfactory error rates without additional components.
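The abstract leaves the exact transform schedule unspecified; the sketch below illustrates the kind of weight-transformation step WTTA applies after each training loop, treating each bottom-layer filter of the 784-100-100-100 DBN as a 28x28 image and rotating/shifting it. The transform set and magnitudes are assumptions.

```python
# Hedged sketch of a WTTA-style weight-transformation step (the paper's
# exact transforms and schedule are not given in the abstract). Each
# column of the bottom RBM weight matrix W (784 x n_hidden) is reshaped
# to a 28x28 filter, transformed, and written back.
import numpy as np
from scipy.ndimage import rotate, shift

def transform_bottom_weights(W, angle_deg=10.0, shift_px=(1, 0)):
    """W: (784, n_hidden) visible-to-hidden weights of the bottom RBM."""
    W_new = np.empty_like(W)
    for j in range(W.shape[1]):
        filt = W[:, j].reshape(28, 28)
        filt = rotate(filt, angle_deg, reshape=False, order=1)  # rotation
        filt = shift(filt, shift_px, order=1)                   # translation
        W_new[:, j] = filt.ravel()
    return W_new

# After each unsupervised training loop of the DBN one would call, e.g.:
# W_bottom = transform_bottom_weights(W_bottom, angle_deg=10.0)
```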
{"title":"A study of transformation-invariances of deep belief networks","authors":"Zheng Shou, Yuhao Zhang, H. Cai","doi":"10.1109/IJCNN.2013.6706884","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706884","url":null,"abstract":"In order to learn transformation-invariant features, several effective deep architectures like hierarchical feature learning and variant Deep Belief Networks (DBN) have been proposed. Considering the complexity of those variants, people are interested in whether DBN itself has transformation-invariances. First of all, we use original DBN to test original data. Almost same error rates will be achieved, if we change weights in the bottom interlayer according to transformations occurred in testing data. It implies that weights in the bottom interlayer can store the knowledge to handle transformations such as rotation, shifting, and scaling. Along with the continuous learning ability and good storage of DBN, we present our Weight-Transformed Training Algorithm (WTTA) without augmenting other layers, units or filters to original DBN. Based upon original training method, WTTA is aiming at transforming weights and is still unsupervised. For MNIST handwritten digits recognizing experiments, we adopted 784-100-100-100 DBN to compare the differences of recognizing ability in weights-transformed ranges. Most error rates generated by WTTA were below 25% while most rates generated by original training algorithm exceeded 25%. Then we also did an experiment on part of MIT-CBCL face database, with varying illumination, and the best testing accuracy can be achieved is 87.5%. Besides, similar results can be achieved by datasets covering all kinds of transformations, but WTTA only needs original training data and transform weights after each training loop. Consequently, we can mine inherent transformation-invariances of DBN by WTTA, and DBN itself can recognize transformed data at satisfying error rates without inserting other components.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134099051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-08-01 | DOI: 10.1109/IJCNN.2013.6706833
The simultaneous coding of heading and path in primate MSTd
Oliver W. Layton, N. A. Browning
The spatio-temporal displacement of luminance patterns in a 2D image is called optic flow. Present biologically inspired approaches to navigation that use optic flow largely focus on the problem of extracting the instantaneous direction of travel (heading) of a mobile agent. Computational models have demonstrated success in estimating heading in highly constrained environments in which the agent is assumed to travel along straight paths. However, drivers competently steer around curved road bends, and humans have been shown capable of judging their future, possibly curved, path of travel in addition to instantaneous heading. The computation of the general future path of travel, which need not be straight, is of interest for mobile robotics, autonomous driving, and path-planning applications, yet no biologically inspired neural network model exists that provides mechanisms through which the future path may be estimated. We present a biologically inspired recurrent neural network, based on brain area MSTd, that can dynamically code instantaneous heading and path simultaneously. We show that the model performs similarly to humans in judging heading and the curvature of the future path.
{"title":"The simultaneous coding of heading and path in primate MSTd","authors":"Oliver W. Layton, N. A. Browning","doi":"10.1109/IJCNN.2013.6706833","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706833","url":null,"abstract":"The spatio-temporal displacement of luminance patterns in a 2D image is called optic flow. Present biologically-inspired approaches to navigation that use optic flow largely focus on the problem of extracting the instantaneous direction of travel (heading) of a mobile agent. Computational models have demonstrated success in estimating heading in highly constrained environments whereby the agent is largely assumed to travel along straight paths. However, drivers competently steer around curved road bends and humans have been shown capable of judging their future, possibly curved, path of travel in addition to instantaneous heading. The computation of the general future path of travel, which need not be straight, is of interest to mobile robotic, autonomous vehicle driving, and path planning applications, yet no biologically-inspired neural network model exists that provides mechanisms through which the future path may be estimated. We present a biologically inspired recurrent neural network, based on brain area MSTd, that can dynamically code both instantaneous heading and path simultaneously. We show that the model performs similarly to humans in judging heading and the curvature of the future path.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"33 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131806248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-08-01 | DOI: 10.1109/IJCNN.2013.6706849
A topographical nonnegative matrix factorization algorithm
Nicoleta Rogovschi, Lazhar Labiod, M. Nadif
In this paper we explore TPNMF, a novel topological organization algorithm for data clustering and visualization. It produces a clustering of the data as well as a projection of the clusters onto a two-dimensional grid, while preserving the topological order of the initial data. The proposed algorithm is based on an NMF (Nonnegative Matrix Factorization) formalism with a neighborhood function that takes into account the topological order of the data. TPNMF was validated on various real datasets; the experimental results show good topological ordering and homogeneous clustering.
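As a rough illustration of the idea (the paper's exact objective and update rules are not reproduced here), the sketch below runs ordinary multiplicative NMF updates and smooths the coefficient matrix with a SOM-style Gaussian neighborhood defined on a 2D grid, one prototype per grid cell.

```python
# Illustrative topology-preserving NMF in the spirit of TPNMF (assumed
# formulation): standard Lee-Seung multiplicative updates, with the
# coefficient matrix H smoothed by a Gaussian neighborhood on a 2D map.
import numpy as np

rng = np.random.default_rng(1)

def grid_neighborhood(side, sigma=1.0):
    # Gaussian kernel between cells of a side x side map.
    coords = np.array([(i, j) for i in range(side) for j in range(side)])
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def tpnmf(X, side=3, n_iter=200, eps=1e-9):
    n, m = X.shape
    k = side * side                      # one prototype per grid cell
    W, H = rng.random((n, k)), rng.random((k, m))
    N = grid_neighborhood(side)
    for _ in range(n_iter):
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # standard NMF update
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        H = N @ H / N.sum(1, keepdims=True)    # neighborhood smoothing
    return W, H

X = rng.random((50, 40))                 # nonnegative data, columns = samples
W, H = tpnmf(X)
print(H.argmax(0)[:10])                  # grid cell assigned to each column
```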
{"title":"A topographical nonnegative matrix factorization algorithm","authors":"Nicoleta Rogovschi, Lazhar Labiod, M. Nadif","doi":"10.1109/IJCNN.2013.6706849","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706849","url":null,"abstract":"We explore in this paper a novel topological organization algorithm for data clustering and visualization named TPNMF. It leads to a clustering of the data, as well as the projection of the clusters on a two-dimensional grid while preserving the topological order of the initial data. The proposed algorithm is based on a NMF (Nonnegative Matrix Factorization) formalism using a neighborhood function which take into account the topological order of the data. TPNMF was validated on variant real datasets and the experimental results show a good quality of the topological ordering and homogenous clustering.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131814417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-08-01 | DOI: 10.1109/IJCNN.2013.6706843
Self-organizing maps with a single neuron
George M. Georgiou, K. Voigt
Self-organization is explored with a single complex-valued quadratic neuron. The output space is the complex plane. A virtual grid is used to provide desired outputs for each input. Experiments have shown that training is fast. A quadratic neuron with the new training algorithm has been shown to have clustering properties: data that form a cluster in the input space tend to cluster on the complex plane. The speed of training and operation allows for efficient high-dimensional data exploration and for time-critical real-time applications.
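The abstract does not detail how the virtual grid assigns desired outputs; the sketch below makes the plausible assumption that each input is pulled toward the nearest grid point, with complex LMS-style updates of a quadratic neuron. All of this is a guess at the mechanism, not the authors' algorithm.

```python
# Assumed sketch: a single complex-valued quadratic neuron
# y = x^T A x + w^T x + b mapping inputs onto the complex plane, trained
# toward the nearest point of a "virtual grid" (an assumption).
import numpy as np

rng = np.random.default_rng(2)
d = 4
A = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) * 0.01
w = (rng.standard_normal(d) + 1j * rng.standard_normal(d)) * 0.01
b = 0.0 + 0.0j

def output(x):
    return x @ A @ x + w @ x + b            # scalar complex output

# Virtual grid of target points on the complex plane.
grid = np.array([gx + 1j * gy for gx in range(-2, 3) for gy in range(-2, 3)],
                dtype=complex)

X = rng.standard_normal((100, d))           # real-valued inputs
lr = 0.01
for _ in range(50):
    for x in X:
        y = output(x)
        t = grid[np.argmin(np.abs(grid - y))]   # nearest virtual-grid point
        e = t - y
        # Complex LMS updates (gradient of |e|^2 w.r.t. conjugate weights).
        A += lr * e * np.conj(np.outer(x, x))
        w += lr * e * np.conj(x)
        b += lr * e
```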
{"title":"Self-organizing maps with a single neuron","authors":"George M. Georgiou, K. Voigt","doi":"10.1109/IJCNN.2013.6706843","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706843","url":null,"abstract":"Self-organization is explored with a single complex-valued quadratic neuron. The output is the complex plane. A virtual grid is used to provide desired outputs for each input. Experiments have shown that training is fast. A quadratic neuron with the new training algorithm has been shown to have clustering properties. Data that are in a cluster in the input space tend to cluster on the complex plane. The speed of training and operation allows for efficient high-dimensional data exploration and for real-time critical applications.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131006860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-08-01 | DOI: 10.1109/IJCNN.2013.6706709
Behavioral economics and neuroeconomics: Cooperation, competition, preference, and decision making
S. Grossberg
Behavioral economics and neuroeconomics concern how humans process multiple alternatives to make their decisions, and propose how discoveries about how the brain works can inform models of economic behavior. This lecture surveys how results about cooperative-competitive and cognitive-emotional dynamics, discovered in order to better understand how brains control behavior, can shed light on issues of importance in economics, including the voting paradox, the design of stable economic markets, irrational decision making under risk (Prospect Theory), probabilistic decision making, preferences for previously unexperienced alternatives over rewarded experiences, and bounded rationality.
{"title":"Behavioral economics and neuroeconomics: Cooperation, competition, preference, and decision making","authors":"S. Grossberg","doi":"10.1109/IJCNN.2013.6706709","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706709","url":null,"abstract":"Behavioral economics and neuroeconomics concern how humans process multiple alternatives to make their decisions, and propose how discoveries about how the brain works can inform models of economic behavior. This lecture will survey how results about cooperative-competitive and cognitive-emotional dynamics that were discovered to better understand how brains control behavior can shed light on issues of importance in economics, including results about the voting paradox, how to design stable economic markets, irrational decision making under risk (Prospect Theory), probabilistic decision making, preferences for previously unexperienced alternatives over rewarded experiences, and bounded rationality.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131153568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-08-01 | DOI: 10.1109/IJCNN.2013.6706715
Toward a cooperative brain: Continuing the work with John Taylor
B. Apolloni
I propose a three-step discussion following a research path shared in part with John Taylor, whose leitmotif is understanding the cooperation between thinking agents: the pRAM architecture, the butler paradigm, and networked intelligence. All three steps were keystones of European projects that one of us coordinated. The guiding philosophy is to “start simple and insert progressive complexity”. The results I discuss go only as far as the “start simple” point. The final goal is to find a bias that underpins the entire research effort. In this paper I move within the connectionist paradigm at various scales, the largest encompassing an Internet of Things instantiation.
{"title":"Toward a cooperative brain: Continuing the work with John Taylor","authors":"B. Apolloni","doi":"10.1109/IJCNN.2013.6706715","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706715","url":null,"abstract":"I propose a three-step discussion following a research path shared in part with John Taylor where the leitmotif is to understand the cooperation between thinking agents: the pRAM architecture, the butler paradigm, and the networked intelligence. All three steps comprise keystones of European projects which one of us has coordinated. The principled philosophy is to “start simple and insert progressive complexity”. The results I discuss only go as far as the “start simple” point. The final goal is to find a bias that underpins the entire research effort. In this paper I will move within the connectionist paradigm at various scales, the largest being one that encompasses an Internet of Things instantiation.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133427764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-08-01 | DOI: 10.1109/IJCNN.2013.6707094
Face recognition using voting technique for the Gabor and LDP features
I. Dagher, Jamal Hassanieh, Ahmad Younes
Face recognition can be described in terms of sophisticated mathematical representations and matching procedures. In this paper, Local Derivative Pattern (LDP) descriptors along with the Gabor feature extraction technique were used to achieve the highest recognition rate possible. A robust comparison method, the chi-square distance, was used as the matching algorithm. Four databases involving different image-capture conditions (positioning, illumination, and expression) were used. The best results were obtained after applying a voting technique to the Gabor and LDP features.
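The chi-square distance between histograms is standard, so the matching side of the pipeline can be sketched as below; the voting rule between the Gabor-based and LDP-based matches is not specified in the abstract, so the agreement/tie-break logic here is an assumption.

```python
# Sketch of the matching stage: chi-square distance between histogram
# features, plus a simple vote between the Gabor and LDP matchers.
# The paper's exact voting rule is not given; this one is illustrative.
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    # Chi-square distance between two (normalized) histograms.
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def nearest(probe, gallery):
    # gallery: dict label -> histogram; returns the best-matching label.
    return min(gallery, key=lambda lbl: chi_square(probe, gallery[lbl]))

def vote(probe_gabor, probe_ldp, gallery_gabor, gallery_ldp):
    g = nearest(probe_gabor, gallery_gabor)   # Gabor-feature match
    l = nearest(probe_ldp, gallery_ldp)       # LDP-feature match
    if g == l:
        return g
    # Tie-break by the smaller distance (illustrative choice).
    dg = chi_square(probe_gabor, gallery_gabor[g])
    dl = chi_square(probe_ldp, gallery_ldp[l])
    return g if dg <= dl else l
```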
{"title":"Face recognition using voting technique for the Gabor and LDP features","authors":"I. Dagher, Jamal Hassanieh, Ahmad Younes","doi":"10.1109/IJCNN.2013.6707094","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6707094","url":null,"abstract":"Face recognition can be described by a sophisticated mathematical representation and matching procedures. In this paper, Local Derivative Pattern (LDP) descriptors along with the Gabor feature extraction technique were used to achieve highest percentage of recognition possible. A robust comparison method, the Chi Square Distance, was used as a matching algorithm. Four databases involving different image capturing conditions: positioning, illumination and expressions were used. The best results were obtained after applying a voting technique to the Gabor and the LDP features.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133107554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-08-01 | DOI: 10.1109/IJCNN.2013.6706863
Speaker recognition based on SOINN and incremental learning Gaussian mixture model
Zelin Tang, S. Furao, Jinxi Zhao
Gaussian Mixture Models (GMMs) have been widely used in speaker recognition over the last decades. To deal with dynamically growing datasets, the initial clustering problem, and effective clustering of incremental data, this paper proposes an incremental adaptation method called the incremental learning Gaussian mixture model (IGMM). It is applied to a speaker recognition system based on the Self-Organizing Incremental Neural Network (SOINN) and an improved EM algorithm. SOINN is a neural network that can determine a suitable number of mixture components and an appropriate initial cluster for each model. First, initial training is conducted with SOINN and the EM algorithm, requiring only a limited amount of data. The model then adapts to the data available in each session, enriching itself incrementally and recursively. Experiments were conducted on the 1st Speech Separation Challenge database. The results show that IGMM outperforms GMM and classical Bayesian adaptation in most cases.
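The session-by-session adaptation schedule is not detailed in the abstract, but its building block, an EM iteration for a diagonal-covariance GMM, can be sketched as follows. In the paper the number of components and initial means come from SOINN nodes; here they are simply passed in, and the incremental update rule is omitted.

```python
# Building-block sketch: one EM iteration for a diagonal-covariance GMM
# of the kind adapted per session in IGMM. Initialization (from SOINN in
# the paper) and the incremental schedule are outside this sketch.
import numpy as np

def em_step(X, weights, means, variances, eps=1e-8):
    """X: (n, d) feature frames; weights: (k,); means, variances: (k, d)."""
    n, d = X.shape
    # E-step: log-responsibilities under each diagonal Gaussian.
    log_p = (-0.5 * (((X[:, None, :] - means) ** 2 / (variances + eps)).sum(-1)
                     + np.log(variances + eps).sum(-1) + d * np.log(2 * np.pi))
             + np.log(weights + eps))
    log_p -= log_p.max(1, keepdims=True)      # stabilize before exponentiating
    resp = np.exp(log_p)
    resp /= resp.sum(1, keepdims=True)
    # M-step: re-estimate mixture weights, means, and variances.
    nk = resp.sum(0) + eps
    weights = nk / n
    means = (resp.T @ X) / nk[:, None]
    variances = (resp.T @ (X ** 2)) / nk[:, None] - means ** 2 + eps
    return weights, means, variances
```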
{"title":"Speaker recognition based on SOINN and incremental learning Gaussian mixture model","authors":"Zelin Tang, S. Furao, Jinxi Zhao","doi":"10.1109/IJCNN.2013.6706863","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6706863","url":null,"abstract":"Gaussian Mixture Models has been widely used in speaker recognition during the last decades. To deal with the dynamic growth of datasets, initial clustering problem and achieving the results of clustering effectively on incremental data, an incremental adaptation method called incremental learning Gaussian mixture model (IGMM) is proposed in this paper. It was applied to speaker recognition system based on Self Organization Incremental Learning Neural Network (SOINN) and improved EM algorithm. SOINN is a Neural Network which can reach a suitable mixture number and appropriate initial cluster for each model. First, the initial training is conducted by SOINN and EM algorithm only need a limited amount of data. Then, the model would adapt to the data available in each session to enrich itself incrementally and recursively. Experiments were taken on the 1st speech separation challenge database. The results show that IGMM outperforms GMM and classical Bayesian adaptation in most of the cases.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115470994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-08-01 | DOI: 10.1109/IJCNN.2013.6707063
Sparse similarity matrix learning for visual object retrieval
Zhicheng Yan, Yizhou Yu
The tf-idf weighting scheme is adopted by state-of-the-art object retrieval systems to reflect the difference in discriminability between visual words. However, we argue that it is suboptimal, since it neither takes quantization error into account nor exploits word correlation. We view the tf-idf weights as a diagonal Mahalanobis-type similarity matrix and generalize it to a sparse one by selectively activating off-diagonal elements. Our goal is to separate the similarity of relevant images from that of irrelevant ones by a safe margin. We satisfy such similarity constraints by learning an optimal similarity metric from labeled data. An effective scheme is developed to collect training data, with an emphasis on cases where the tf-idf weights violate the relative relevance constraints. Experimental results on benchmark datasets indicate that the learned similarity metric consistently and significantly outperforms the tf-idf weighting scheme.
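The core idea can be sketched as learning s(x, y) = x^T M y, starting from the diagonal tf-idf matrix and updating only a chosen off-diagonal support under a margin constraint on (query, relevant, irrelevant) triplets. The hinge loss, step size, and support selection below are illustrative assumptions; the paper's training scheme is more elaborate.

```python
# Assumed sketch of sparse similarity-matrix learning: M starts as the
# diagonal tf-idf matrix diag(idf^2) and selected off-diagonal entries
# are learned so relevant images outscore irrelevant ones by a margin.
import numpy as np

def train_sparse_similarity(triplets, idf, support, lr=0.01, margin=0.1,
                            epochs=10):
    """triplets: list of (q, pos, neg) BoW vectors; support: list of (i, j)
    off-diagonal index pairs allowed to become nonzero."""
    M = np.diag(idf ** 2)                    # tf-idf weighting as the start
    mask = np.zeros_like(M, dtype=bool)
    mask[np.diag_indices_from(M)] = True
    for i, j in support:
        mask[i, j] = mask[j, i] = True       # sparse, symmetric support
    for _ in range(epochs):
        for q, p, n in triplets:
            viol = margin - (q @ M @ p - q @ M @ n)
            if viol > 0:                     # relevance constraint violated
                grad = np.outer(q, p) - np.outer(q, n)
                M += lr * grad * mask        # update only on allowed support
    return M
```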
{"title":"Sparse similarity matrix learning for visual object retrieval","authors":"Zhicheng Yan, Yizhou Yu","doi":"10.1109/IJCNN.2013.6707063","DOIUrl":"https://doi.org/10.1109/IJCNN.2013.6707063","url":null,"abstract":"Tf-idf weighting scheme is adopted by state-of-the-art object retrieval systems to reflect the difference in discriminability between visual words. However, we argue it is only suboptimal by noting that tf-idf weighting scheme does not take quantization error into account and exploit word correlation. We view tf-idf weights as an example of diagonal Mahalanobis-type similarity matrix and generalize it into a sparse one by selectively activating off-diagonal elements. Our goal is to separate similarity of relevant images from that of irrelevant ones by a safe margin. We satisfy such similarity constraints by learning an optimal similarity metric from labeled data. An effective scheme is developed to collect training data with an emphasis on cases where the tf-idf weights violates the relative relevance constraints. Experimental results on benchmark datasets indicate the learnt similarity metric consistently and significantly outperforms the tf-idf weighting scheme.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115553290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}