Blind source separation with different sensor spacing and filter length for each frequency range
H. Sawada, S. Araki, R. Mukai, S. Makino
Pub Date: 2002-11-07  DOI: 10.1109/NNSP.2002.1030058
This paper presents a method for blind source separation using several separating subsystems whose sensor spacing and filter length can be configured individually. Each subsystem is responsible for source separation of an allocated frequency range. With this mechanism, we can use appropriate sensor spacing as well as filter length for each frequency range. We obtained better separation performance than with the conventional method by using a wide sensor spacing and a long filter for a low frequency range, and a narrow sensor spacing and a short filter for a high frequency range.
Neural network implementations of independent component analysis
R. Mutihac, M. Hulle
Pub Date: 2002-11-07  DOI: 10.1109/NNSP.2002.1030062
The performance of six structurally different neuromorphic adaptive algorithms was analyzed for the blind separation of independent, artificially generated signals using the stationary linear independent component analysis (ICA) model. The estimated independent components were assessed and compared with the aim of ranking the neural ICA implementations. All algorithms were run with different contrast functions, which were optimally selected on the basis of maximizing the sum of the individual negentropies of the network outputs. Both sub-Gaussian and super-Gaussian one-dimensional time series were employed throughout the numerical simulations.
An adaptive approach to wavelet filters design
N. Neretti, N. Intrator
Pub Date: 2002-11-07  DOI: 10.1109/NNSP.2002.1030043
We present a general framework for the design of a mother wavelet best adapted to a specific signal or to a class of signals. The filter coefficients are obtained via optimization of a smooth objective function. We develop an unconstrained gradient-based optimization algorithm for a discrete wavelet transform. The algorithm is extended to the joint optimization of the mother wavelet and the wavelet packet basis.
Fast edge-based stereo matching algorithm based on search space reduction
P. Moallem, K. Faez
Pub Date: 2002-11-07  DOI: 10.1109/NNSP.2002.1030070
Reducing the search region in stereo correspondence can improve the performance of the matching process in terms of both execution time and accuracy. For edge-based stereo matching, we establish the relationship between the search space and parameters such as the relative displacement of the edges, the disparity under consideration, the image resolution, the CCD dimensions, and the focal length of the stereo system. We then propose a novel matching strategy for edge-based stereo. Finally, we develop a fast algorithm for edge-based stereo by combining the proposed matching strategy with a multiresolution technique based on the Haar wavelet. Compared with conventional multiresolution techniques, the execution time of our algorithm is reduced by more than 36%, while the matching rate and accuracy are increased. Theoretical analysis and experimental results show that the algorithm performs very well, making it well suited to fast edge-based stereo applications such as stereo robot vision.
Speaker normalization using HMM2
S. Ikbal, K. Weber, H. Bourlard
Pub Date: 2002-11-07  DOI: 10.1109/NNSP.2002.1030076
We present an HMM2-based method for speaker normalization. Introduced as an extension of the hidden Markov model (HMM), HMM2 differs from the regular HMM in its emission density modeling, which is performed by a set of state-dependent HMMs operating in the feature vector space. The emission-modeling HMM maximizes the likelihood through an optimal alignment of its states across the feature components. This property makes it potentially useful for speaker normalization when applied to the spectrum. Using the resulting alignment information, speaker-related variations can be normalized through a piecewise linear warping of the frequency axis of the spectrum. In our case, the (emission-modeling) HMM-based spectral warping is employed in the feature extraction block of a regular HMM framework to normalize speaker-related variabilities. After a brief description of HMM2, we present the general approach to HMM2-based speaker normalization and show, through preliminary experiments, the pertinence of the approach.
A probabilistic approach for long read-length DNA sequence analysis
C. G. Molina, J. Mullikin
Pub Date: 2002-11-07  DOI: 10.1109/NNSP.2002.1030016
This paper introduces a new algorithm for DNA sequence analysis, based on the use of a reference DNA sequence for the estimation of base positions, and a probabilistic modelling of trace peaks. The new algorithm has been applied to long read-length DNA sequences and its performance has been compared to the base-calling program Phred. The results reported in this paper, after cross-matching with a finished consensus, show a significant improvement by the new algorithm in the final sequence read-length and in the number of correct bases extracted from DNA traces.
A comparative study of genetic sequence classification algorithms
S. Mukhopadhyay, Changhong Tang, Jeffrey R. Huang, Mulong Yu, M. Palakal
Pub Date: 2002-11-07  DOI: 10.1109/NNSP.2002.1030017
Classification of the genetic sequence data available in public and private databases is an important problem in using, understanding, retrieving, filtering, and correlating such large volumes of information. Although a significant amount of research effort is being spent internationally on this problem, very few studies compare different classification approaches in terms of an objective and quantitative classification performance criterion. In this paper, we present experimental studies on the classification of genetic sequences using both unsupervised and supervised approaches, focusing on computational effort as well as a suitably defined classification performance measure. The results indicate that both unsupervised classification, using the Maximin algorithm combined with the FASTA sequence alignment algorithm, and supervised classification, using an artificial neural network, achieve good classification performance, with the unsupervised approach classifying more accurately and the supervised approach running faster. A trade-off therefore exists between classification quality and computational effort. Using these classifiers for the retrieval, filtering, and correlation of genetic information, as well as for the prediction of functions and structures, is a logical direction for future research.
Modeling of growing networks with communities
M. Kimura, Kazumi Saito, N. Ueda
Pub Date: 2002-11-07  DOI: 10.1109/NNSP.2002.1030030
We propose a growing network model and its learning algorithm. Unlike the conventional scale-free models, we incorporate community structure, which is an important characteristic of many real-world networks including the Web. In our experiments, we confirmed that the proposed model exhibits a degree distribution with a power-law tail, and our method can precisely estimate the probability of a new link creation from data without community information. Moreover, by introducing a measure of dynamic hub-degrees, we could predict the change of hub-degrees between communities.
Improving neural classifiers for ATR using a kernel method for generating synthetic training sets
R. Gil-Pita, P. J. Amores, M. Rosa-Zurera, F. López-Ferreras
Pub Date: 2002-11-07  DOI: 10.1109/NNSP.2002.1030054
An important problem in the use of neural networks for HRR (high range resolution) radar target classification is the difficulty of obtaining training data; the resulting training sets are small, which makes generalization to new data difficult. To improve generalization, synthetic radar targets are generated using a novel kernel method for estimating the probability density function of each class of radar targets. Multivariate Gaussians whose parameters are a function of position and data distribution are used as kernels. To assess the accuracy of the estimate, the maximum a posteriori criterion is used for radar target classification and compared with the k-nearest-neighbour classifier. The proposed method outperforms the k-nearest-neighbour classifier, demonstrating the accuracy of the estimate. The estimated probability density functions are then used to classify the synthetic data so that a supervised training algorithm can be used for the neural networks. The results show that neural networks perform better when this strategy is used to increase the amount of training data. Furthermore, the computational complexity is dramatically reduced compared with that of the k-nearest-neighbour classifier.
Input-output mapping performance of linear and nonlinear models for estimating hand trajectories from cortical neuronal firing patterns
Justin C. Sanchez, Sung-Phil Kim, Deniz Erdoğmuş, Y. Rao, J. Príncipe, J. Wessberg, M. Nicolelis
Pub Date: 2002-11-07  DOI: 10.1109/NNSP.2002.1030025
Linear and nonlinear (TDNN) models have been shown to estimate hand position from populations of action potentials collected in the pre-motor and motor cortical areas of a primate's brain. One application of this discovery is to restore movement in patients suffering from paralysis. For real-time implementation of this technology, reliable and accurate signal processing models that produce small error variance in the estimated positions are required. In this paper, we compare the mapping performance of the FIR filter, the gamma filter, and the recurrent neural network (RNN) at the peaks of reaching movements. Each approach has strengths and weaknesses, which are compared experimentally. The RNN approach yields very accurate peak position estimates with small error variance.