Efficient total least squares method for system modeling using minor component analysis
Y. Rao, J. Príncipe
Pub Date: 2002-11-07 | DOI: 10.1109/NNSP.2002.1030037
Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing
We present two algorithms to solve the total least-squares (TLS) problem. The algorithms are on-line, with O(N^2) and O(N) complexity, and converge significantly faster than traditional methods. A mathematical analysis of convergence is provided, along with simulations to substantiate the claims. We also apply the TLS algorithms to FIR system identification with known model order in the presence of noise.
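The abstract does not detail the on-line minor-component algorithms themselves, but the classical batch TLS solution they accelerate can be sketched: the TLS weight vector lies in the direction of the minor (smallest) right singular vector of the augmented data matrix. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def tls_fir(X, d):
    """Classical batch total least squares for X w ~ d.

    Take the right singular vector of the augmented matrix [X | d]
    associated with its smallest singular value (the minor component)
    and rescale so its last entry is -1, giving w.
    """
    Z = np.column_stack([X, d])
    # Rows of Vt are sorted by decreasing singular value, so the last
    # row is the minor component of [X | d].
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    v = Vt[-1]
    return -v[:-1] / v[-1]
```

The paper's contribution is obtaining this minor component on-line at O(N^2) or O(N) cost per sample instead of recomputing an SVD; the batch form above is only the reference solution.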
Analog implementation for networks of integrate-and-fire neurons with adaptive local connectivity
J. Schreiter, U. Ramacher, A. Heittmann, D. Matolin, R. Schüffny
Pub Date: 2002-11-07 | DOI: 10.1109/NNSP.2002.1030077
An analog VLSI implementation of pulse-coupled neural networks of leakage-free integrate-and-fire neurons with adaptive connections is presented. Weight adaptation is based on existing adaptation rules for image segmentation. Although both the integrate-and-fire neurons and the adaptive weights can be implemented only approximately, simulations have shown that the synchronization properties of the original adaptation rules are preserved.
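The basic cell of such a network, a leakage-free integrate-and-fire neuron, is simple to state in discrete time: the membrane potential accumulates its input with no decay term, and a spike is emitted (with a reset) when the potential crosses threshold. A minimal software sketch, not a model of the paper's analog circuit:

```python
def integrate_and_fire(inputs, threshold=1.0):
    """Discrete-time, leakage-free integrate-and-fire neuron.

    The membrane potential v accumulates the input with no leak; when
    it reaches `threshold`, the neuron spikes and v is reset by
    subtracting the threshold (so excess charge is preserved).
    """
    v = 0.0
    spikes = []
    for x in inputs:
        v += x
        if v >= threshold:
            spikes.append(1)
            v -= threshold
        else:
            spikes.append(0)
    return spikes
```

With constant drive the spike rate is input/threshold, which is the property the analog implementation approximates.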
Functional connectivity modelling in fMRI based on causal networks
F. Deleus, P. D. Mazière, M. Hulle
Pub Date: 2002-11-07 | DOI: 10.1109/NNSP.2002.1030023
We apply the principle of causal networks to develop a new tool for connectivity analysis in functional magnetic resonance imaging (fMRI). The connections between active brain regions are modelled as causal relationships in a causal network. The causal networks are based on the notion of d-separation in a graph-theoretic context or, equivalently, on the notion of conditional independence in a statistical context. Since relationships between brain regions are believed to be nonlinear in nature, we express the conditional dependencies between the brain regions' activities in terms of conditional mutual information. The density estimates needed for computing the conditional mutual information are obtained with topographic maps, trained with the kernel-based maximum entropy rule (kMER).
Bayesian on-line learning: a sequential Monte Carlo with Rao-Blackwellization
K. Yosui, T. Kurihara, K. Wada, T. Souma, Takashi Matsumoto
Pub Date: 2002-11-07 | DOI: 10.1109/NNSP.2002.1030021
This paper proposes a Rao-Blackwellised sequential Monte Carlo (RBSMC) scheme for on-line learning with feedforward neural nets. The proposed algorithm is tested on an example, and its performance is compared with that of the conventional sequential Monte Carlo method and the extended Kalman filter (EKF). The proposed scheme outperforms both conventional algorithms.
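The baseline the paper improves on is the plain (bootstrap) sequential Monte Carlo filter; Rao-Blackwellisation then marginalizes part of the state analytically rather than sampling it. A sketch of only the generic bootstrap filter, on a toy scalar linear-Gaussian model rather than neural-net weights (all names and the model are illustrative):

```python
import numpy as np

def bootstrap_particle_filter(ys, n_particles=500, a=0.9,
                              q=0.3, r=0.1, seed=0):
    """Bootstrap SMC for the toy model
        x_t = a x_{t-1} + N(0, q^2),   y_t = x_t + N(0, r^2).
    Returns the posterior-mean state estimate at each step.
    """
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in ys:
        # Propagate every particle through the transition prior.
        particles = a * particles + rng.normal(0.0, q, n_particles)
        # Weight by the Gaussian observation likelihood.
        w = np.exp(-0.5 * ((y - particles) / r) ** 2)
        w /= w.sum()
        estimates.append(np.sum(w * particles))
        # Multinomial resampling to fight weight degeneracy.
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(estimates)
```

In the Rao-Blackwellised version, any conditionally linear-Gaussian substate would be handled by an exact Kalman update per particle, reducing Monte Carlo variance; that refinement is omitted here.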
Modified Kalman filter based method for training state-recurrent multilayer perceptrons
Deniz Erdoğmuş, Justin C. Sanchez, J. Príncipe
Pub Date: 2002-11-07 | DOI: 10.1109/NNSP.2002.1030033
Kalman-filter-based training algorithms for recurrent neural networks provide a clever alternative to standard backpropagation through time. However, these algorithms do not take into account the optimization of the hidden state variables of the recurrent network, and their formulation requires Jacobian evaluations over the entire network, adding to their computational complexity. We propose a spatial-temporal extended Kalman filter algorithm for training both the weights and the internal states of a recurrent neural network. The new formulation also reduces the cost of the Jacobian evaluations drastically by decoupling the gradients of each layer. Monte Carlo comparisons with backpropagation through time demonstrate the robust and fast convergence of the algorithm.
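The core idea of Kalman-filter training is to treat the network weights as the state of a dynamical system and each training pair as a noisy observation. For a linear single-layer model the extended Kalman filter reduces exactly to recursive least squares, which makes a compact, verifiable sketch (this is the generic idea, not the paper's spatial-temporal formulation):

```python
import numpy as np

def kalman_train(X, d, p0=100.0, r=1.0):
    """Kalman-filter-style sequential weight estimation for the linear
    model y = w @ x, where the EKF reduces to recursive least squares.
    Each sample updates the weight vector w and its error covariance P."""
    n_w = X.shape[1]
    w = np.zeros(n_w)
    P = p0 * np.eye(n_w)          # large prior covariance: weights unknown
    for x, y in zip(X, d):
        h = x                     # Jacobian of the model output w.r.t. w
        s = h @ P @ h + r         # innovation variance
        k = P @ h / s             # Kalman gain
        w = w + k * (y - w @ x)   # weight update from the prediction error
        P = P - np.outer(k, h) @ P
    return w
```

For a nonlinear multilayer network, h becomes the network's Jacobian with respect to the weights; the paper's contribution is decoupling that Jacobian per layer and also filtering the hidden states.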
Scaling of a length scale for regression and prediction
T. Aida
Pub Date: 2002-11-07 | DOI: 10.1109/NNSP.2002.1030029
We analyze prediction from noisy data, based on a regression formulation of the problem. For the regression, we construct a model with a length scale that smooths the data; the scale is determined by the variance of the noise and the speed of variation of the original signals. The model is also found to be effective for prediction, because it shrinks the uncertain region near a boundary as the speed of variation of the original signals increases, which is a crucial property for accurate prediction.
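The role of such a length scale is easiest to see in a kernel smoother: the bandwidth trades noise suppression (large scale) against the ability to follow fast-varying signals (small scale). A minimal Nadaraya-Watson sketch illustrating the idea, not the paper's model:

```python
import numpy as np

def kernel_smooth(x_train, y_train, x_query, length_scale):
    """Nadaraya-Watson regression with a Gaussian kernel whose width is
    an explicit length scale.  A larger scale averages over more noisy
    neighbours; a smaller scale tracks rapid signal variation."""
    d = x_query[:, None] - x_train[None, :]
    w = np.exp(-0.5 * (d / length_scale) ** 2)
    w /= w.sum(axis=1, keepdims=True)   # normalized kernel weights
    return w @ y_train
```

The paper's analysis concerns how this scale should shrink as the signal varies faster, so that the uncertain region near the data boundary also shrinks.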
Metric-based model selection for time-series forecasting
Yoshua Bengio, Nicolas Chapados
Pub Date: 2002-11-07 | DOI: 10.1109/NNSP.2002.1030013
Metric-based methods, which use unlabeled data to detect gross differences in behavior away from the training points, have recently been introduced for model selection, often yielding very significant improvements over alternatives (including cross-validation). We introduce extensions that take advantage of the particular case of time-series data in which the task involves prediction with a horizon h. The ideas are: (i) to use, at time t, the h unlabeled examples that precede t for model selection, and (ii) to take advantage of the different error distributions of cross-validation and the metric methods. Experimental results establish the effectiveness of these extensions in the context of feature subset selection.
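One plausible reading of idea (i) can be sketched as follows: score each candidate model by its training error, inflated when its disagreement with a reference predictor on the unlabeled window preceding t exceeds the same disagreement on the training set — i.e. when the model behaves erratically away from the data. This is a toy illustration of the metric-based principle, not the authors' exact criterion; all names are hypothetical:

```python
import numpy as np

def metric_select(models, X_train, y_train, X_unlabeled, ref):
    """Toy metric-based model selection.  `models` and `ref` are
    prediction functions; the penalty is the ratio of a model's RMS
    disagreement with `ref` on the unlabeled window to the same
    disagreement on the training inputs.  Returns the winning index."""
    def rms(a, b):
        return np.sqrt(np.mean((a - b) ** 2))
    scores = []
    for f in models:
        train_err = rms(f(X_train), y_train)
        d_train = rms(f(X_train), ref(X_train))
        d_unlab = rms(f(X_unlabeled), ref(X_unlabeled))
        penalty = d_unlab / max(d_train, 1e-12)
        scores.append(train_err * max(penalty, 1.0))
    return int(np.argmin(scores))
```

A model that fits the training data but oscillates wildly on the unlabeled window is penalized, which is the behavior-away-from-training-points test the abstract describes.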
A robust canonical correlation neural network
Zhenkun Gou, C. Fyfe
Pub Date: 2002-11-07 | DOI: 10.1109/NNSP.2002.1030035
We review a neural implementation of canonical correlation analysis and show, using ideas suggested by ridge regression, how to make the algorithm robust. The network is shown to operate on data sets which exhibit multicollinearity. We develop a second model which performs well not only on multicollinear data but also on general data sets. This model allows us to vary a single parameter so that the network can perform anything from partial least squares regression (at one extreme) to canonical correlation analysis (at the other), and every intermediate operation between the two. On multicollinear data the parameter setting is shown to be important, but on more general data no particular setting is required. Finally, the algorithm acts on such data as a smoother: the resulting weight vectors are much smoother and more interpretable than the weights obtained without the robustification term.
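The ridge idea can be seen in the closed-form (non-neural) version of CCA: adding a regularizer to the covariance matrices before whitening. With the regularizer at zero this is ordinary CCA; as it grows, the whitening tends toward the identity and the solution approaches a maximum-covariance (PLS-like) direction — the single-parameter interpolation the abstract describes. A batch sketch, not the paper's network:

```python
import numpy as np

def ridge_cca(X, Y, lam=0.0):
    """Ridge-regularized CCA in closed form.  lam = 0 gives ordinary
    CCA; large lam shrinks the whitening toward the identity, moving
    the solution toward a maximum-covariance (PLS-like) direction.
    Returns the leading weight pair and their canonical correlation."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = len(X)
    Cxx = X.T @ X / n + lam * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + lam * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    def inv_sqrt(C):
        e, U = np.linalg.eigh(C)
        return U @ np.diag(e ** -0.5) @ U.T
    Kx, Ky = inv_sqrt(Cxx), inv_sqrt(Cyy)
    # Leading singular pair of the whitened cross-covariance.
    U, _, Vt = np.linalg.svd(Kx @ Cxy @ Ky)
    wx, wy = Kx @ U[:, 0], Ky @ Vt[0]
    corr = np.corrcoef(X @ wx, Y @ wy)[0, 1]
    return wx, wy, abs(corr)
```

On multicollinear data, lam also conditions the (near-singular) covariance inverses, which is the robustness and smoothing effect the paper exploits.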
Clustering of Sun exposure measurements
Anna Szymkowiak-Have, J. Larsen, L. K. Hansen, P. Philipsen, E. Thieden, H. Wulf
Pub Date: 2002-11-07 | DOI: 10.1109/NNSP.2002.1030090
In a medically motivated Sun-exposure study, questionnaires concerning Sun habits were collected from a number of subjects together with UV radiation measurements. This paper focuses on identifying clusters in this heterogeneous set of data, with the aim of understanding possible relations between Sun habits and exposure, and eventually of assessing the risk of skin cancer. A general probabilistic framework originally developed for text and Web mining is demonstrated to be useful for clustering such behavioral data. The framework combines principal component subspace projection with probabilistic clustering based on the generalizable Gaussian mixture model.
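The pipeline in the last sentence — project onto a principal component subspace, then cluster probabilistically with a Gaussian mixture — can be sketched generically (a plain spherical-GMM EM, not the authors' generalizable variant; all names are illustrative):

```python
import numpy as np

def pca_gmm_cluster(X, n_components=2, n_clusters=2, n_iter=50):
    """Project X onto its leading principal components, then fit a
    spherical Gaussian mixture by EM and return hard assignments."""
    # PCA projection via the SVD of the centered data.
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_components].T
    # Farthest-point initialization of the mixture means.
    idx = [0]
    for _ in range(n_clusters - 1):
        d = ((Z[:, None] - Z[idx][None]) ** 2).sum(-1).min(1)
        idx.append(int(d.argmax()))
    mu = Z[idx].copy()
    var = np.full(n_clusters, Z.var())
    pi = np.full(n_clusters, 1.0 / n_clusters)
    for _ in range(n_iter):
        # E-step: responsibilities from spherical Gaussian likelihoods.
        d2 = ((Z[:, None] - mu[None]) ** 2).sum(-1)
        logp = np.log(pi) - 0.5 * d2 / var - 0.5 * n_components * np.log(var)
        r = np.exp(logp - logp.max(1, keepdims=True))
        r /= r.sum(1, keepdims=True)
        # M-step: update weights, means, and per-component variances.
        nk = r.sum(0) + 1e-12
        pi = nk / len(Z)
        mu = (r.T @ Z) / nk[:, None]
        d2 = ((Z[:, None] - mu[None]) ** 2).sum(-1)
        var = (r * d2).sum(0) / (nk * n_components) + 1e-9
    return r.argmax(1)
```

The subspace projection tames the heterogeneous questionnaire-plus-measurement dimensions before the mixture model looks for behavioral clusters.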
Efficient ECG multi-level wavelet classification through neural network dimensionality reduction
R. V. Andreão, B. Dorizzi, P. C. Cortez, J. Mota
Pub Date: 2002-11-07 | DOI: 10.1109/NNSP.2002.1030051
In this article, we explore the use of a single type of wavelet for ECG beat detection and classification. Once the beats are segmented, classification is performed by feeding several wavelet scales to the input of a neural network. This improves noise resistance and allows a better representation of the different beat morphologies. The results, evaluated on the MIT/BIH database, are excellent (97.69% on the normal and PVC classes) thanks to the use of a regularization technique.
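The multi-scale input the abstract describes amounts to stacking wavelet coefficients from several decomposition levels into one feature vector. A sketch using the Haar wavelet as the simplest stand-in (the paper uses a particular wavelet family, not necessarily Haar):

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform: pairwise
    sums give the approximation, pairwise differences the detail,
    both scaled by 1/sqrt(2)."""
    x = np.asarray(x, dtype=float)
    s = 1 / np.sqrt(2)
    approx = s * (x[0::2] + x[1::2])
    detail = s * (x[0::2] - x[1::2])
    return approx, detail

def multiscale_features(x, levels=3):
    """Concatenate the detail coefficients of several decomposition
    levels into one feature vector for a downstream classifier."""
    feats = []
    for _ in range(levels):
        x, d = haar_dwt(x)
        feats.append(d)
    return np.concatenate(feats)
```

A segmented beat of length 2^k yields a feature vector shorter than the raw signal, which is the dimensionality reduction the neural network then exploits.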