Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1202202
Title: Independent component analysis and beyond in brain imaging: EEG, MEG, fMRI, and PET
Authors: Jagath Rajapakse, A. Cichocki, V. Sanchez A.
Abstract: There is increasing interest in analyzing brain images from various imaging modalities, which record brain activity during functional tasks, both for understanding how the brain functions and for the diagnosis and treatment of brain diseases. Independent component analysis (ICA), an exploratory and unsupervised technique, separates the various signal sources mixed in brain imaging signals, such as brain activation and noise, under the assumption that the sources are mutually independent in the complete statistical sense. This paper summarizes various applications of ICA to the processing of brain imaging signals: EEG, MEG, fMRI, and PET. We highlight current issues and limitations in applying ICA to these modalities, as well as current and future directions of research.
Published in: Proceedings of the 9th International Conference on Neural Information Processing, 2002 (ICONIP '02).
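The abstract surveys ICA without fixing a particular algorithm. As a hedged illustration of the source-separation idea, the sketch below is a minimal FastICA-style routine (symmetric decorrelation, tanh nonlinearity) in numpy that unmixes two synthetic signals; the function name, demo signals, and mixing matrix are illustrative, not taken from the paper.

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA with a tanh nonlinearity.
    X: (n_sources, n_samples) array of mixed signals."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten: rotate to decorrelated axes and scale to unit variance.
    d, E = np.linalg.eigh(np.cov(X))
    Z = E @ np.diag(d ** -0.5) @ E.T @ X
    n, m = Z.shape
    W = np.random.default_rng(seed).standard_normal((n, n))
    for _ in range(n_iter):
        g = np.tanh(W @ Z)
        # Fixed-point update: E[z g(w.z)] - E[g'(w.z)] w
        W_new = (g @ Z.T) / m - np.diag((1 - g ** 2).mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W W^T)^(-1/2) W
        U, _, Vt = np.linalg.svd(W_new)
        W = U @ Vt
    return W @ Z  # estimated sources (up to sign and order)

# Demo: a sine and a square wave, linearly mixed.
t = np.linspace(0, 8, 2000)
S = np.vstack([np.sin(2 * np.pi * t), np.sign(np.sin(3 * np.pi * t + 1))])
A = np.array([[0.7, 0.3], [0.4, 0.6]])  # illustrative mixing matrix
S_hat = fastica(A @ S)
```

Each recovered component should correlate strongly, up to sign and permutation, with one of the true sources.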
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1202210
Title: Efficient subspace learning using a large scale neural network CombNet-II
Authors: A.A. Ghaibeh, S. Kuroyanagi, A. Iwata
Abstract: In the field of artificial neural networks, large-scale classification problems remain challenging due to obstacles such as local minima, long computation times, and large memory requirements. The large-scale network CombNET-II overcomes the local-minima problem and achieves good recognition rates in many applications. However, CombNET-II still requires a large amount of memory for the training database and feature space. We propose a revised version of CombNET-II with a considerably lower memory requirement, which makes large-scale classification problems more tractable. The memory reduction is achieved by adding a preprocessing stage at the input of each branch network. This stage selects, for each subspace generated by the stem network, the features with the most classification power. Tests of the proposed model on Japanese kanji characters show that the required memory can be reduced by almost 50% without a significant decrease in recognition rate.
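The abstract does not spell out the criterion used to rank features within each subspace. As one plausible stand-in, the sketch below scores features by a Fisher-score-like ratio (variance of class means over mean within-class variance) and keeps the top k; `select_features` and the toy data are hypothetical, not from the paper.

```python
import numpy as np

def select_features(X, y, k):
    """Rank features by a Fisher-like discriminative score and
    return the indices of the k highest-scoring features."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    variances = np.array([X[y == c].var(axis=0) for c in classes])
    # Between-class spread of the means over average within-class variance.
    score = means.var(axis=0) / (variances.mean(axis=0) + 1e-12)
    return np.argsort(score)[::-1][:k]

# Toy data: feature 0 separates the classes, feature 1 is pure noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = (rng.random(200) < 0.5).astype(int)
X[:, 0] += 3 * y  # shift class 1 along feature 0 only
idx = select_features(X, y, k=1)
```

Applied per subspace, such a ranking lets each branch network keep only its most discriminative features, which is the stated source of the memory savings.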
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1202194
Title: Grouping synchronization in a pulse-coupled network of chaotic spiking oscillators
Authors: H. Nakano, T. Saito
Abstract: This paper studies the basic dynamics of a pulse-coupled network (PCN) of chaotic spiking oscillators. The PCN exhibits grouping phenomena characterized by partial chaos synchronization. By calculating transient times to synchronization, we investigate the performance of the PCN.
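The paper's chaotic oscillator model is not reproduced here. To illustrate the notion of measuring a transient time to synchronization in a pulse-coupled network, the sketch below uses a classic Mirollo-Strogatz-style system of leaky integrate-and-fire units with all-to-all excitatory pulse coupling; all parameters and names are illustrative.

```python
import numpy as np

def transient_to_sync(n=5, I=1.3, eps=0.15, dt=1e-3, t_max=500.0, seed=1):
    """Simulate n pulse-coupled leaky integrate-and-fire oscillators.
    Each state charges as dx/dt = I - x; on reaching threshold 1 a unit
    fires, resets to 0, and kicks every other unit's state up by eps
    (capped at 1, so units near threshold are absorbed into the group).
    Returns the first time all units fire together, or None."""
    x = np.random.default_rng(seed).random(n) * 0.9
    t = 0.0
    while t < t_max:
        t += dt
        x += dt * (I - x)  # concave charging curve (drives absorption)
        fired = x >= 1.0
        if fired.any():
            x[~fired] = np.minimum(x[~fired] + eps, 1.0)  # pulse coupling
            fired = x >= 1.0
            if fired.all():
                return t  # full synchronization reached
            x[fired] = 0.0
    return None

t_sync = transient_to_sync()
```

For this concave, excitatory, all-to-all setup, full synchronization is expected for almost all initial conditions, so `t_sync` serves as the transient-time measurement the abstract describes.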
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1202872
Title: Discussions of neural network solvers for inverse optimization problems
Authors: T. Aoyama, U. Nagashima
Abstract: We discuss a neural network solver for the inverse optimization problem: given input/teaching data that contain defects (missing values), predict the defect values and estimate the functional relation between the input and output data. The solver is structured as series-connected three-layer neural networks. Information propagates among the networks alternately, and the defects are complemented by correlations among the data. On ideal structure-activity data, predictions were within 0.17-3.6% error.
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1198215
Title: A novel learning algorithm for data classification with radial basis function networks
Authors: Yen-Jen Oyang, Shien-Ching Hwang, Yu-Yen Ou, Chien-Yu Chen, Zhi-Wei Chen
Abstract: This paper proposes a novel learning algorithm for constructing data classifiers with radial basis function (RBF) networks. The RBF networks constructed with the proposed algorithm are generally able to deliver the same level of classification accuracy as support vector machines (SVMs). One important advantage of the proposed algorithm over SVMs is that it normally takes far less time to determine optimal parameter values through cross-validation. The comparison with SVMs is of interest because a number of recent studies have shown that SVMs generally deliver higher accuracy than other existing data classification algorithms. The proposed algorithm works by constructing one RBF network to approximate the probability density function of each class of objects in the training data set. Its main distinction is how it exploits local distributions of the training samples to determine the optimal parameter values of the basis functions. As the proposed algorithm is instance-based, the paper also addresses data reduction. One interesting observation is that, for all three data sets used in the data reduction experiments, the number of training samples remaining after a naive data reduction mechanism is applied is quite close to the number of support vectors identified by the SVM software.
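The core construction — one RBF network per class approximating that class's density, with classification by the largest density — can be sketched with a fixed-bandwidth Gaussian kernel density estimate. The paper instead derives basis-function parameters from local sample distributions, which is not reproduced here; `rbf_class_scores`, the bandwidth, and the toy data are all illustrative.

```python
import numpy as np

def rbf_class_scores(X_train, y_train, X_test, sigma=0.5):
    """For each class, average spherical Gaussian RBFs centred on that
    class's training samples - i.e. one RBF network per class acting as
    a density estimate. Returns (classes, scores) with one score column
    per class for each test point."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        # Squared distances from each test point to each centre of class c.
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1))
    return classes, np.array(scores).T

# Two well-separated Gaussian blobs as toy training data.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0.0, 0.5, size=(50, 2)),
                     rng.normal(3.0, 0.5, size=(50, 2))])
y_train = np.array([0] * 50 + [1] * 50)
classes, S = rbf_class_scores(X_train, y_train,
                              np.array([[0.0, 0.0], [3.0, 3.0]]))
pred = classes[S.argmax(axis=1)]
```

Classification picks the class whose density network responds most strongly; with equal class sizes this is equivalent to a maximum-likelihood decision.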
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1198166
Title: Economic states on neuronic maps
Authors: C. Liou, Yen-Ting Kuo
Abstract: We test the idea of visualizing economic statistics on self-organization-related maps: LLE, ISOMAP, and GTM. We report initial results of this work. Each of these three maps has a well-established theoretical foundation. The statistical data usually span a high-dimensional space, sometimes of more than 10 dimensions. To perceive these data as a whole and to foresee future trends, visualization assistance is an important issue. We use economic statistics for the United States over the past 25 years (1977 to 2001) and project them onto the maps. The results from these three maps display historic events along with their trends and significance.
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1202817
Title: Integration of space and time leading to the simultaneous perception of depth and motion - perception of objects moving behind a thin slit
Authors: M. Ogiya, K. Sakai, Y. Hirai
Abstract: We investigated how the visual system determines 3D depth from the integration of space and time, specifically from spatial and temporal binocular disparity. We carried out psychophysical experiments to investigate whether binocular disparity gives the correct 3D depth of objects moving behind a thin slit, which controls the type and amount of information available to the visual system. The results indicate that: (1) Wheatstone stereo in corresponding images gives correct depth; (2) Da Vinci stereo in stationary non-corresponding images does not give correct depth judgments; and (3) a time delay between the two images gives correct depth for a wide range of non-correspondence. The results suggest a cortical mechanism that processes spatial and temporal information simultaneously; presumably the two are inseparable in the neural system.
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1198221
Title: Support vector machines using multi objective programming and goal programming
Authors: Harotaka Nakayama, Takeshi Asada
Abstract: Support vector machines (SVMs) are now regarded as a powerful method for solving pattern recognition problems. SVMs are usually formulated as quadratic programs; using another distance function, they can be formulated as linear programs. SVMs generally tend to overlearn, and the notion of a soft margin is introduced to overcome this difficulty. In that case, however, it is difficult to decide the weight for the slack variables reflecting the soft margin. This paper extends the soft-margin method to multi-objective linear programming. It is shown through several examples that SVMs reformulated as multi-objective linear programs can give good performance in pattern classification.
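The linear-programming SVM mentioned in the abstract can be illustrated with the standard L1-norm soft-margin formulation: minimize ||w||_1 + C * sum(xi) subject to y_i(w.x_i + b) >= 1 - xi_i, xi >= 0. The sketch below solves it with scipy.optimize.linprog by splitting w and b into nonnegative parts; the multi-objective extension that is the paper's contribution is not reproduced, and `lp_svm`, C, and the toy data are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def lp_svm(X, y, C=1.0):
    """L1-norm soft-margin SVM as a single-objective linear program.
    Variable vector: [w+ (d), w- (d), b+, b-, xi (n)], all >= 0."""
    n, d = X.shape
    c = np.concatenate([np.ones(2 * d), [0.0, 0.0], C * np.ones(n)])
    Yx = y[:, None] * X
    # Margin constraints y_i(w.x_i + b) + xi_i >= 1, written as A_ub z <= b_ub.
    A_ub = -np.hstack([Yx, -Yx, y[:, None], -y[:, None], np.eye(n)])
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * d + 2 + n))
    z = res.x
    w = z[:d] - z[d:2 * d]
    b = z[2 * d] - z[2 * d + 1]
    return w, b

# Small linearly separable toy problem.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = lp_svm(X, y, C=10.0)
pred = np.sign(X @ w + b)
```

The weight C trades margin against slack; the paper's point is that fixing C is hard, which motivates treating margin and slack as separate objectives in a multi-objective program.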
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1202810
Title: Glenmore: an interactive activation model of eye movement control in reading
Authors: R. Reilly, R. Radach
Abstract: This paper describes Glenmore, an interactive activation model of eye movement control in reading that can account within one mechanism for preview and spillover effects, and for regressions, progressions, and refixations. The model decouples the decision about when to move the eyes from the word recognition process: the time course of activity in a "fixate centre" determines the triggering of a saccade. The other main feature of the model is a saliency map that acts as an arena for the interplay of bottom-up visual features of the text and top-down lexical features. These factors combine to create a pattern of activation that selects one word as the saccade target. Even within the relatively simple framework proposed here, a coherent account is provided for a range of eye movement control phenomena that have hitherto proved difficult to reconcile.
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1198985
Title: 3D simulation of a sensorimotor stealth strategy for camouflaging motion
Authors: A. J. Anderson, P. McOwan
Abstract: We simulate a sensorimotor control system for a biologically inspired stealth strategy (motion camouflage) intended to conceal the motion of a predator from its prey. The control system is formed from three multilayer perceptrons trained with backpropagation. In simulation, the control system, operating on realistic input information, is shown to track prey moving in 3D space. This extends previous work, which considered only two dimensions.