Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1198980
Venue: Proceedings of the 9th International Conference on Neural Information Processing, 2002 (ICONIP '02)
Title: Extensions of Lagrange programming neural network for satisfiability problem and its several variations
Authors: M. Nagamatu, T. Nakano, N. Hamada, T. Kido, T. Akahoshi
Abstract: The satisfiability problem (SAT) of the propositional calculus is a well-known NP-complete problem; it requires exponential computation time as the problem size increases. We previously proposed a neural network, called LPPH, for the SAT. The equilibrium points of the LPPH dynamics correspond exactly to the solutions of the SAT, and the dynamics does not stop at any point that is not a solution. Experimental results show the effectiveness of the LPPH for solving the SAT. In this paper we extend the dynamics of the LPPH to solve several variations of the SAT, such as the SAT with an objective function, the SAT with a preliminary solution, and the MAX-SAT. Experiments demonstrate the effectiveness of these extensions.
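The abstract's key property is that equilibria of the dynamics coincide with SAT solutions. As a rough illustration only (the paper's exact LPPH equations are not reproduced here), the sketch below runs Lagrangian-style gradient dynamics: relaxed truth values in [0,1] descend on a weighted sum of clause "dissatisfaction" terms, while multiplier-like clause weights grow on unsatisfied clauses, so the system cannot settle at a non-solution. All function names and step sizes are illustrative assumptions.

```python
import numpy as np

def dissatisfaction(x, clause):
    # clause = signed 1-based literals, e.g. [1, -2]; product of per-literal
    # "falseness", which is 0 iff some literal is fully satisfied
    prod = 1.0
    for lit in clause:
        v = x[abs(lit) - 1]
        prod *= (1.0 - v) if lit > 0 else v
    return prod

def partial(x, clause, lit):
    # derivative of the clause dissatisfaction w.r.t. x_{|lit|-1}
    prod = 1.0
    for other in clause:
        if other == lit:
            continue
        v = x[abs(other) - 1]
        prod *= (1.0 - v) if other > 0 else v
    return -prod if lit > 0 else prod

def satisfied(assign, clauses):
    return all(any(assign[abs(l) - 1] == (l > 0) for l in c) for c in clauses)

def lpph_like_solve(clauses, n, steps=20000, dt=0.05, seed=0):
    x = np.random.default_rng(seed).uniform(0.2, 0.8, n)  # relaxed truth values
    w = np.ones(len(clauses))                             # clause weights
    for _ in range(steps):
        assign = x > 0.5
        if satisfied(assign, clauses):
            return assign
        grad = np.zeros(n)
        for w_r, c in zip(w, clauses):
            for lit in c:
                grad[abs(lit) - 1] += w_r * partial(x, c, lit)
        x = np.clip(x - dt * grad, 0.0, 1.0)  # descend on weighted dissatisfaction
        w += dt * np.array([dissatisfaction(x, c) for c in clauses])  # multipliers rise
    return None
```

On a small instance such as (x1) AND (NOT x1 OR x2), the growing weights push both variables toward true; extending the energy with an objective term or clause weights gives the flavor of the SAT-with-objective and MAX-SAT variants the paper addresses.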
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1198197
Title: Increasing the topological quality of Kohonen's self organising map by using a hit term
Authors: E. Germen
Abstract: The quality of the topology obtained at the end of the training period of Kohonen's self-organizing map (SOM) depends strongly on the learning rate and the neighborhood function chosen at the beginning. Conventional approaches to determining these parameters do not account for the data statistics or the topological characterization of the neurons. This paper proposes a new parameter, which depends on the hit ratio between the updated neuron and the best-matching neuron. It is shown that using this parameter together with the conventional learning rate and neighborhood functions yields a considerably better solution, since the parameter carries information about the data statistics during the adaptation process.
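The abstract does not give the hit term's formula, so the following is a hypothetical reading only: each neuron counts its wins ("hits"), and a neuron's update toward the input is scaled by the ratio of its hit count to the winner's, layered on top of an otherwise standard decaying learning rate and Gaussian neighborhood. The modulation formula and all parameter values are assumptions for illustration.

```python
import numpy as np

def train_som(data, n_neurons=10, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    # minimal 1-D-lattice SOM with a hypothetical hit-ratio term
    rng = np.random.default_rng(seed)
    w = rng.uniform(data.min(), data.max(), (n_neurons, data.shape[1]))
    hits = np.ones(n_neurons)              # start at 1 to avoid division by 0
    t_max = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            c = int(np.argmin(np.linalg.norm(w - x, axis=1)))  # best-matching unit
            hits[c] += 1
            lr = lr0 * (1 - t / t_max)                  # decaying learning rate
            sigma = sigma0 * (1 - t / t_max) + 0.5      # shrinking neighborhood
            d = np.abs(np.arange(n_neurons) - c)
            h = np.exp(-d**2 / (2 * sigma**2))          # Gaussian neighborhood
            hit_term = np.minimum(1.0, hits / hits[c])  # hypothetical hit ratio
            w += (lr * h * hit_term)[:, None] * (x - w)
            t += 1
    return w
```

The intent of the scaling is that rarely-winning neurons are dragged less aggressively toward inputs that belong, statistically, to other regions of the map.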
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1198196
Title: K-Means Fast Learning Artificial Neural Network, an alternative network for classification
Authors: A. Phuan, S. Prakash
Abstract: The K-Means Fast Learning Artificial Neural Network (K-FLANN) is an improvement on the original FLANN II (Tay and Evans, 1994). Whereas FLANN II produces clusterings that are inconsistent under different data arrangements, K-FLANN addresses this issue by relocating the cluster centroids. Results of the investigation are presented along with a discussion of the fundamental behavior of K-FLANN. Comparisons are made with the k-means clustering algorithm and the Kohonen SOM. A further discussion considers how K-FLANN can qualify as an alternative method for fast classification.
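The centroid-relocation idea can be sketched generically (this is not the K-FLANN algorithm itself, whose vigilance-style matching is not detailed in the abstract): a one-pass, order-dependent clustering stands in for FLANN II, and a k-means-style relocation step then moves each centroid to the mean of its members and reassigns, reducing sensitivity to presentation order. The radius threshold and helper names are assumptions.

```python
import numpy as np

def one_pass_cluster(data, radius):
    # order-dependent single pass: spawn a centroid when no existing one is near
    centroids = []
    for x in data:
        if not centroids or min(np.linalg.norm(x - c) for c in centroids) > radius:
            centroids.append(x.astype(float))
    return np.array(centroids)

def relocate(data, centroids, iters=10):
    # k-means-style refinement: reassign points, move centroids to member means
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2),
            axis=1)
        for k in range(len(centroids)):
            members = data[labels == k]
            if len(members):
                centroids[k] = members.mean(axis=0)
    return centroids, labels
```

After relocation, the final centroids depend on cluster membership rather than on which point happened to arrive first, which is the consistency property the paper attributes to K-FLANN.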
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1202163
Title: MR brain image segmentation by adaptive mixture distribution
Authors: Juin-Der Lee, P. Cheng, M. Liou
Abstract: The Box-Cox transformation is applied to fit a Gaussian mixture distribution to brain image intensity data. The advantage of using such a data-adaptive mixture model is evidenced by better image segmentation results compared with existing EM procedures based on the standard Gaussian mixture distribution.
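The pipeline implied by the abstract can be sketched in two steps: Box-Cox-transform the (positive) intensity data, then fit a Gaussian mixture by EM. A minimal 1-D version under stated simplifications — the Box-Cox parameter lambda is fixed here rather than estimated jointly with the mixture, and the initialization is a simple quantile scheme:

```python
import numpy as np

def boxcox(y, lam):
    # Box-Cox power transform for positive data; lam = 0 gives the log
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

def em_gmm_1d(x, k=2, iters=200):
    # plain EM for a 1-D Gaussian mixture; deterministic quantile initialization
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        dens = (pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
                / np.sqrt(2 * np.pi * var))
        r = dens / dens.sum(axis=1, keepdims=True)   # E-step: responsibilities
        nk = r.sum(axis=0)                           # M-step: reestimate params
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    return pi, mu, var
```

Segmentation then amounts to assigning each voxel to the component with the largest responsibility; the paper's contribution is that transforming the intensities first makes the Gaussian components fit the skewed intensity histogram better.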
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1202819
Title: A dynamic neural network model on global-to-local interaction over time course
Authors: Kangwoo Lee, Jianfeng Feng, H. Buxton
Abstract: We propose a neural network model based on contextual learning and a non-leaky integrate-and-fire (IF) neuron model. The model exhibits dynamic properties, integrating over time the inputs from its own module as well as from the other module. Moreover, the integration of inputs from different modules is not a simple accumulation of activation over the time course: it depends on the interaction between the primary input, on which the behaviour of a modular network should be based, and the contextual input, which facilitates or interferes with the performance of the modular network. The learning rule is derived under the assumption that the time scale of the interval to the first spike can be adjusted during the learning process. The model is applied to explain global-to-local processing of Navon-type stimuli, in which a global letter is composed hierarchically of local letters. The model provides insights that may underlie the asymmetric responses of global and local interaction found in many psychophysical and neuropsychological studies.
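The non-leaky IF assumption makes the time-to-first-spike analysis simple: with constant drive I and no leak, the membrane potential grows linearly, so the first spike occurs at t = theta / I. The sketch below (function and parameter names are illustrative, not from the paper) shows how a contextual input from another module shifts the first-spike time by adding to (facilitating) or subtracting from (interfering with) the primary drive:

```python
def time_to_first_spike(primary, contextual, theta=1.0, dt=0.001, t_max=100.0):
    # non-leaky integrate-and-fire: dV/dt = I, spike when V reaches theta
    v, t = 0.0, 0.0
    drive = primary + contextual   # facilitation (>0) or interference (<0)
    while t < t_max:
        v += drive * dt            # pure integration, no leak term
        t += dt
        if v >= theta:
            return t               # approximately theta / drive
    return None                    # threshold never reached
```

With primary drive 1.0 and theta 1.0 the first spike lands near t = 1.0; adding contextual drive 0.5 advances it to near t = 2/3, while sufficient interference prevents a spike entirely — the facilitation/interference asymmetry the model trades on.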
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1198969
Title: Neural network methods for radar processing
Authors: A. L. Tatuzov
Abstract: Automatic radar data processing faces significant difficulties arising from the poor flexibility of known algorithms and the limited computational capacity of traditional computing devices. Neural networks can help the radar designer overcome these difficulties, thanks to the computational power of parallel neural hardware and the adaptive capabilities of neural algorithms. The application of neural networks to the most difficult radar problems is proposed and analyzed. Several neural methods for radar information processing are proposed and discussed: phased-array antenna weight adaptation, genetic algorithms for the optimization of multibased coded signals, data association in multitarget environments, and neural training for decision-making systems. Analysis of the proposed methods shows that a considerable increase in efficiency can be achieved when neural networks are used for radar information processing.
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1198150
Title: Low power design using architecture and circuit level approaches
Authors: Dong-Sun Kim, Jin-Tea Kim, Ki-Won Kwon, Duck-Jin Chung
Abstract: This paper proposes a methodology for low-power circuit design at the architecture and circuit levels. Fast computation has become increasingly important in DSP, image processing, and multi-purpose processors, so it is essential to reduce power consumption in digital circuits while maintaining computational throughput. Design experience and research since the early 1990s have demonstrated that doing so requires a "power conscious" design methodology that addresses dissipation at every level of the design hierarchy. Many pass-transistor logic families have been proposed to reduce power consumption and circuit size. In this paper, we introduce low-power design methodologies using pass-transistor logic and a signal dependency diagram (SDD) technique for parallel and pipelined architectures.
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1202189
Title: Time constrain optimal method to find the minimum architectures for feedforward neural networks
Authors: Teck-Sun Tan, G. Huang
Abstract: Huang et al. (1996, 2002) proposed an architecture selection algorithm called SEDNN that finds minimal architectures for feedforward neural networks, based on the golden-section search method and the upper bounds on the number of hidden neurons stated in Huang (2002) and Huang et al. (1998): 2√((m+2)N) for a two-layered feedforward network (TLFN) and N for a single-layer feedforward network (SLFN), where N is the number of training samples and m is the number of output neurons. SEDNN works well under the assumption that unlimited time is available for its execution. This paper proposes an algorithm similar to SEDNN but with an added time factor, to cater for applications that require results within a specified period of time.
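The search skeleton implied by the abstract — golden-section search over the number of hidden neurons, capped by the 2√((m+2)N) bound and cut short by a wall-clock budget — can be sketched as follows. This is not the authors' SEDNN; `evaluate` stands in for training a candidate network and returning its error, and the deadline handling is an illustrative reading of the "time factor".

```python
import math
import time

def tlfn_upper_bound(m, N):
    # upper bound on hidden neurons for a two-layered network, per the abstract
    return math.ceil(2 * math.sqrt((m + 2) * N))

def golden_section_min(evaluate, lo, hi, budget_s=1.0):
    # golden-section search over an integer range, assuming evaluate(n) is
    # unimodal in n; stops early when the wall-clock budget is exhausted
    deadline = time.monotonic() + budget_s
    invphi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c = round(b - invphi * (b - a))
    d = round(a + invphi * (b - a))
    fc, fd = evaluate(c), evaluate(d)
    while b - a > 1 and time.monotonic() < deadline:
        if fc < fd:                 # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = round(b - invphi * (b - a))
            fc = evaluate(c)
        else:                       # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = round(a + invphi * (b - a))
            fd = evaluate(d)
    return c if fc < fd else d
```

Each iteration reuses one of the two previous evaluations, so the bracket shrinks by the golden ratio per network trained — the property that makes the search cheap enough to run under a time budget.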
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1198119
Title: A quantized chaotic spiking neuron and CDMA coding
Authors: R. Furumachi, H. Torikai, T. Saito
Abstract: When a higher-frequency input is applied to a chaotic spiking neuron, the state is quantized and the chaotic pulse-train changes into various co-existing super-stable periodic pulse-trains (SSPTs). Using a quantized pulse-position map, the number of SSPTs and their periods are clarified theoretically. The multiplex correlation characteristics of certain sets of SSPTs are also clarified, with a view to application in CDMA communication systems.
Pub Date: 2002-11-18 | DOI: 10.1109/ICONIP.2002.1198144
Title: Focusing on soft-computing techniques to model the role of context in determining colours
Authors: E.R. Denby
Abstract: This paper describes an initial study investigating the role of context in determining colours from a machine learning perspective. A soft-computing technique, in the form of fuzzy neural networks, is used to perform the intelligent processing of categorising colours given some training. The main hypothesis is that the neural network will not perform as well as a human familiar with the NCS colour space, because humans possess the context knowledge needed to correctly classify any colour variety into the eleven groupings. This paper describes the process of creating a dataset suitable for the network and reports on the use of the FuzzyCOPE 3© software to investigate this hypothesis. Further, it raises such issues as: what is context knowledge, and can the network's learning be said to possess contextual knowledge of the colour space?