Dynamical threshold for a feature detector neural model
E. Chiarantoni, G. Fornarelli, F. Vacca, S. Vergura
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.938986
This paper presents a model of a neural unit that takes into account the mean-time output decay effect ("stress") observed in the Hodgkin-Huxley model. A simplified version of the stress effect is implemented in a static neuron element by means of a dynamical threshold. A rule for varying the threshold using local information is then presented, and the effects of this law on learning are examined within the class of standard competitive learning rules. The stability properties of the model are examined, and it is shown that, under appropriate hypotheses, the proposed unit is able to find autonomously (i.e., without requiring any interaction with other units) a local maximum of density in the input data space (a feature).
Prototype based rules - a new way to understand the data
Wlodzislaw Duch, K. Grudzinski
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.938446
Logical rules are not the only way to understand the structure of data. Prototype-based rules evaluate similarity to a small set of prototypes using optimized similarity measures. Such rules include crisp and fuzzy logic rules as special cases and are a natural way of categorization from a psychological point of view. An elimination procedure that selects good prototypes from a training set is described. Illustrative applications to several datasets show that a few prototypes may indeed explain the data structure.
Creating a Java design and code convention mentor using evolutionary computation
C. Depradine
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.939119
Code and design conventions can be considered rules of thumb or best practices that improve the maintainability of software applications. Generally, each programming language has its own conventions; the Java language, for example, has a set of code and design conventions documented by its creators. These conventions are usually maintained manually by the programmer, since automated support is typically restricted to the automatic generation of code. The Design and Code Convention Checker system, DChk (pronounced "D-Check"), supports the maintenance of various object-oriented design principles and code conventions during the development of Java programs. It scans Java code and lists any discovered violations of specific design and code conventions, together with recommended actions, the associated reasoning, and relevant reading material. The paper discusses this system, its associated tools, and the methodology involved in its use.
On complexity analysis of supervised MLP-learning for algorithmic comparisons
E. Mizutani, S. Dreyfus
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.939044
This paper presents a complexity analysis of a standard supervised MLP-learning algorithm in conjunction with the well-known backpropagation procedure, an efficient method for evaluating derivatives, in either batch or incremental learning mode. In particular, we detail the cost per epoch (i.e., the operations required to process one sweep of all the training data) using "approximate" FLOPs (floating-point operations) in a typical backpropagation for solving neural-network nonlinear least-squares problems. Furthermore, we identify erroneous complexity analyses found in the past NN literature. Our operation-count formula is useful for comparing learning algorithms on a given MLP architecture.
Solving linear simultaneous equations by constraining learning neural networks
De-shuang Huang, Z. Chi
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.1016719
This paper proposes using a constrained learning algorithm (CLA) to solve linear simultaneous equations, where the constraint relations for this problem are simply the linear equations themselves. As a result, the CLA can be applied effectively and appropriately. Experiments show that the convergence speed of this CLA is much faster than that of the recursive least-squares backpropagation (RLS-BP) algorithm. Finally, related experimental results are presented.
Intelligent control of nonlinear dynamical systems with a neuro-fuzzy-genetic approach
P. Melin, O. Castillo
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.939073
We describe different hybrid intelligent approaches for controlling nonlinear dynamical systems in manufacturing applications. The hybrid approaches combine soft computing techniques and mathematical models to achieve the goal of controlling the manufacturing process to follow a desired production plan. We develop several hybrid architectures that combine fuzzy logic, neural networks, and genetic algorithms, compare the performance of each combination, and decide on the best one for our purpose. We consider the case of controlling nonlinear electrochemical processes to test our hybrid approach. Electrochemical processes, like the ones used in battery formation, are very complex and for this reason very difficult to control. We have achieved very good results using fuzzy logic for control, neural networks for modelling the process, and genetic algorithms for tuning the hybrid intelligent system.
Improving Karhunen-Loeve based transform coding by using square isometries
M. Breazu, D. Volovici, I.Z. Mihu, R. Brad
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.938450
For an image compression system based on the Karhunen-Loeve transform implemented by neural networks, we propose taking into consideration the 8 square isometries of an image block. The appropriate isometry puts the 8×8 image block into a standard position before the block is applied as input to the neural network architecture. The standard position is defined by the variances of the block's four 4×4 sub-blocks (a quad partition): it brings the sub-block with the greatest variance into a specific corner and the sub-block with the second-greatest variance into a specific adjoining corner (if this is not possible, the third-greatest is considered). This "preprocessing" phase was expected to improve the learning and representation ability of the network and, therefore, the compression results. Experimental results confirm these expectations and show that the isometries are worth taking into consideration.
Independent component analysis by convex divergence minimization: applications to brain fMRI analysis
Y. Matsuyama, S. Imahara
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.939055
A class of independent component analysis (ICA) algorithms based on minimization of the convex divergence, called the f-ICA, is presented. This algorithm is a superclass of the minimum mutual information ICA and our own α-ICA. The following properties are obtained: 1) the f-ICA can be implemented by both momentum and turbo methods, and their combination is also possible; 2) the previously presented α-ICA is equivalent to the f-ICA if the design parameter α is chosen appropriately; 3) the f-ICA is much faster than the minimum mutual information ICA; and 4) the additional complexity required by the divergence ICA is light, so the algorithm can handle large amounts of data on conventional personal computers. Detection of human brain areas that respond strongly to moving objects is also reported.
Near optimal wireless data broadcasting based on an unsupervised neural network learning algorithm
N. Vlajic, Dimitrios Makrakis, Charalambos Charalambous
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.939112
Wireless data broadcasting (WDB) has proven to be an efficient information delivery mechanism of nearly unlimited scalability. However, successful performance of a WDB-based system is not always guaranteed; it strongly depends on the system's ability to identify the information (documents) most popular among users and to accurately estimate their actual request probabilities. In this paper, we argue that a recently proposed unsupervised neural network algorithm possesses the key properties of an ideal estimator of document request probabilities. Simulation results support the theoretical assumptions and suggest near-optimal performance of a WDB-based system employing the given algorithm.
Iterative heuristics for multiobjective VLSI standard cell placement
S. M. Sait, Habib Youssef, A. El-Maleh, M. Minhas, King Fahd
Pub Date: 2001-07-15 | DOI: 10.1109/IJCNN.2001.938512
We employ two iterative heuristics for the optimization of VLSI standard cell placement, based on genetic algorithms (GA) and tabu search (TS) respectively. We address a multiobjective version of the problem, in which power dissipation, timing performance, and interconnect wire length are optimized while layout width is treated as a constraint. Fuzzy rules are incorporated to design a multiobjective cost function that integrates the costs of the three objectives into a single overall cost value. A series of experiments studies the effect of important algorithmic parameters of GA and TS. Both techniques are applied to ISCAS-85/89 benchmark circuits, and experimental results are reported and compared.