Title: What can memorization learning do?
Authors: A. Hirabayashi, H. Ogawa
Published: 1999-07-10, IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339). DOI: 10.1109/IJCNN.1999.831578
Abstract: Memorization learning (ML) is a supervised learning method that reduces only the training errors; in principle, it does not guarantee good generalization capability. This observation raises two problems: 1) to clarify why good generalization capability is obtainable by ML; and 2) to clarify to what extent memorization learning can be used. Ogawa (1995) introduced the concept of 'admissibility' and provided a clear answer to the first problem. In this paper, we solve the second problem for the case of noiseless training examples. It is shown theoretically that ML can provide the same generalization capability as any learning method in 'the family of projection learning' when proper training examples are chosen.

Title: Using neural networks in agent teams to speedup solution discovery for hard multi-criteria problems
Authors: Shaun Gittens, R. Goodwin, J. Kalagnanam, S. Murthy
Published: 1999-07-10, IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339). DOI: 10.1109/IJCNN.1999.836211
Abstract: Evolutionary population-based search methods are often used to find a Pareto-optimal set of solutions for hard multicriteria optimization problems. We utilize one such agent architecture to evolve good solution sets for these problems, deploying agents to progressively add, modify, and delete candidate solutions in one or more populations over time. Here we describe how we assign neural nets to aid agent decision-making and encourage cooperation to improve convergence to good Pareto-optimal solution sets. This paper describes the design and results of this approach and suggests paths for further study.

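The abstract does not specify the agent architecture itself, but the core operation any such method relies on, extracting the non-dominated (Pareto-optimal) subset of a candidate population, can be sketched as follows. This is a minimal illustration assuming all objectives are minimized; the function names are my own, not the paper's:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

In a population-based search, a filter like this would be applied after each round of agent add/modify/delete operations to keep the archive of candidate solutions non-dominated.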
Title: 'Mechanical' neural learning and InfoMax orthonormal independent component analysis
Authors: S. Fiori, P. Burrascano
Published: 1999-07-10, IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339). DOI: 10.1109/IJCNN.1999.831088
Abstract: We present a new class of learning models for linear as well as nonlinear neural learners, deriving from the study of the dynamics of an abstract rigid mechanical system. The set of equations describing the motion of this system may be readily interpreted as a learning rule for orthogonal networks. As a simple example of how to use the learning theory, a case of orthonormal independent component analysis based on the Bell-Sejnowski InfoMax principle is discussed through simulations.

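The rigid-body learning equations are not given in the abstract; what the abstract does state is that the rule keeps the network's weights orthogonal. A standard way to restore a weight matrix to the orthogonal group after an unconstrained update is the polar/SVD retraction, sketched below. This is a generic orthonormality-preserving step under that assumption, not the authors' mechanical learning rule:

```python
import numpy as np

def retract_orthogonal(W):
    """Map W to the nearest orthogonal matrix (polar decomposition via SVD):
    if W = U S V^T, the closest orthogonal matrix in Frobenius norm is U V^T."""
    U, _, Vt = np.linalg.svd(W)
    return U @ Vt
```

After each gradient step on an orthogonal network, applying this retraction keeps the weights on the orthogonal group, which is the constraint an orthonormal ICA network must maintain.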
Title: EMG signal classification using conic section function neural networks
Authors: Lale Özyilmaz, T. Yıldırım, H. Seker
Published: 1999-07-10, IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339). DOI: 10.1109/IJCNN.1999.836251
Abstract: The aim of this work is to classify EMG signals using a new neural network architecture to control multifunction prostheses. These prostheses can be controlled using myoelectric signals taken from a single pair of surface electrodes. This case has been demonstrated specifically for use by above-elbow amputees. The ability to separate different muscle contraction characteristics depends on myoelectric signal information; therefore, the classification of these signals is investigated. The neural network algorithm proposed here provides faster and better learning.

Title: Improved mutual information feature selector for neural networks in supervised learning
Authors: Nojun Kwak, Chong-Ho Choi
Published: 1999-07-10, IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339). DOI: 10.1109/IJCNN.1999.831152
Abstract: In classification problems, the available attributes may be relevant, irrelevant, or redundant. By selecting only the relevant attributes of the data as input features of a classifying system and excluding redundant ones, higher performance is expected with smaller computational effort. We propose a feature selection algorithm that makes more careful use of the mutual information between input attributes than the mutual information feature selector (MIFS) does. The proposed algorithm is applied to several feature selection problems and compared with MIFS. Experimental results show that the proposed algorithm performs well on feature selection problems.

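For context, the baseline MIFS scheme that this paper improves on (Battiti's greedy selector) scores each candidate feature by its mutual information with the class, minus a beta-weighted penalty for mutual information with the already-selected features. A minimal sketch for discrete-valued features follows; the exact improvement proposed in this paper is not reproduced here, and the function names are my own:

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Empirical mutual information (in bits) between two discrete sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * np.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def mifs(features, labels, k, beta=0.5):
    """Greedy MIFS-style selection: repeatedly pick the feature maximizing
    I(f; C) - beta * sum over selected s of I(f; s)."""
    remaining = list(range(len(features)))
    selected = []
    while len(selected) < k and remaining:
        def score(i):
            return (mutual_information(features[i], labels)
                    - beta * sum(mutual_information(features[i], features[s])
                                 for s in selected))
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

With a sufficiently large beta, a redundant near-copy of an already-selected feature is penalized out, which is exactly the behavior the redundancy term exists to produce.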
Title: Non-normalised compensatory hybrid fuzzy neural networks
Authors: H. Seker, D. H. Evans
Published: 1999-07-10, IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339). DOI: 10.1109/IJCNN.1999.830858
Abstract: Fuzzy neural networks have been shown to be superior to conventional multilayered backpropagation neural networks (BPNN). However, it remains an important problem to make fuzzy neural networks learn faster and to optimise the membership functions of fuzzy rule-based models so that they converge to a local minimum. Moreover, while learning faster and optimising, it is important to use less memory and CPU time. In this paper, to overcome these problems, we propose non-normalised compensatory hybrid fuzzy neural networks (non-normalised CFBPNN) incorporating fuzzy c-means clustering as a fuzzy inference engine, fuzzy logic, and backpropagation learning algorithms. The results show that the proposed algorithm overcomes these problems and yields very high performance. The algorithm was tested on the XOR problem, nonlinear function learning, and pattern classification, and compared with normalised CFBPNN and BPNN for verification.

Title: Estimate traffic control patterns using a hybrid neural network
Authors: E. Chang
Published: 1999-07-10, IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339). DOI: 10.1109/IJCNN.1999.833517
Abstract: Many operating agencies are currently developing computerized freeway traffic management systems to support traffic operations as part of the intelligent transportation system (ITS) user service improvements. This study illustrates the importance of using simplified data analysis and presents a promising approach for improving demand prediction and traffic data modeling to support proactive control. The study found that combining advanced neural networks with conventional error correction is promising for improved ITS applications.

Title: Partitioned architectures for large scale data recovery
Authors: R. Sunderam
Published: 1999-07-10, IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339). DOI: 10.1109/IJCNN.1999.831053
Abstract: Thresholded binary networks of the Hopfield type offer feasible configurations capable of recovering the regularized least-squares solution in certain inverse problem formulations. The proposed architectures and algorithms also permit hybrid electro-optical implementations. These architectures are determined from partitions of the original network and are based on forms of data representation. Sequential and parallel updates on these partitions are adopted to optimize the objective criterion. The algorithms minimize a suboptimal objective criterion in the currently active partition; once a local minimum is attained, an inactive partition is chosen to continue the minimization. An application to digital image restoration is considered.

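The partitioning scheme itself is not detailed in the abstract, but the sequential-update primitive such architectures build on, asynchronous threshold updates that never increase a Hopfield energy, can be sketched as follows (a minimal illustration for binary {0,1} units with symmetric, zero-diagonal W; not the paper's partitioned algorithm):

```python
import numpy as np

def energy(W, b, x):
    """Hopfield energy E(x) = -0.5 x^T W x - b^T x."""
    return -0.5 * x @ W @ x - b @ x

def hopfield_minimize(W, b, x, sweeps=50):
    """Sequential (asynchronous) threshold updates on binary {0,1} units.
    With W symmetric and zero on the diagonal, each accepted flip moves a unit
    to agree with its local field, so the energy never increases; a fixed
    point is a local energy minimum."""
    x = x.copy()
    for _ in range(sweeps):
        changed = False
        for i in range(len(x)):
            h = W[i] @ x + b[i]        # local field of unit i
            new = 1.0 if h > 0 else 0.0
            if new != x[i]:
                x[i] = new
                changed = True
        if not changed:                # no unit wants to flip: fixed point
            break
    return x
```

In a partitioned variant, the inner loop would run only over units of the currently active partition, switching to an inactive partition once a local minimum is reached, as the abstract describes.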
Title: Knowledge extraction from reinforcement learning
Authors: R. Sun
Published: 1999-07-10, IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339). DOI: 10.1109/IJCNN.1999.833476
Abstract: This paper deals with knowledge extraction from reinforcement learners. It addresses two approaches to knowledge extraction: the extraction of explicit, symbolic rules from neural reinforcement learners, and the extraction of complete plans from such learners. The advantages of such knowledge extraction include the improvement of learning (especially with the rule extraction approach) and the improvement of the usability of the results of learning.

Title: A novel neural network for four-term analogy based on area representation
Authors: Kenji Mizoguchi, M. Hagiwara
Published: 1999-07-10, IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339). DOI: 10.1109/IJCNN.1999.831119
Abstract: We propose a novel neural network for four-term analogy based on area representation. It can deal with four-term analogies such as "teacher : student = doctor : ?". The proposed network is composed of three map layers and an input layer. The area representation method based on the Kohonen feature map (KFM) is employed to represent knowledge, so that similar concepts are mapped to nearby areas in the map layer. The proposed mechanism in the map layer allows the excited area to move to nearby areas. We carried out computer simulations and confirmed the following: 1) similar concepts are mapped to nearby areas in the map layer; 2) the excited area moves among similar concepts; 3) the proposed network realizes four-term analogy; and 4) the network is robust to missing connections.