Optimizing neural networks for playing tic-tac-toe
M. Sungur, U. Halici
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992-06-07. DOI: 10.1109/IJCNN.1992.227268
A neural network approach for playing the game of tic-tac-toe is introduced. The problem is treated as a combinatorial optimization problem whose objective is to maximize the value of a heuristic evaluation function. The proposed design guarantees a feasible solution; in particular, a winning move is never missed and a losing position is blocked whenever possible. The design has been implemented on a Hopfield network, a Boltzmann machine, and a Gaussian machine, and the performance of the three models was compared through simulation.
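The abstract frames move selection as maximizing a heuristic evaluation over the feasible (empty-square) moves. A minimal non-neural sketch of that objective follows; the scoring function is invented for illustration, and the paper itself maps the optimization onto Hopfield/Boltzmann dynamics rather than enumerating moves:

```python
# Illustrative sketch only: the paper optimizes a heuristic with network
# dynamics; here the same kind of objective is maximized by plain enumeration.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def heuristic(board, me, opp):
    # Hypothetical scoring: a completed win dominates, and an open losing
    # line is penalized so that blocking is preferred over other moves.
    if winner(board) == me:
        return 100
    score = 0
    for a, b, c in LINES:
        line = [board[i] for i in (a, b, c)]
        if line.count(opp) == 2 and line.count(' ') == 1:
            score -= 50   # opponent can win on this line next turn
        if line.count(me) == 2 and line.count(' ') == 1:
            score += 10   # we threaten a win on this line
    return score

def best_move(board, me='X', opp='O'):
    # Feasibility: only empty squares are candidate moves.
    moves = [i for i, s in enumerate(board) if s == ' ']
    def value(i):
        return heuristic(board[:i] + me + board[i + 1:], me, opp)
    return max(moves, key=value)
```

With this scoring, `best_move` takes an immediate win when one exists and otherwise blocks an opponent's open line, matching the feasibility guarantees the abstract claims for the network design.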
A one neuron truck backer-upper
S. Geva, J. Sitte, G. Willshire
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992-06-07. DOI: 10.1109/IJCNN.1992.226881
The truck backer-upper has been used to demonstrate the ability of neural networks to solve highly nonlinear control problems whose solutions are not easily obtained by analytical techniques. The authors demonstrate that good linear solutions to this problem exist and that such solutions are very easy to find. They show how to design a controller for this task and how to implement it with a single control neuron. The control neuron requires only two input variables and two weights to produce correct steering signals. The probability that random weights are adequate to solve the problem is so high that a random search is highly successful. A single neuron is also shown to be sufficient for the seemingly more difficult task of backing up a truck with two trailers, and with a small addition in network complexity the problem of providing minimum-length backup trajectories can be solved as well.
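A two-input, two-weight control neuron of the kind described is small enough to sketch in full. The choice of inputs (trailer angle relative to the dock normal and cab-trailer hitch angle) and the weight values below are assumptions for illustration, not the paper's:

```python
import math

# Hypothetical single control neuron. The two inputs are assumed to be the
# trailer angle to the dock normal and the hitch angle; the weights are
# illustrative, not the paper's values.
W_TRAILER, W_HITCH = 1.5, -0.8

def steer(trailer_angle, hitch_angle):
    # One neuron: a weighted sum squashed into the steering range [-1, 1].
    return math.tanh(W_TRAILER * trailer_angle + W_HITCH * hitch_angle)
```

Because the controller has only two free weights, sampling `(W_TRAILER, W_HITCH)` at random has a reasonable chance of landing in the large region of adequate controllers, which is why the abstract reports random search as highly successful.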
An algebraic approach to learning in syntactic neural networks
S. Lucas
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992-06-07. DOI: 10.1109/IJCNN.1992.287076
The algebraic learning paradigm is described in relation to syntactic neural networks. In algebraic learning, each free parameter of the net is given a unique variable name, and for each training sentence the net output is expressed as a sum of products of these variables. Each expression is equated to true if the sentence is a positive sample and to false if it is a negative sample. A constraint satisfaction procedure is then used to find an assignment to the variables such that all the equations are satisfied. Such an assignment must yield a network that parses all the positive samples and none of the negative samples, and hence a correct grammar. Unfortunately, the algorithm grows exponentially in time and space with respect to string length. A number of ways of countering this growth are explored, using the inference of a tiny subset of context-free English as an example.
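The sum-of-products constraint system can be made concrete with boolean weights. Each training sentence contributes an expression (a list of product terms, ANDed variable names), equated to true for positives and false for negatives; the exhaustive search below is exactly the exponential step the paper tries to tame, and the variable names are invented:

```python
from itertools import product

# Toy sketch of algebraic learning over boolean weight variables. Each
# constraint is (terms, required_truth): terms is a list of product terms,
# each term a tuple of variable names ANDed together; the sum (OR) of the
# terms must evaluate to required_truth.
def satisfy(variables, constraints):
    for bits in product([False, True], repeat=len(variables)):
        assign = dict(zip(variables, bits))
        ok = all(
            any(all(assign[v] for v in term) for term in terms) == want
            for terms, want in constraints
        )
        if ok:
            return assign  # a weight assignment consistent with all samples
    return None
```

For instance, a positive sentence contributing the term w1·w2 and a negative one contributing w2·w3 force w1 and w2 true and w3 false; the search cost doubles with every additional variable, which is the growth the paper then attacks.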
Recognition of Japanese words by neural networks using vocal tract area
H. Kinugasa, H. Kamata, Y. Ishida
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992-06-07. DOI: 10.1109/IJCNN.1992.227247
The authors present a new system for Japanese word recognition by neural networks using the vocal tract area. They present a method by which the vocal tract area is estimated directly from speech waves. The estimation method applies an adaptive inverse filter to the autocorrelation coefficients. A neural network learning algorithm developed by Y. Ishida et al. (1991), based on the conjugate gradient method, is used. Speaker-independent word recognition results for a vocabulary of 10 Japanese words demonstrated the effectiveness of the method.
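The front end begins with short-time autocorrelation coefficients of the speech wave; that standard computation is sketched below. The adaptive inverse filter applied on top of it, and the conjugate-gradient learning rule, are not reproduced here:

```python
# Standard short-time autocorrelation r(k) = sum_t s(t) * s(t+k) for one
# frame, k = 0..order. Only this front-end step is shown; the paper's
# adaptive inverse filter for vocal tract area estimation builds on it.
def autocorrelation(frame, order):
    n = len(frame)
    return [sum(frame[t] * frame[t + k] for t in range(n - k))
            for k in range(order + 1)]
```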
Construction of neural network classification expert systems using switching theory algorithms
J. Jaskolski
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992-06-07. DOI: 10.1109/IJCNN.1992.287195
A new family of neural network (NN) architectures is presented. This family of architectures solves the problem of constructing and training minimal NN classification expert systems by using switching theory. The primary insight that leads to the use of switching theory is that the problem of minimizing the number of rules and the number of IF statements (antecedents) per rule in an NN expert system can be recast as the problem of minimizing the number of digital gates and the number of connections between digital gates in a VLSI circuit. The rules that the NN generates to perform a task are readily extractable from the network's weights and topology. Analysis and simulations on the Mushroom database illustrate the system's performance.
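The recast can be illustrated with one combining pass of classical two-level logic minimization: two rules (product terms over binary features) that differ in exactly one antecedent merge into one shorter rule, just as a·b + a·b′ = a merges two gates. This is a fragment of Quine-McCluskey shown for illustration, not the paper's full algorithm:

```python
# Rules as product terms over binary features: '1' = feature required present,
# '0' = required absent, '-' = don't care. Merging two terms that differ in
# exactly one specified bit drops that antecedent (a*b + a*b' = a).
def merge_once(terms):
    merged, used = set(), set()
    for i, t in enumerate(terms):
        for u in terms[i + 1:]:
            diff = [k for k in range(len(t)) if t[k] != u[k]]
            if len(diff) == 1 and '-' not in (t[diff[0]], u[diff[0]]):
                k = diff[0]
                merged.add(t[:k] + '-' + t[k + 1:])  # shorter combined rule
                used.update((t, u))
    # Keep combined rules plus any rule that could not be merged.
    return sorted(merged | (set(terms) - used))
```

Fewer and shorter terms correspond directly to fewer gates and fewer inter-gate connections, which is the minimization objective the abstract describes.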
Artificial neural networks for 3D nonrigid motion analysis
T. Chen, W. Lin, C.-T. Chen
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992-06-07. DOI: 10.1109/IJCNN.1992.227308
A novel approach to 3D nonrigid motion analysis using artificial neural networks is presented. A set of neural networks is proposed to tackle the problem of nonrigidity in 3D motion estimation. Constraints are specified to ensure a stable and globally consistent estimation of local deformations. The assignments of weights between two layers, the initial values of the outputs, and the connections between the networks reflect the constraints defined. The objective of the proposed neural networks is to find the optimal deformation matrices that satisfy the constraints for all points on the surface of the nonrigid object. Experimental results on synthetic and real data are provided.
Symmetric neural networks and its examples
Hee-Seung Na, Youngjin Park
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992-06-07. DOI: 10.1109/IJCNN.1992.287176
The concept of a symmetric neural network, which is not only structurally symmetric but also has a symmetric weight distribution, is presented. The concept is further expanded to constrained networks, which may also be applied to some nonsymmetric problems in which there is prior knowledge of the weight distribution pattern. Because these neural networks cannot be trained by the conventional training algorithm, which would destroy their weight structure, a suitable training algorithm is suggested. Three examples demonstrate the applicability of the proposed ideas. For the examples considered, use of the proposed concepts results in improved system performance, reduced network dimension, lower computational load, and improved learning.
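One plausible reading of such a structure-preserving training rule (a hedged guess, not necessarily the authors' algorithm) is to symmetrize the gradient before each update, so that a weight matrix satisfying W = Wᵀ keeps that property exactly:

```python
# Hedged sketch: preserve weight symmetry during training by averaging the
# gradient with its transpose, so a gradient step never breaks W = W^T.
# This only illustrates the constraint; the paper's rule may differ.
def symmetric_update(W, grad, lr=0.1):
    n = len(W)
    sym = [[0.5 * (grad[i][j] + grad[j][i]) for j in range(n)]
           for i in range(n)]
    return [[W[i][j] - lr * sym[i][j] for j in range(n)] for i in range(n)]
```

Tying the (i, j) and (j, i) entries together also halves the number of free parameters, consistent with the reduced network dimension the abstract reports.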
A subband coding scheme and the Bayesian neural network for EMG function analysis
K. Cheng, Din-Yuen Chan, Sheeng-Horng Liou
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992-06-07. DOI: 10.1109/IJCNN.1992.226868
A subband coding scheme and a Bayesian neural network (BNN) approach to the analysis of electromyographic (EMG) signals of upper extremity limb functions are presented. Three channels of EMG signals, recorded from the biceps, the triceps, and one muscle of the forearm, are used to discriminate six primitive motions of the limb. A set of parameters is extracted from the spectrum of the EMG signals, combined with the subband coding technique for data compression. Each sequence of EMG signals is cut into five frames from a starting point located by an energy threshold method, and from each frame the parameters are obtained by integrating the subbands. Both temporal and spectral characteristics can thus be implicitly or directly captured in the parameters. One BNN is used as a subnet for discriminating each motion. The results show that an average recognition rate of 85% can be achieved.
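The framing-and-integration step described above can be sketched as follows. The threshold value, the number of bands, and the equal-width band split are invented for illustration; the paper's actual subband coder is not reproduced:

```python
import numpy as np

# Illustrative EMG feature extraction: locate a starting point by an energy
# threshold, cut the active part into five frames, and integrate spectral
# energy over a few subbands per frame.
def emg_features(signal, n_frames=5, n_bands=4, threshold=0.1):
    energy = signal ** 2
    above = np.nonzero(energy > threshold * energy.max())[0]
    onset = int(above[0]) if above.size else 0
    onset = min(onset, len(signal) - n_frames)  # keep enough samples to frame
    active = signal[onset:]
    frame_len = len(active) // n_frames
    feats = []
    for f in range(n_frames):
        frame = active[f * frame_len:(f + 1) * frame_len]
        spectrum = np.abs(np.fft.rfft(frame)) ** 2
        bands = np.array_split(spectrum, n_bands)
        feats.append([float(b.sum()) for b in bands])  # integrate each subband
    return np.array(feats)  # shape (n_frames, n_bands)
```

Each recording thus yields a small frames-by-bands matrix per channel, carrying both temporal (frame index) and spectral (band index) information, which is the dual character the abstract attributes to the parameters.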
Local analysis of phase transitions in networks with varying connection strengths
F. McFadden, Y. Peng, J. Reggia
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992-06-07. DOI: 10.1109/IJCNN.1992.227191
It has been observed in networks with rapidly varying connection strengths that individual node activation levels can grow explosively in a phase where total network activation remains bounded. Building on the results reported by F. McFadden et al. (1991), the authors extend the previous analysis to a more general class of connectionist models and identify additional phase transition boundaries not covered by previous research. Sufficient conditions are derived for boundedness of the activation vector of the system, not only of the total activation, and for divergence in the absence of external input. The mathematical results are illustrated by computer simulations using a competitive activation model, and the simulations are used to explore the phase space.
Design and development of a real-time neural processor using the Intel 80170NX ETANN
L. R. Kern
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992-06-07. DOI: 10.1109/IJCNN.1992.226908
The Naval Air Warfare Center Weapons Division is designing and developing a real-time neural processor for missile seeker applications. The system uses a high-speed digital computer as the user interface and as a monitor for processing; using a standard digital computer as the interface lets the user develop the process in whatever programming environment is desired. With the capability to store up to 64K of output data on each frame, the system can process two-dimensional image data in excess of video rates. The real-time communication bus, with user-defined interconnect structures, enables the system to solve a wide variety of problems; it is best suited to local-area processing of two-dimensional images. Each layer has the capacity to represent up to 65536 neurons, and the fully operational system may contain up to 12 such layers, giving the total system a capacity in excess of 745000 neurons.
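The capacity figures quoted above are mutually consistent, reading the 64K output values per frame as 65536 neurons per layer:

```python
NEURONS_PER_LAYER = 64 * 1024   # 64K output values per frame
MAX_LAYERS = 12                 # fully operational system
TOTAL = NEURONS_PER_LAYER * MAX_LAYERS
print(TOTAL)  # 786432, i.e. "in excess of 745000 neurons"
```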