"Modeling users with neural architectures"
Q. Chen, A. F. Norcio
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992-06-07. DOI: 10.1109/IJCNN.1992.287155
A research framework for building a user modeling system using artificial neural network (ANN) approaches is proposed. First, problems in user modeling that motivate the introduction of ANN approaches are discussed. Second, considerations on ANN properties and their application to task-related user modeling are presented. Finally, an ANN-based, integrated user modeling system is proposed that incorporates conventional symbolic reasoning approaches in a multilevel processing environment.

"A radial basis function neurocomputer implemented with analog VLSI circuits"
S. S. Watkins, P. Chau, R. Tawel
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992-06-07. DOI: 10.1109/IJCNN.1992.226921
An electronic neurocomputer which implements a radial basis function neural network (RBFNN) is described. The RBFNN is a network that utilizes a radial basis function as the transfer function. The key advantages of RBFNNs over existing neural network architectures include reduced learning time and the ease of VLSI implementation. This neurocomputer is based on an analog/digital hybrid design and has been constructed with both custom analog VLSI circuits and a commercially available digital signal processor. The hybrid architecture is selected because it offers high computational performance while compensating for analog inaccuracies, and it features the ability to model large problems.

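The transfer function described above is easy to state concretely. Below is a minimal software sketch of an RBF network's forward pass; the paper's implementation is analog VLSI, and the centers, widths, and weights here are illustrative values, not parameters from the hardware:

```python
import math

def rbf_forward(x, centers, widths, weights, bias=0.0):
    """Forward pass of a radial basis function network:
    y = bias + sum_j w_j * exp(-||x - c_j||^2 / (2 * s_j^2))."""
    y = bias
    for c, s, w in zip(centers, widths, weights):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        y += w * math.exp(-d2 / (2.0 * s * s))
    return y
```

At an input coinciding with one well-separated center, the output is approximately that center's weight, since the other basis functions have decayed to near zero.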
"Feed forward networks and the Cramer-Rao bound"
W. F. Schmidt, R. Duin
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992-06-07. DOI: 10.1109/IJCNN.1992.287114
The weight space of feedforward networks is described by a probability density function whose maximum lies at the optimal set of weights. This density follows from a property of maximum likelihood estimators, and the covariance matrix of the distribution is the Cramer-Rao lower bound. For certain classes of problems, minimizing the mean squared error is equivalent to maximum likelihood estimation. For these problems the probability density function is closely related to the mean squared error criterion, so results derived from the density also hold for the mean squared error surface. An analysis of the probability density function provides some theoretical understanding of the error surface and learning dynamics.

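The link the abstract draws between mean squared error fitting, maximum likelihood, and the Cramer-Rao lower bound can be checked numerically on a toy model. The sketch below assumes a one-parameter linear model y = w*x + Gaussian noise (an illustrative stand-in, not the paper's network), for which the least-squares estimator is the maximum likelihood estimator and attains the bound:

```python
import random

def crlb_demo(trials=2000, sigma=0.5, seed=0):
    """Compare the empirical variance of the least-squares (= ML) estimate
    of w in y = w*x + N(0, sigma^2) against the Cramer-Rao lower bound."""
    rng = random.Random(seed)
    xs = [0.5 * i for i in range(1, 9)]           # fixed design points
    w_true = 1.3
    fisher = sum(x * x for x in xs) / sigma ** 2  # Fisher information I(w)
    crlb = 1.0 / fisher                           # Cramer-Rao lower bound
    ests = []
    for _ in range(trials):
        ys = [w_true * x + rng.gauss(0.0, sigma) for x in xs]
        # least-squares estimate, which is the MLE under Gaussian noise
        w_hat = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
        ests.append(w_hat)
    mean = sum(ests) / trials
    var = sum((e - mean) ** 2 for e in ests) / trials
    return var, crlb
```

For this linear-Gaussian case the estimator is efficient, so the empirical variance matches the bound up to sampling noise.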
"Generation of organized internal representation in recurrent neural networks"
R. Kamimura
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992-06-07. DOI: 10.1109/IJCNN.1992.287116
A method is presented for organizing internal representations (hidden unit patterns) so as to increase information-theoretic redundancy in recurrent neural networks. This redundancy is taken to reflect the degree of organization or structure in the hidden unit patterns, and the resulting representations should make the mechanism of a network easier to interpret explicitly. One problem in recurrent neural networks is that connection weights become smaller as the number of units grows, producing uniform or random activity values at the hidden units; this makes the meaning of the hidden units difficult to interpret. To cope with this problem, a complexity term proposed by D.E. Rumelhart is used. With a modified complexity term, connections can be highly activated, meaning that they can take larger absolute values. After a brief formulation of recurrent backpropagation with the complexity term, experimental results on three tasks are presented: the XOR problem, a negation problem, and a sentence well-formedness problem.

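The Rumelhart-style complexity term referred to above has a simple closed form. A sketch of the penalty and the gradient contribution it adds to the backpropagation update is given below; treating the abstract's "modified" variant as a sign-flipped version that rewards large weights is an assumption:

```python
def complexity_term(weights):
    """Rumelhart-style complexity penalty: sum_i w_i^2 / (1 + w_i^2).
    It saturates for large |w|, so it mainly drives small weights to zero."""
    return sum(w * w / (1.0 + w * w) for w in weights)

def complexity_grad(w):
    """d/dw of w^2/(1+w^2) = 2w / (1+w^2)^2, added per-weight to the
    error gradient during training."""
    return 2.0 * w / (1.0 + w * w) ** 2
```

A quick finite-difference check confirms the analytic gradient.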
"Fast neural solution of a nonlinear wave equation"
N. Toomarian, J. Barhen
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992-06-07. DOI: 10.1109/IJCNN.1992.227044
A neural algorithm for rapidly simulating a certain class of nonlinear wave phenomena using analog VLSI neural hardware is presented and applied to the Korteweg-de Vries partial differential equation. The corresponding neural architecture is obtained from a pseudospectral representation of the spatial dependence, along with a leap-frog scheme for the temporal evolution. Numerical simulations demonstrate the robustness of the proposed approach.

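The pseudospectral-plus-leap-frog construction can be sketched directly in software as a numerical reference model (not the analog VLSI realization). Derivatives are taken in Fourier space via a plain O(n^2) DFT to keep the sketch dependency-free; the grid size, time step, and soliton initial condition are illustrative:

```python
import cmath
import math

def dft(u):
    n = len(u)
    return [sum(u[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(U):
    n = len(U)
    return [sum(U[k] * cmath.exp(2j * math.pi * k * j / n) for k in range(n)).real / n
            for j in range(n)]

def spectral_deriv(u, order, length):
    """Pseudospectral derivative: multiply by (i*k)^order in Fourier space."""
    n = len(u)
    U = dft(u)
    waves = [k if k <= n // 2 else k - n for k in range(n)]
    return idft([(1j * 2.0 * math.pi * k / length) ** order * U[i]
                 for i, k in enumerate(waves)])

def kdv_leapfrog(u0, dt, steps, length):
    """Integrate the KdV equation u_t + 6 u u_x + u_xxx = 0 with a leap-frog
    scheme in time and pseudospectral derivatives in space."""
    def rhs(u):
        ux = spectral_deriv(u, 1, length)
        uxxx = spectral_deriv(u, 3, length)
        return [6.0 * a * b + c for a, b, c in zip(u, ux, uxxx)]
    prev = list(u0)
    cur = [p - dt * r for p, r in zip(prev, rhs(prev))]   # forward-Euler bootstrap
    for _ in range(steps - 1):
        nxt = [p - 2.0 * dt * r for p, r in zip(prev, rhs(cur))]
        prev, cur = cur, nxt
    return cur
```

Starting from the one-soliton solution u = (c/2) sech^2(sqrt(c)/2 (x - ct)), a short integration should preserve the soliton's amplitude and its integral (mass) to good accuracy.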
"A neural network approach to on-line monitoring of a turning process"
R. G. Khanchustambham, G.M. Zhang
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992-06-07. DOI: 10.1109/IJCNN.1992.226875
A framework for sensor-based intelligent decision-making systems that perform online monitoring is proposed. Such a monitoring system interprets the detected signals from the sensors, extracts the relevant information, and decides on the appropriate control action. Emphasis is given to applying neural networks to perform information processing and to recognizing process abnormalities in machining operations. A prototype monitoring system is implemented. For signal detection, an instrumented force transducer is designed and used in a real-time turning operation. A neural network monitor, based on a feedforward backpropagation algorithm, is developed and trained on the detected cutting force signal and the measured surface finish. The learning and noise suppression abilities of the monitor enable high success rates in monitoring the cutting force and the quality of surface finish when machining advanced ceramic materials.

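As a toy stand-in for the monitor described above, a single logistic unit (the simplest feedforward backpropagation case) can be trained to flag abnormal force amplitudes. The force values, labels, and threshold behaviour below are invented for illustration and are not the paper's data:

```python
import math
import random

def train_monitor(samples, labels, epochs=500, lr=0.5, seed=1):
    """Train one logistic unit by stochastic gradient descent on the
    cross-entropy loss to separate normal (0) from abnormal (1) readings."""
    rng = random.Random(seed)
    w, b = rng.uniform(-0.1, 0.1), 0.0
    for _ in range(epochs):
        for x, t in zip(samples, labels):
            y = 1.0 / (1.0 + math.exp(-(w * x + b)))
            err = y - t          # gradient of cross-entropy wrt pre-activation
            w -= lr * err * x
            b -= lr * err
    return w, b

def classify(x, w, b):
    """Flag a reading as abnormal (1) when the unit's output exceeds 0.5."""
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5 else 0
```

After training on separable data, readings well inside either class are flagged correctly.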
"Profiting from innovation"
B. Guile
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992-06-07. DOI: 10.1109/IJCNN.1992.227323
There are several types of commercialization process, each with its own pace, indicators of progress, organizational approaches, and risk factors. Understanding which of these is under way, and the nature of the true challenges faced, is a major factor in consistently successful efforts. The work described here is drawn from a National Academy of Engineering (NAE) study of US industrial commercialization: the translation of innovative ideas into marketplace success as profitable products, processes, and services. An important aspect of the NAE study was the search for useful tools to aid the management process. The best commercialization organization depends on the nature of the commercialization activity at hand. These observations emphasize the importance of a company's leadership understanding the nature of the business at hand.

"Global convergence of feedforward networks of learning automata"
V. V. Phansalkar, M. Thathachar
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992-06-07. DOI: 10.1109/IJCNN.1992.227089
A feedforward network composed of units that are teams of parameterized learning automata is considered as a model of a reinforcement learning system. The parameters of each learning automaton are updated by an algorithm consisting of a gradient-following term and a random perturbation term. The algorithm is approximated by the Langevin equation and is shown to converge to the global maximum. The algorithm is decentralized, and the units exchange no information during updating. Simulation results on a pattern recognition problem show that reasonable rates of convergence can be obtained.

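The update rule sketched in the abstract, a gradient-following term plus a random perturbation term, can be illustrated on a one-dimensional multimodal objective. The cooling schedule, clipping, and test function below are illustrative assumptions, not the paper's algorithm:

```python
import math
import random

def langevin_maximize(grad, x0, steps=5000, lr=0.01, temp0=0.5, seed=3):
    """Gradient ascent plus Gaussian perturbation; the iterate approximates a
    Langevin diffusion whose stationary distribution concentrates near the
    maxima of the objective as the temperature is lowered."""
    rng = random.Random(seed)
    x = x0
    for k in range(steps):
        temp = temp0 / (1.0 + 0.01 * k)                 # slowly cooled noise
        x += lr * grad(x) + math.sqrt(2.0 * lr * temp) * rng.gauss(0.0, 1.0)
        x = max(-5.0, min(5.0, x))                      # keep the sketch stable
    return x
```

On the double-peaked objective f(x) = -(x^2 - 1)^2 - 0.2x, whose maxima sit near x = +/-1, the perturbation lets the iterate cross between peaks early on, and cooling leaves it settled near one of them.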
"Diagnosis: hypothetical reasoning with a competition-based neural architecture"
S. Wang, B. El Ayeb
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992-06-07. DOI: 10.1109/IJCNN.1992.287196
Diagnosis is an active research area in which many diagnostic methods have been proposed. The features that characterize these methods are made explicit in a conventional framework as well as in a neural framework. Specifically, it is shown that each type of representation of diagnostic knowledge requires a specific type of reasoning, whichever framework, logical or neural, is adopted. A competition-based neural architecture is proposed to mechanize hypothetical reasoning.

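A minimal sketch of competition-based hypothesis selection, assuming a simple symptom-coverage score and mutual lateral inhibition; both are invented for illustration and do not reproduce the paper's architecture:

```python
def hypothesis_scores(hypotheses, observed):
    """Score each fault hypothesis by the fraction of observed symptoms
    it explains (each hypothesis is the set of symptoms it predicts)."""
    return [len(set(h) & set(observed)) / float(len(observed))
            for h in hypotheses]

def winner_take_all(scores, eps=0.05, max_iters=1000):
    """Mutual lateral inhibition: every unit suppresses the others until a
    single hypothesis stays active; the initial leader wins."""
    a = list(scores)
    for _ in range(max_iters):
        if sum(1 for ai in a if ai > 0.0) <= 1:
            break
        total = sum(a)
        a = [max(0.0, ai - eps * (total - ai)) for ai in a]
    return a.index(max(a))
```

The hypothesis explaining all observed symptoms out-competes partial explanations.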
"Application of fuzzy neural networks to medical image processing"
W. Gan
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992-06-07. DOI: 10.1109/IJCNN.1992.227314
The use of fuzzy neural networks to improve the resolution and the segmentation of medical images is proposed. A backpropagation neural network is used to obtain an optimized membership function. Algorithms implementing the fuzzy neural networks for both applications are presented, along with preliminary results. Compared with conventional neural networks, fuzzy neural networks reduce the number of elements in each network layer, and computation time is reduced accordingly. Only tomographic images are considered.

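The membership-function optimization mentioned above can be sketched as gradient descent on the centre and width of a Gaussian membership function, which is the role the abstract assigns to the backpropagation network; the data and the "true" parameters below are synthetic assumptions:

```python
import math

def membership(x, c, s):
    """Gaussian fuzzy membership with centre c and width s."""
    return math.exp(-((x - c) ** 2) / (2.0 * s * s))

def fit_membership(data, targets, c, s, lr=0.05, epochs=3000):
    """Tune (c, s) by stochastic gradient descent on squared error,
    the simplest backpropagation-style optimization of a membership function."""
    for _ in range(epochs):
        for x, t in zip(data, targets):
            m = membership(x, c, s)
            err = m - t
            dm_dc = m * (x - c) / (s * s)        # d membership / d centre
            dm_ds = m * (x - c) ** 2 / (s ** 3)  # d membership / d width
            c -= lr * err * dm_dc
            s -= lr * err * dm_ds
    return c, s
```

On noiseless targets generated from a known membership function, the fitted centre and width recover the generating parameters.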