Financial forecasting and rules extraction from trained networks
R. Kane, N. Milgram
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374745
This paper describes a forecasting approach using constrained networks. Two complementary approaches are proposed. The first leads to an efficient predictive algorithm based on backpropagation: some units are constrained to hold the logical information of the network, while the unconstrained units keep the numerical information, so the task of each unit is defined during training. The second approach focuses on rule extraction. Using constrained networks, we are able to extract information from trained networks. This property is essential, as it makes it possible to analyze, explain, extract, and therefore control what happens inside trained networks. Simulation results for both approaches are reported.
Structure adaptation in feed-forward neural networks
K. Khorasani, W. Weng
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374491
In this paper, two new structures (algorithms) are proposed for adaptively adjusting the network structure. Both neuron pruning and neuron generation are considered for a feedforward neural network. Simulation results are presented to confirm the improvements obtained by the proposed algorithms.
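The pruning half of such a structure-adaptation scheme can be sketched in a few lines. The criterion used here (deleting hidden units whose outgoing-weight norm falls below a threshold) is a common illustrative choice, not the authors' specific rule:

```python
import numpy as np

def prune_hidden_units(W1, W2, threshold=0.1):
    """Remove hidden units whose outgoing weights are all small.

    W1: (n_in, n_hidden) input-to-hidden weights
    W2: (n_hidden, n_out) hidden-to-output weights
    A unit whose outgoing-weight norm falls below `threshold`
    contributes little to the output and is deleted.
    """
    norms = np.linalg.norm(W2, axis=1)   # one norm per hidden unit
    keep = norms >= threshold            # mask of units to retain
    return W1[:, keep], W2[keep, :]

W1 = np.ones((3, 4))
W2 = np.array([[1.0, 2.0],
               [0.01, 0.02],   # weak unit -> pruned
               [0.5, 0.5],
               [0.0, 0.03]])   # weak unit -> pruned
W1p, W2p = prune_hidden_units(W1, W2)
print(W1p.shape, W2p.shape)    # (3, 2) (2, 2)
```

The generation step would be the mirror image: when training error stalls, append a column to `W1` and a row to `W2` for a freshly initialized unit.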
Comparing artificial neural networks to other statistical methods for medical outcome prediction
H. Burke, D. B. Rosen, P. Goodman
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374560
Survival prediction is important in cancer because it determines therapy, matches patients for clinical trials, and provides patient information. Is a backpropagation neural network more accurate at predicting survival in breast cancer than the current staging system? For over thirty years, cancer outcome prediction has been based on the pTNM staging system. There are two problems with this system: (1) it is not very accurate, and (2) its accuracy cannot be improved, because predictive variables cannot be added to the model without increasing its complexity to the point where it is no longer useful to the clinician. Using the area under the curve (AUC) of the receiver operating characteristic, the authors compare the accuracy of the following predictive models: pTNM stage, principal components analysis, classification and regression trees, logistic regression, cascade correlation neural network, conjugate gradient descent neural network, backpropagation neural network, and probabilistic neural network. Using just the TNM variables, both the backpropagation neural network (AUC 0.768) and the probabilistic neural network (AUC 0.759) are significantly more accurate than the pTNM stage system (AUC 0.720; all SEs < .01, p < .01 for both models compared with the pTNM model). Adding variables further increases the prediction accuracy of the backpropagation neural network (AUC 0.779) and the probabilistic neural network (AUC 0.777). Adding the new prognostic factors p53 and HER-2/neu increases the backpropagation neural network's accuracy to an AUC of 0.850. The neural networks perform equally well when applied to another breast cancer data set and to a colorectal cancer data set. Neural networks are able to significantly improve breast cancer outcome prediction accuracy compared with the TNM stage system. They can combine prognostic factors to further improve accuracy. Neural networks are robust across databases and cancer sites. They can perform as well as the best traditional prediction methods, and they can capture the power of nonmonotonic predictors and discover complex genetic interactions.
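The AUC statistic used to compare these models has a simple rank interpretation: it is the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A generic sketch of that computation (not code from the study; the data below are invented):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs in which the
    positive case receives the higher score (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels  = [1, 1, 1, 0, 0, 0]
perfect = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]   # separates classes fully
weak    = [0.9, 0.4, 0.7, 0.8, 0.2, 0.1]   # one negative outscores two positives
print(auc(perfect, labels))           # 1.0
print(round(auc(weak, labels), 3))    # 0.778
```

An AUC of 0.5 corresponds to chance-level scoring, which is why differences such as 0.768 versus 0.720 above are meaningful on this scale.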
A Lyapunov machine for stability analysis of nonlinear systems
D. V. Prokhorov
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374324
Dynamic analysis of a nonlinear system requires a tool for studying arbitrary sets of positive semi-trajectories of the system, rather than only single semi-trajectories. Such a study is difficult because of its very high computational complexity. This paper proposes a Lyapunov machine as a possible tool for stability analysis of nonlinear autonomous systems. The Lyapunov machine is able to test global asymptotic stability, to isolate local asymptotic stability domains, and to approximate a Lyapunov function for the system.
Recurrent neural networks and Fibonacci numeration system of order s (s ≥ 2)
M. Yacoub
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374510
In the Fibonacci numeration system of order s (s ≥ 2), every positive integer admits a unique representation that does not contain s consecutive digits equal to 1 (called the normal form). We show how this normal form can be obtained from any representation by recurrent neural networks. The addition of two integers in this system, and the conversion from a Fibonacci representation to a standard binary representation (and conversely), can also be realized using recurrent neural networks.
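For the order-2 case, the normal form is the Zeckendorf representation (no two consecutive 1s), and normalization amounts to repeatedly applying the identity F(k+1) = F(k) + F(k-1) as a digit rewrite. A plain-Python sketch of that rewrite for 0/1 digit strings (the arithmetic the networks compute, not the recurrent-network construction itself):

```python
def normalize(digits):
    """Rewrite an order-2 Fibonacci representation (0/1 digits,
    most-significant first, weights 1, 2, 3, 5, 8, ...) into normal
    (Zeckendorf) form by replacing the pattern 0,1,1 with 1,0,0
    until no two adjacent 1s remain."""
    d = [0] + list(digits)            # headroom for a final carry
    changed = True
    while changed:
        changed = False
        for i in range(len(d) - 2):
            if d[i] == 0 and d[i + 1] == 1 and d[i + 2] == 1:
                d[i], d[i + 1], d[i + 2] = 1, 0, 0
                changed = True
    while len(d) > 1 and d[0] == 0:   # strip leading zeros
        d.pop(0)
    return d

def value(d):
    """Evaluate msb-first digits against the Fibonacci weights."""
    fib = [1, 2]
    while len(fib) < len(d):
        fib.append(fib[-1] + fib[-2])
    return sum(w * x for w, x in zip(fib, reversed(d)))

print(normalize([1, 1, 1]))  # [1, 0, 0, 1]: 3 + 2 + 1 = 5 + 1 = 6
```

Each rewrite lowers the digit sum by one, so the loop terminates, and since it only applies the Fibonacci identity, the represented integer is unchanged.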
Pattern theory for character recognition
J. Jean, K. Xue, S. Goel
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374939
Pattern theory is an engineering theory of algorithm design that provides a robust characterization of all types of patterns. Like logical neural networks, the theory can be used to generalize from a set of training data; however, it optimizes the network architecture as well as the "weights" of the resulting machine. In this paper, the application of the theory to character recognition is considered. The application requires a simple extension to the theory and a faster algorithm for a basic decomposition operation. Such an algorithm is developed and described in the paper. Some simulation results of the algorithm are also included.
Learning and tuning fuzzy logic controllers through genetic algorithm
Shuqing Zeng, Yongbao He
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374400
This paper reviews current fuzzy control technology from the engineering point of view and presents a new method for learning and tuning a fuzzy controller for a dynamic system based on a genetic algorithm (GA). In particular, it endows the fuzzy controller with a self-learning capability for achieving the prescribed control objective in a near-optimal manner. The methodology first adopts expert experience; it then uses the GA to find the fuzzy controller's optimal set of parameters. In using a GA, we must define an objective function to measure the performance of the controller. Since the behaviour of the dynamic system is hard to predict, a three-layer feedforward network has been adopted to model it. To accelerate the learning process, a conventional simplex optimization algorithm is used to reduce the search space. Finally, an example is given to show the potential of the method.
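The GA-over-parameters loop described above can be sketched generically. The operators (tournament selection, uniform crossover, Gaussian mutation) and the toy objective standing in for the controller-performance measure are illustrative assumptions, not the authors' choices:

```python
import random

def genetic_tune(objective, n_params, pop_size=30, generations=60,
                 mutation=0.1, lo=-2.0, hi=2.0, seed=0):
    """Minimal real-valued GA: tournament selection, uniform
    crossover, Gaussian mutation. Returns the best parameter
    vector found for the (to-be-minimized) objective."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament of two: keep the fitter individual
            a, b = rng.sample(pop, 2)
            return a if objective(a) < objective(b) else b
        nxt = []
        for _ in range(pop_size):
            p, q = pick(), pick()
            child = [x if rng.random() < 0.5 else y for x, y in zip(p, q)]
            child = [x + rng.gauss(0, mutation) for x in child]
            nxt.append(child)
        pop = nxt
    return min(pop, key=objective)

# toy stand-in for a controller-performance measure, optimum at [1, 1, 1]
best = genetic_tune(lambda p: sum((x - 1.0) ** 2 for x in p), n_params=3)
print(best)  # parameters near [1, 1, 1]
```

The simplex step mentioned in the abstract would slot in as a local refinement of `best` after the GA's global search.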
A neural network based real-time robot tracking controller using position sensitive detectors
Hyoung‐Gweon Park, Se-Young Oh
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374666
A real-time visual servo tracking system for an industrial robot has been developed. A position sensitive detector (PSD), instead of a CCD, is used as a real-time vision sensor because of its fast response (the position is converted to an analog current). A neural network learns the complex association between the object position and the sensor reading, and uses it to track that object. This scheme also lends itself to a convenient way to teach a workpath to the robot. Furthermore, for real-time use of the neural net, a novel architecture has been developed based on input space partitioning and local learning. It exhibits fast processing and learning as well as optimal usage of hidden neurons.
High capacity for the Hopfield neural networks
Chang-Jiu Chen, J. Cheung, A. Haque
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374349
In this paper, we apply the memorized vectors of our high-capacity model to the Hopfield model. We find that the Hopfield model can also achieve a high capacity.
Illusory contour detection using MRF models
S. Madarasmi, T. Pong, D. Kersten
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374966
This paper presents a computational model for obtaining relative depth information from image contours. Local occlusion properties, such as T-junctions and concavity, are used to arrive at a global percept of distinct surfaces at various relative depths. A multilayer representation is used to classify each image pixel into the appropriate depth plane based on local information from the occluding contours. A Bayesian framework incorporates the constraints defined by the contours together with prior constraints. A solution corresponding to the maximum a posteriori (MAP) probability is then determined, resulting in a depth and surface assignment for each image site (pixel). The algorithm was tested on various contour images, including two classes of illusory surfaces: the Kanizsa (1979) and the line-termination illusory contours.
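The MAP-labeling step in such an MRF formulation can be illustrated on a 1-D toy problem: each site balances a data cost (here standing in for the contour-derived evidence) against a Potts smoothness prior. Iterated conditional modes (ICM) is used as a generic approximate optimizer; the paper's exact inference scheme is not reproduced here:

```python
import numpy as np

def icm_labels(data_cost, smooth=1.0, iters=10):
    """Approximate the MAP labeling of a 1-D MRF by iterated
    conditional modes: each site takes the label minimizing its
    own data cost plus a Potts penalty for disagreeing with its
    neighbors. data_cost has shape (n_sites, n_labels)."""
    n, k = data_cost.shape
    labels = data_cost.argmin(axis=1)          # start from the data term
    for _ in range(iters):
        for i in range(n):
            costs = data_cost[i].copy()
            for j in (i - 1, i + 1):           # Potts smoothness term
                if 0 <= j < n:
                    costs += smooth * (np.arange(k) != labels[j])
            labels[i] = costs.argmin()
    return labels

# two "depth planes": label 0 favored on the left, label 1 on the
# right, with one contradictory site at index 2
data = np.array([[0.0, 1.0], [0.0, 1.0], [0.9, 0.1],
                 [0.0, 1.0], [1.0, 0.0], [1.0, 0.0]])
print(icm_labels(data, smooth=0.5))  # [0 0 0 0 1 1]: the prior flips the outlier
```

With `smooth=0`, the outlier at index 2 would keep label 1; the smoothness prior is what produces coherent surface regions.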