A day-ahead electricity price prediction based on a fuzzy-neuro autoregressive model in a deregulated electricity market
Pub Date: 2002-08-07  DOI: 10.1109/IJCNN.2002.1007714
T. Niimura, H. Ko, K. Ozawa
Presents a fuzzy regression model for estimating uncertain electricity market prices in a deregulated industry environment. The price of electricity in a deregulated market is highly volatile over time, so it is difficult to estimate an accurate market price from historically observed data. In the proposed method, uncertain market prices are estimated by an autoregressive (AR) model implemented with a neural network, and this time-series model is extended to a fuzzy model to capture the possible ranges of market prices. The neural network finds the crisp value of the AR model, and the low and high ranges of the fuzzy model are then found by linear programming. The proposed model can therefore represent the possible range of a day-ahead market price. As a numerical example, the model is applied to California Power Exchange market data.
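A minimal sketch of this interval-fitting step, under stated assumptions rather than the paper's exact formulation: the crisp day-ahead forecast is taken as given (a stand-in for the neural AR output), and non-negative, lag-dependent spread coefficients are found by linear programming so that the resulting fuzzy band covers every observed price while its total width is minimized. The function `fuzzy_spreads` and the toy data are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def fuzzy_spreads(lags, prices, crisp_forecast):
    """lags: (T, p) lagged-price matrix; prices: (T,) observed prices;
    crisp_forecast: (T,) crisp neural-AR predictions. Returns spreads c (p,)."""
    X = np.abs(lags)                              # |x_tj|
    resid = np.abs(prices - crisp_forecast)       # deviation the band must cover
    cost = X.sum(axis=0)                          # total band width to minimise
    res = linprog(cost, A_ub=-X, b_ub=-resid,     # sum_j c_j |x_tj| >= |resid_t|
                  bounds=[(0, None)] * X.shape[1])
    return res.x

# toy usage with synthetic hourly prices (illustrative only)
rng = np.random.default_rng(0)
y = 30 + 5 * rng.standard_normal(200)
lags = np.column_stack([np.roll(y, k) for k in (1, 2, 24)])[24:]
y_obs = y[24:]
y_hat = y_obs + rng.standard_normal(len(y_obs))  # stand-in for the neural AR forecast
c = fuzzy_spreads(lags, y_obs, y_hat)
low, high = y_hat - np.abs(lags) @ c, y_hat + np.abs(lags) @ c   # day-ahead price band
```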
Separable recursive training algorithms for feedforward neural networks
Pub Date: 2002-08-07  DOI: 10.1109/IJCNN.2002.1007667
V. Asirvadam, S.F. McLoone, G. Irwin
Novel separable recursive training strategies are derived for feedforward neural networks. These hybrid algorithms combine recursive nonlinear optimization of the hidden-layer weights with recursive least-squares optimization of the linear output-layer weights in one integrated routine. Experimental results on two benchmark problems demonstrate the superiority of the new hybrid training schemes over conventional counterparts.
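The hybrid idea lends itself to a compact sketch. The following per-sample routine assumes a single-hidden-layer MLP with tanh units and a scalar output, refining the linear output weights by recursive least squares (RLS) while the hidden-layer weights take a recursive gradient step; the class name, step size, and forgetting factor are illustrative choices, not the authors' published algorithm.

```python
import numpy as np

class HybridRLSNet:
    def __init__(self, n_in, n_hidden, lam=0.99, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.1 * rng.standard_normal((n_hidden, n_in + 1))  # hidden weights (+bias)
        self.w = np.zeros(n_hidden + 1)                           # linear output weights (+bias)
        self.P = 1e3 * np.eye(n_hidden + 1)                       # RLS covariance
        self.lam, self.lr = lam, lr

    def _hidden(self, x):
        return np.tanh(self.W @ np.append(x, 1.0))

    def update(self, x, y):
        h = np.append(self._hidden(x), 1.0)
        # RLS step for the linear output layer
        Ph = self.P @ h
        k = Ph / (self.lam + h @ Ph)
        err = y - self.w @ h
        self.w += k * err
        self.P = (self.P - np.outer(k, Ph)) / self.lam
        # recursive gradient step for the nonlinear hidden layer
        delta = err * self.w[:-1] * (1.0 - h[:-1] ** 2)   # backprop through tanh
        self.W += self.lr * np.outer(delta, np.append(x, 1.0))
        return err

# toy usage: learn y = sin(x) online
net = HybridRLSNet(n_in=1, n_hidden=10)
for x in np.random.default_rng(1).uniform(-3, 3, 500):
    net.update(np.array([x]), np.sin(x))
```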
A self-organizing approach for integrating multidimensional sensors in process control
Pub Date: 2002-08-07  DOI: 10.1109/IJCNN.2002.1005598
D. Sbarbaro, T. Johansen
Multidimensional sensors can deliver vast and rich information about the operation of industrial processes. They are popular at the supervisory level in industrial applications; however, their use at the control level is not very common, and there are no standard methodologies for designing a control system based on the information provided by this type of sensor. This paper describes an approach, based on self-organizing maps (Kohonen networks), for integrating the information provided by multidimensional sensors into process control. A simulated example illustrates the main characteristics and performance of the proposed approach.
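One way to read this integration, sketched below under assumptions that go beyond the abstract: a small self-organizing map is trained on the multidimensional sensor vectors, each codebook unit is associated with a recorded controller setting, and a new sensor reading is mapped to the setting of its best-matching unit. All function names and schedules are illustrative.

```python
import numpy as np

def train_som(data, grid=(6, 6), epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    rows, cols = grid
    W = rng.uniform(data.min(0), data.max(0), size=(rows * cols, data.shape[1]))
    coords = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
    n_steps = epochs * len(data)
    for t, x in enumerate(data[rng.integers(0, len(data), n_steps)]):
        frac = t / n_steps
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        bmu = np.argmin(((W - x) ** 2).sum(1))               # best-matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(1)
        W += lr * np.exp(-d2 / (2 * sigma ** 2))[:, None] * (x - W)
    return W

def associate_actions(W, data, actions):
    """Average the recorded controller setting over the samples won by each unit."""
    wins = np.argmin(((data[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
    out = np.full(len(W), np.nan)                            # NaN for units that win nothing
    for u in range(len(W)):
        if np.any(wins == u):
            out[u] = actions[wins == u].mean()
    return out

def control_setting(W, unit_actions, sensor_vector):
    return unit_actions[np.argmin(((W - sensor_vector) ** 2).sum(1))]

# toy usage: four sensor channels, setting correlated with channel 0
rng = np.random.default_rng(2)
sensors = rng.normal(size=(300, 4))
settings = 2.0 * sensors[:, 0] + 0.1 * rng.normal(size=300)
W = train_som(sensors)
unit_actions = associate_actions(W, sensors, settings)
print(control_setting(W, unit_actions, sensors[0]))
```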
Verification of performance of a neural network estimator
Pub Date: 2002-08-07  DOI: 10.1109/IJCNN.2002.1007559
R. Zakrzewski
This paper presents an approach for verifying the performance of a feedforward neural net trained as a static nonlinear estimator, with a view to its use on commercial aircraft. The problem is important in the context of safety-critical applications that require certification, such as flight software in aircraft. The algorithm presented here extends a previously published verification method developed for nets that approximate look-up tables. Through a suitable transformation, the problem is converted into verifying an approximation to a look-up table over a hyper-rectangular domain, and the previously developed technique is then applied. It is based on traversing a uniform testing grid and evaluating the error at every node, yielding guaranteed upper bounds on the error between the neural net estimate and the true value of the estimated quantity. The method allows deterministic verification of nets trained off-line to perform safety-critical estimation tasks.
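A minimal sketch of such a grid-based check, with one explicit assumption the abstract does not spell out: to turn node-wise errors into a bound valid between grid nodes, Lipschitz constants for the network and for the reference look-up table must be supplied by the caller. The helper names and stand-in functions are illustrative, not the paper's.

```python
import itertools
import numpy as np

def grid_error_bound(net, table, lows, highs, steps, L_net, L_table):
    """Evaluate |net(x) - table(x)| at every node of a uniform grid over the
    hyper-rectangle [lows, highs] and return a bound valid between nodes."""
    axes = [np.linspace(lo, hi, n) for lo, hi, n in zip(lows, highs, steps)]
    h = np.array([(hi - lo) / (n - 1) for lo, hi, n in zip(lows, highs, steps)])
    worst_node_err = 0.0
    for point in itertools.product(*axes):          # traverse the testing grid
        x = np.array(point)
        worst_node_err = max(worst_node_err, abs(net(x) - table(x)))
    # any domain point lies within half a grid cell of some node (Euclidean norm)
    slack = (L_net + L_table) * 0.5 * np.linalg.norm(h)
    return worst_node_err + slack

# illustrative usage with stand-in functions (not from the paper)
net = lambda x: np.tanh(x).sum()
table = lambda x: np.tanh(x).sum() + 0.01 * np.sin(5 * x[0])
bound = grid_error_bound(net, table, lows=[-1, -1], highs=[1, 1],
                         steps=[21, 21], L_net=2.0, L_table=2.1)
print(bound)
```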
Functional networks for CAD problems
Pub Date: 2002-08-07  DOI: 10.1109/IJCNN.2002.1007570
B. Chandra, S. Singh
Some basic real-life problems cannot be solved using classical mathematical techniques. In this paper, functional networks are used effectively to solve practical CAD problems in the plant engineering industry. Modular construction of plants is becoming popular because of severe weather conditions at plant sites; the modules are transported to and assembled at the actual plant site. The temporary structure must be safe during lifting, so it is essential to find the rotation position of the module once it is lifted. This rotation position depends on the center of gravity of the module and the center of rotation about which the module rotates. If the lifting cables meet at a point, that point is the center of rotation; if they do not, no classical mathematical technique is available to find the center of rotation. In this paper, functional networks are successfully applied to solve this problem.
Implementing position-invariant detection of feature-conjunctions in a network of spiking neurons
Pub Date: 2002-08-07  DOI: 10.1109/IJCNN.2002.1007647
S. Bohté, J. Kok, H. la Poutré
The design of neural networks that can efficiently detect conjunctions of features is an important open challenge. We develop a feedforward spiking neural network that requires only a constant number of neurons to detect a conjunction, irrespective of the size of the retinal input field, for up to four simultaneously present feature-conjunctions.
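For illustration only, here is a generic leaky integrate-and-fire coincidence detector of the kind such spiking conjunction networks are typically built from; it is not the paper's architecture. The detector fires when spikes from two feature channels arrive at (nearly) the same time step, regardless of which retinal position produced them; all parameters are illustrative.

```python
import numpy as np

def lif_coincidence(spikes_a, spikes_b, dt=1.0, tau=5.0, w=0.6, thresh=1.0):
    """spikes_a, spikes_b: binary spike trains, one entry per time step."""
    v, out = 0.0, []
    for sa, sb in zip(spikes_a, spikes_b):
        v += dt * (-v / tau) + w * (sa + sb)   # leaky integration of both channels
        if v >= thresh:                        # only near-coincident input crosses threshold
            out.append(1)
            v = 0.0
        else:
            out.append(0)
    return np.array(out)

# coincident spikes at t=10 trigger the detector; isolated spikes at 30/45 do not
a = np.zeros(60); b = np.zeros(60)
a[10] = b[10] = 1
a[30] = 1; b[45] = 1
print(np.flatnonzero(lif_coincidence(a, b)))   # -> [10]
```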
VAT: a tool for visual assessment of (cluster) tendency
Pub Date: 2002-08-07  DOI: 10.1109/IJCNN.2002.1007487
J. Bezdek, R. Hathaway
A method is given for visually assessing the cluster tendency of a set of objects O = {o_1, ..., o_n} when they are represented either as object vectors or by numerical pairwise dissimilarity values. The objects are reordered, and the reordered matrix of pairwise object dissimilarities is displayed as an intensity image. Clusters are indicated by dark blocks of pixels along the diagonal.
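The reordering itself is compact enough to sketch. The following re-implementation follows standard descriptions of VAT (it is not the authors' code): the ordering starts from an endpoint of the largest dissimilarity and repeatedly appends the closest remaining object, Prim-style, and the reordered matrix is shown as a grey-scale image in which dark diagonal blocks suggest clusters.

```python
import numpy as np
import matplotlib.pyplot as plt

def vat_order(D):
    """D: (n, n) symmetric pairwise dissimilarity matrix. Returns an ordering."""
    n = len(D)
    i = np.unravel_index(np.argmax(D), D.shape)[0]   # endpoint of largest dissimilarity
    order, remaining = [i], set(range(n)) - {i}
    while remaining:
        rem = np.array(sorted(remaining))
        sub = D[np.ix_(order, rem)]                  # distances from ordered set to the rest
        j = rem[np.argmin(sub.min(axis=0))]          # closest remaining object
        order.append(j)
        remaining.remove(j)
    return np.array(order)

# toy usage: two well-separated Gaussian blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (25, 2))])
D = np.linalg.norm(X[:, None] - X[None], axis=-1)
idx = vat_order(D)
plt.imshow(D[np.ix_(idx, idx)], cmap="gray")
plt.title("VAT image: dark diagonal blocks indicate cluster tendency")
plt.show()
```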
Modifications of discrete Hopfield neural optimization in maximum clique problem
Pub Date: 2002-08-07  DOI: 10.1109/IJCNN.2002.1005460
Doosung Hwang, F. Fotouhi
Hopfield neural optimization has been studied for the maximum clique problem. A drawback of this approach is its tendency to produce locally optimal solutions, owing to the descent convergence of the energy function. To solve maximum clique problems, discrete Hopfield neural optimization is studied here in combination with heuristics, such as an annealing method and a scheduled learning rate, that permit ascent modifications. Each neuron is updated according to a hill-climbing modification. These modifications provide a mechanism for escaping locally optimal feasible solutions by varying the direction of the neurons' equation of motion. The effectiveness of both modifications is shown through tests on random graphs and DIMACS benchmark graphs in terms of clique size and computation time.
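A minimal sketch of a discrete Hopfield-style clique search with a hill-climbing escape, under assumptions that go beyond the abstract: the penalty weights, bias, and flip schedule below are illustrative choices, not the authors' exact modification, and the routine simply keeps the largest feasible clique visited.

```python
import numpy as np

def hopfield_clique(adj, steps=2000, A=1.0, B=2.0, seed=0):
    rng = np.random.default_rng(seed)
    n = len(adj)
    v = rng.integers(0, 2, n)                      # binary neuron states (vertex in/out)
    best = np.zeros(n, dtype=int)
    for t in range(steps):
        i = rng.integers(n)
        support = adj[i] @ v                       # selected neighbours of vertex i
        conflict = v.sum() - v[i] - support        # selected non-neighbours of vertex i
        v[i] = 1 if A * support - B * conflict + 0.5 > 0 else 0
        # hill-climbing modification: random ascent flip with decaying probability
        if rng.random() < 0.1 * (1 - t / steps):
            j = rng.integers(n)
            v[j] = 1 - v[j]
        sel = np.flatnonzero(v)
        if all(adj[a, b] for a in sel for b in sel if a < b) and v.sum() > best.sum():
            best = v.copy()                        # keep the largest feasible clique seen
    return np.flatnonzero(best)

# toy usage: a 5-vertex graph whose maximum clique is {0, 1, 2}
adj = np.zeros((5, 5), dtype=int)
for a, b in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]:
    adj[a, b] = adj[b, a] = 1
print(hopfield_clique(adj))
```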
Action selection under constraints: dynamic optimization of behavior in machines and humans
Pub Date: 2002-08-07  DOI: 10.1109/IJCNN.2002.1007549
R. Kozma, D. Harter, S. Achunala
Biological brains are capable of adaptive behavior that sustains performance on tasks in the face of increasingly difficult constraints. We present a task with varying resource and time constraints. We compare our heuristic and neural network models with human data and speculate about the dynamic mechanisms of action selection.
Pair attribute learning: network construction using pair features
Pub Date: 2002-08-07  DOI: 10.1109/IJCNN.2002.1007546
Tony R. Martinez
We present the pair attribute learning (PAL) algorithm for selecting relevant inputs and network topology. Correlations over pairs of training instances are used to drive the construction of a single-hidden-layer MLP. Results on nine learning problems show, on average, 70% less complexity without a significant loss of accuracy.
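One plausible, heavily hedged reading of the pair-correlation idea (not necessarily the published PAL procedure): for every pair of training instances, record whether they agree on each attribute and whether they share a class label, and score each attribute by the correlation between those two indicators; only high-scoring attributes would then feed the constructed MLP. The function and toy data below are illustrative.

```python
import numpy as np

def pair_attribute_scores(X, y):
    n = len(X)
    ii, jj = np.triu_indices(n, k=1)                  # all instance pairs
    same_class = (y[ii] == y[jj]).astype(float)
    scores = np.zeros(X.shape[1])
    for a in range(X.shape[1]):
        same_attr = (X[ii, a] == X[jj, a]).astype(float)
        if same_attr.std() > 0 and same_class.std() > 0:
            scores[a] = np.corrcoef(same_attr, same_class)[0, 1]
    return scores

# toy usage: attribute 0 determines the class, attribute 1 is noise
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(40, 2))
y = X[:, 0]
print(pair_attribute_scores(X, y))   # attribute 0 scores ~1, attribute 1 near 0
```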