Pub Date : 2006-06-07 DOI: 10.1109/ICCIS.2006.252347
P. Park, S. Kim, J. Moon, M. Shin
This paper presents an efficient MPC algorithm for uncertain time-varying systems with input constraints. The proposed algorithm adopts the method of increasing the free control horizon in the dual-mode paradigm (i.e., a free control mode over the first finite horizon and a state-feedback mode over the following infinite horizon) so as to enlarge the set of stabilizable initial states. In that method, however, the number of LMIs grows exponentially with the free control horizon, making the corresponding optimization problems intractable even for small horizons, so the free control horizon cannot be increased blindly. The objective of this paper is to relax this computational restriction on increasing the free control horizon. By choosing a combination of hyper-boxes covering a possible region of initial states, and then designing an a priori zone controller for each hyper-box that steers any initial state in the hyper-box into the invariant ellipsoidal target set, the algorithm dramatically reduces the on-line computational burden of enlarging the set of stabilizable initial states.
{"title":"An Efficient MPC Algorithm based on a Priori Zone Control","authors":"P. Park, S. Kim, J. Moon, M. Shin","doi":"10.1109/ICCIS.2006.252347","DOIUrl":"https://doi.org/10.1109/ICCIS.2006.252347","url":null,"abstract":"This paper presents an efficient MPC algorithm for uncertain time-varying systems with input constraints. The proposed algorithm adopts the method of increasing free control horizon in the dual mode (i.e., a free control mode in the first finite horizon and a state-feedback mode in the following infinite horizon) paradigm so as to enlarge the set of stabilizable initial states. In the method, however, since the number of LMIs growing exponentially with the free control horizon makes the corresponding optimization problems intractable even for small horizon, it is impracticable to blindly increase the free control horizon. The objective of this paper is to relax the restriction on increase of the free control horizon, incurred on computational burdens in the method. By choosing a combination of hyper-boxes including a possible region of the initial states and then by designing a priori zone controller for each hyper-box so as to send any initial states in the hyper-box into the invariant ellipsoidal target set, the algorithm can dramatically reduce the on-line computational burden for enlarging the set of stabilizable initial states","PeriodicalId":296028,"journal":{"name":"2006 IEEE Conference on Cybernetics and Intelligent Systems","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133740552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
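The hyper-box/ellipsoid interplay in the abstract rests on a simple geometric fact: an invariant ellipsoidal set {x : x^T P x <= 1} is convex, so it contains a hyper-box exactly when it contains every corner of the box. The sketch below shows only that containment test; it is not from the paper, and the matrix `P` and box bounds are illustrative assumptions.

```python
import itertools
import numpy as np

def in_ellipsoid(x, P):
    """Membership test for the ellipsoidal set {x : x^T P x <= 1}."""
    return float(x @ P @ x) <= 1.0

def box_corners(lo, hi):
    """Enumerate the 2^n corner points of the hyper-box [lo, hi]."""
    return [np.array(c) for c in itertools.product(*zip(lo, hi))]

def box_inside_ellipsoid(lo, hi, P):
    """A convex set contains the box iff it contains all of its corners."""
    return all(in_ellipsoid(c, P) for c in box_corners(lo, hi))

P = np.diag([1.0, 4.0])  # illustrative ellipse: x1^2 + 4*x2^2 <= 1
print(box_inside_ellipsoid([-0.5, -0.2], [0.5, 0.2], P))  # True
print(box_inside_ellipsoid([-2.0, -2.0], [2.0, 2.0], P))  # False
```

The actual algorithm works the other way around (designing a controller that maps each box into the ellipsoid), but the corner-enumeration trick is the reason hyper-boxes are convenient regions to certify.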
Pub Date : 2006-06-07 DOI: 10.1109/ICCIS.2006.252313
Photchanan Ratanajaipan, E. Nantajeewarawat, V. Wuwongse
An application profile specifies a set of terms, drawn from one or more standard namespaces, for annotating data, and constrains their usage and interpretation in a particular local application. An approach to defining an application profile using the OWL and OWL/XDD languages is proposed: the former is a standard Web ontology language, and the latter is a definite-clause-style knowledge representation language that uses XML expressions as its underlying data structure. Constraints are defined in terms of rules, which are represented as XDD clauses. As an illustration, the approach is applied to defining the Dublin Core Metadata Initiative's library application profile (DC-Lib), along with the possibility of extending it with finer-grained semantic constraints. A prototype catalog validation system has been implemented, and some experimental results are shown.
{"title":"OWL/XDD: A Formal Language for Application Profiles","authors":"Photchanan Ratanajaipan, E. Nantajeewarawat, V. Wuwongse","doi":"10.1109/ICCIS.2006.252313","DOIUrl":"https://doi.org/10.1109/ICCIS.2006.252313","url":null,"abstract":"An application profile specifies a set of terms, drawn from one or more standard namespaces, for annotation of data, and constrains their usage and interpretations in a particular local application. An approach to defining an application profile using the OWL and OWL/XDD languages is proposed - the former is a standard Web ontology language and the latter is a definite-clause-style knowledge representation language that uses XML expressions as their underlying data structure. Constraints are defined in terms of rules, which are represented as XDD clauses. As an illustration, application of the approach to defining Dublin core metadata initiative's library application profile (DC-Lib), along with the possibility of extending it by describing finer-grained semantic constraints, is demonstrated. A prototype catalog validation system has been implemented, and some experimental results are shown","PeriodicalId":296028,"journal":{"name":"2006 IEEE Conference on Cybernetics and Intelligent Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129824083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
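To make the notion of an application profile concrete, the toy check below treats a profile as a set of per-term usage rules and validates a metadata record against them. This is only a hedged illustration of the concept: the rule vocabulary (`required`, `pattern`) is invented for the example, and it implements neither OWL nor OWL/XDD semantics.

```python
import re

# Hypothetical application profile: per-term usage constraints
# (illustrative only; not OWL/XDD syntax).
profile = {
    "dc:title": {"required": True},
    "dc:date":  {"required": False, "pattern": r"^\d{4}(-\d{2}){0,2}$"},
}

def validate(record, profile):
    """Report violations of the profile's constraints by a metadata record."""
    errors = []
    for term, rules in profile.items():
        if rules.get("required") and term not in record:
            errors.append(f"missing required term {term}")
        if term in record and "pattern" in rules \
                and not re.fullmatch(rules["pattern"], record[term]):
            errors.append(f"ill-formed value for {term}")
    return errors

print(validate({"dc:date": "2006-06-07"}, profile))
# ['missing required term dc:title']
```

In the paper these constraints are expressed declaratively as XDD clauses rather than hand-coded checks, which is what makes finer-grained semantic constraints composable.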
Pub Date : 2006-06-07 DOI: 10.1109/ICCIS.2006.252289
Y. Nguwi, A. Kouzani
An automatic road sign recognition system identifies road signs in images captured by an imaging sensor on board a vehicle and assists the driver in properly operating the vehicle. Most existing systems include a detection phase and a classification phase. This paper classifies the methods applied to road sign recognition into three groups: colour-based, shape-based, and others. The issues associated with automatic road sign recognition are addressed, the popular existing methods developed to tackle the problem are reviewed, and a comparison of the features of these methods is given.
{"title":"A Study on Automatic Recognition of Road Signs","authors":"Y. Nguwi, A. Kouzani","doi":"10.1109/ICCIS.2006.252289","DOIUrl":"https://doi.org/10.1109/ICCIS.2006.252289","url":null,"abstract":"An automatic road sign recognition system identifies road signs from within images captured by an imaging sensor on-board of a vehicle, and assists the driver to properly operate the vehicle. Most existing systems include a detection phase and a classification phase. This paper classifies the methods applied to road sign recognition into three groups: colour-based, shape-based, and others. In this paper, the issues associated with automatic road sign recognition are addressed, the popular existing methods developed to tackle the road sign recognition problem are reviewed, and a comparison of the features of these methods is given","PeriodicalId":296028,"journal":{"name":"2006 IEEE Conference on Cybernetics and Intelligent Systems","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132771869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
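As a minimal illustration of the colour-based family of methods surveyed here, the sketch below flags pixels whose red channel dominates, a crude first step toward detecting red-rimmed signs. The threshold values are arbitrary assumptions, not taken from any reviewed method.

```python
import numpy as np

def red_sign_mask(rgb, ratio=1.5, min_red=80):
    """Crude colour-based detector: flag pixels where red dominates.
    rgb is an (H, W, 3) uint8 array; thresholds are illustrative."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return (r > min_red) & (r > ratio * g) & (r > ratio * b)

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = [200, 30, 30]          # a 2x2 red patch (sign candidate)
print(red_sign_mask(img).sum())         # 4 pixels flagged
```

Real colour-based detectors usually work in HSV or similar spaces for illumination robustness; the RGB ratio test above is just the simplest runnable instance of the idea.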
Pub Date : 2006-06-07 DOI: 10.1109/ICCIS.2006.252245
Y. Yoshida
In this paper, we discuss an evaluation method for fuzzy numbers as mean values, together with a measurement of fuzziness defined by fuzzy measures; the presented method is applicable to fuzzy numbers and to fuzzy stochastic processes defined by fuzzy numbers/fuzzy random variables in decision making. We compare the measurement of fuzziness with the variance as a factor for measuring uncertainty. Formulae are also given for applying the results to triangle-type and trapezoidal-type fuzzy numbers.
{"title":"Mean Values of Fuzzy Numbers and the Measurement of Fuzziness by Evaluation Measures","authors":"Y. Yoshida","doi":"10.1109/ICCIS.2006.252245","DOIUrl":"https://doi.org/10.1109/ICCIS.2006.252245","url":null,"abstract":"In this paper, we discuss an evaluation method of fuzzy numbers as mean values and measurement of fuzziness defined by fuzzy measures, and the presented method is applicable to fuzzy numbers and fuzzy stochastic process defined by fuzzy numbers/fuzzy random variables in decision making. We compare the measurement of fuzziness and the variance as a factor to measure uncertainty. Formulae are also given to apply the results to triangle-type fuzzy numbers and trapezoidal-type fuzzy numbers","PeriodicalId":296028,"journal":{"name":"2006 IEEE Conference on Cybernetics and Intelligent Systems","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131395221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
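For intuition, the snippet below evaluates a triangle-type fuzzy number (a, b, c) by one standard crisp summary, the Carlsson-Fullér possibilistic mean, and uses half the support width as a simple spread proxy. These are illustrative choices, not necessarily the evaluation measures defined in the paper.

```python
def possibilistic_mean(a, b, c):
    """Carlsson-Fuller possibilistic mean of a triangular fuzzy number
    with support [a, c] and peak b: (a + 4b + c) / 6."""
    return (a + 4 * b + c) / 6

def spread(a, b, c):
    """A simple fuzziness proxy: half the width of the support."""
    return (c - a) / 2

print(possibilistic_mean(1, 2, 4))  # 13/6, about 2.1667
print(spread(1, 2, 4))              # 1.5
```

Comparing `spread` with the variance of a random quantity is the kind of fuzziness-versus-randomness contrast the paper draws, though its measures are defined more generally via evaluation measures.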
Pub Date : 2006-06-07 DOI: 10.1109/ICCIS.2006.252314
W. Wettayaprasit, U. Sangket
This paper presents a method for extracting linguistic rules from pruned neural networks using a frequency-interval data representation. The method consists of two steps: 1) pruning neural network nodes by analyzing the maximum weight, and 2) extracting linguistic rules using the frequency-interval data representation. The method was tested on benchmark data sets such as heart disease, Wisconsin breast cancer, and Pima Indians diabetes, as well as an electrocardiography data set of heart disease patients from hospitals in Thailand. The study found that the extracted linguistic rules had high accuracy and were easy to understand. The number of rules and the number of conjunctive conditions were small, and the training time was also reduced.
{"title":"Linguistic Knowledge Extraction from Neural Networks Using Maximum Weight and Frequency Data Representation","authors":"W. Wettayaprasit, U. Sangket","doi":"10.1109/ICCIS.2006.252314","DOIUrl":"https://doi.org/10.1109/ICCIS.2006.252314","url":null,"abstract":"This paper presents a method of linguistic rule extraction from neural networks nodes pruning using frequency interval data representation. The method composes of two steps which are 1) neural networks nodes pruning by analysis on the maximum weight and 2) linguistic rule extraction using frequency interval data representation. The study has tested with the benchmark data sets such as heart disease, Wisconsin breast cancer, Pima Indians diabetes, and electrocardiography data set of heart disease patients from hospitals in Thailand. The study found that the linguistic rules received had high accuracy and easy to understand. The number of rules and the number of conjunction of conditions were small and the training time was also decreased","PeriodicalId":296028,"journal":{"name":"2006 IEEE Conference on Cybernetics and Intelligent Systems","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124850371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
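Step 1, pruning by maximum weight, can be sketched as keeping only the input units whose strongest connection into the hidden layer exceeds a threshold. The weight matrix and threshold below are invented for illustration; the paper's exact pruning criterion may differ.

```python
import numpy as np

def prune_by_max_weight(W, threshold):
    """Keep input units whose largest |weight| into the hidden layer
    exceeds threshold. W has shape (n_inputs, n_hidden)."""
    keep = np.max(np.abs(W), axis=1) > threshold
    return np.where(keep)[0]          # indices of surviving units

W = np.array([[0.90, -0.10],
              [0.05,  0.02],          # weak unit -> pruned
              [-0.70, 0.30]])
print(prune_by_max_weight(W, 0.5))    # [0 2]
```

Rule extraction (step 2) would then run only over the surviving units, with each unit's value range discretised into frequency intervals that become the linguistic terms of the rules.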
Pub Date : 2006-06-07 DOI: 10.1109/ICCIS.2006.252332
K. Jearanaitanakij, O. Pinngern
We present an analysis of the minimum number of hidden units that an artificial neural network requires to recognize English capital letters. The letter font used as a case study is the system font. To minimize the number of hidden units, the number of input features must first be minimized. First, we apply our heuristic for pruning unnecessary features from the data set. The small number of remaining features gives the network a correspondingly small number of input units, since each feature maps one-to-one onto an input unit. Next, hidden units are pruned from the network using the hidden-unit pruning heuristic. Both pruning heuristics are based on the notion of information gain, and they efficiently prune unnecessary features and hidden units from the network. The experimental results show the minimum number of hidden units required to train the network to recognize English capital letters in the system font. In addition, the classification accuracy of the resulting network is high, so the final network is both compact and reliable.
{"title":"Hidden Unit Reduction of Artificial Neural Network on English Capital Letter Recognition","authors":"K. Jearanaitanakij, O. Pinngern","doi":"10.1109/ICCIS.2006.252332","DOIUrl":"https://doi.org/10.1109/ICCIS.2006.252332","url":null,"abstract":"We present an analysis on the minimum number of hidden units that is required to recognize English capital letters of the artificial neural network. The letter font that we use as a case study is the system font. In order to have the minimum number of hidden units, the number of input features has to be minimized. Firstly, we apply our heuristic for pruning unnecessary features from the data set. The small number of the remaining features leads the artificial neural network to have the small number of input units as well. The reason is a particular feature has a one-to-one mapping relationship onto the input unit. Next, the hidden units are pruned away from the network by using the hidden unit pruning heuristic. Both pruning heuristic is based on the notion of the information gain. They can efficiently prune away the unnecessary features and hidden units from the network. The experimental results show the minimum number of hidden units required to train the artificial neural network to recognize English capital letters in system font. In addition, the accuracy rate of the classification produced by the artificial neural network is practically high. As a result, the final artificial neural network that we produce is fantastically compact and reliable","PeriodicalId":296028,"journal":{"name":"2006 IEEE Conference on Cybernetics and Intelligent Systems","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128963593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
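Both pruning heuristics above are based on information gain; the snippet below computes it in its textbook form, H(labels) minus the weighted entropy of the label subsets induced by a feature. The tiny data set is illustrative.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, feature):
    """Gain = H(labels) - sum_v p(v) * H(labels | feature == v)."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(feature):
        subset = [l for l, f in zip(labels, feature) if f == v]
        gain -= len(subset) / n * entropy(subset)
    return gain

labels  = [1, 1, 0, 0]
feature = ['a', 'a', 'b', 'b']   # perfectly predictive feature
print(information_gain(labels, feature))  # 1.0
```

A unit (feature or hidden unit) whose removal costs little information gain is a natural pruning candidate, which is the intuition behind both heuristics.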
Pub Date : 2006-06-07 DOI: 10.1109/ICCIS.2006.252336
B. Chandra, P. Paul V
Decision tree algorithms have been proposed in the past for classifying numeric as well as categorical attributes. The SLIQ algorithm was proposed (Mehta et al., 1996) as an improvement over the ID3 and C4.5 algorithms (Quinlan, 1993), and the Elegant Decision Tree Algorithm was proposed (Chandra et al., 2002) to improve the performance of SLIQ. In this paper, a novel approach is presented for choosing the split value of attributes, addressing the issue of reducing the number of split points. It is shown on various datasets from the UCI machine learning repository that this approach gives better classification accuracy than C4.5, SLIQ, and the Elegant Decision Tree Algorithm (EDTA), while the number of split points to be evaluated is much smaller than for SLIQ and EDTA.
{"title":"A Robust Algorithm for Classification Using Decision Trees","authors":"B. Chandra, P. Paul V","doi":"10.1109/ICCIS.2006.252336","DOIUrl":"https://doi.org/10.1109/ICCIS.2006.252336","url":null,"abstract":"Decision trees algorithms have been suggested in the past for classification of numeric as well as categorical attributes. SLIQ algorithm was proposed (Mehta et al., 1996) as an improvement over ID3 and C4.5 algorithms (Quinlan, 1993). Elegant Decision Tree Algorithm was proposed (Chandra et al. 2002) to improve the performance of SLIQ. In this paper a novel approach has been presented for the choice of split value of attributes. The issue of reducing the number of split points has been addressed. It has been shown on various datasets taken from UCI machine learning data repository that this approach gives better classification accuracy as compared to C4.5, SLIQ and Elegant Decision Tree Algorithm (EDTA) and at the same time the number of split points to be evaluated is much less compared to that of SLIQ and EDTA","PeriodicalId":296028,"journal":{"name":"2006 IEEE Conference on Cybernetics and Intelligent Systems","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125810324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
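For context on why the number of split points matters: the common baseline scans the midpoints between consecutive sorted attribute values and keeps the one minimising child impurity, so every candidate midpoint costs an evaluation. The sketch below uses Gini impurity; it is a generic illustration of that baseline, not the paper's selection rule.

```python
def gini(labels):
    """Gini impurity of a binary label list."""
    n = len(labels)
    if n == 0:
        return 0.0
    p1 = labels.count(1) / n
    return 1.0 - p1 * p1 - (1 - p1) * (1 - p1)

def best_split(values, labels):
    """Evaluate midpoints between consecutive distinct sorted values and
    return the split minimising the weighted Gini impurity of the children."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best_value, best_score = None, float('inf')
    for i in range(1, n):
        if pairs[i - 1][0] == pairs[i][0]:
            continue                       # no midpoint between equal values
        split = (pairs[i - 1][0] + pairs[i][0]) / 2
        left  = [l for v, l in pairs if v <= split]
        right = [l for v, l in pairs if v > split]
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if score < best_score:
            best_value, best_score = split, score
    return best_value

print(best_split([1, 2, 8, 9], [0, 0, 1, 1]))  # 5.0
```

The paper's contribution is precisely to shrink the set of candidate splits this loop has to visit while keeping (or improving) accuracy.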
Pub Date : 2006-06-07 DOI: 10.1299/JSMEC.49.1179
Kun-Chieh Wang
The thermal effect on machine tools has become a well-recognized problem in response to the increasing requirements on product quality. The performance of a thermal error compensation system depends essentially on the accuracy and robustness of the thermal error model. This paper presents a thermal error model using two mathematical schemes: the GM(1, N) model of grey system theory and the adaptive network-based fuzzy inference system (ANFIS). First, the measured temperature and deformation results are analyzed via the GM(1, N) model to rank the influence of each temperature rise on the thermal drift of the spindle. Then, using the high-ranking temperature rises as the input of ANFIS and training these data with a hybrid learning rule, the thermal compensation model can be built quickly. The GM(1, N) model effectively reduces the number of temperature sensors that must be placed on the machine structure for prediction, while ANFIS offers good accuracy and robustness. Finally, tests under no-load and real-cutting operations were conducted, and the comparison results show that the modeling scheme of ANFIS coupled with GM(1, N) has good prediction ability.
{"title":"Thermal Error Modeling of a Machining Center using Grey System Theory and Adaptive Network-Based Fuzzy Inference System","authors":"Kun-Chieh Wang","doi":"10.1299/JSMEC.49.1179","DOIUrl":"https://doi.org/10.1299/JSMEC.49.1179","url":null,"abstract":"The thermal effect on machine tools has become a well-recognized problem in response to the increasing requirement of product quality. The performance of a thermal error compensation system basically depends on the accuracy and robustness of the thermal error model. This paper presents a thermal error model using two mathematic schemes: GM(1, N) model of the grey system theory and the adaptive network-based fuzzy inference system (ANFIS). First, the measured temperature and deformation results were analyzed via the GM(1, N) model to obtain the influence ranking of temperature ascent on thermal drift of spindle. Then, using the high-ranking temperature ascents as the input of ANFIS and training these data by hybrid learning rule, the thermal compensation model can be quickly built. The GM(1, N) model is used to effectively reduce the number of temperature sensors putting on the machine structure in prediction, and the ANFIS has the advantages of good accuracy and robustness. Eventually, tests of no-load and real-cutting operations were conducted and the comparison results show that the modeling schemes of ANFIS coupled with the GM(1, N) has good prediction ability","PeriodicalId":296028,"journal":{"name":"2006 IEEE Conference on Cybernetics and Intelligent Systems","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124330875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
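Grey modeling on an accumulated series can be illustrated with the single-variable GM(1,1) model. The paper uses the multivariable GM(1, N) for influence ranking; GM(1,1) is shown here only because it is self-contained: fit dx1/dt + a*x1 = b on the cumulative sum of the data, then difference the fitted curve to forecast. The test series is a synthetic geometric sequence.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """GM(1,1) grey forecast: fit dx1/dt + a*x1 = b on the accumulated
    series x1 = cumsum(x0), then recover forecasts of x0 by differencing."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])              # background (mean) values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

    def x1_hat(k):                              # 0-based time index
        return (x0[0] - b / a) * np.exp(-a * k) + b / a

    n = len(x0)
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]

# Geometric growth at 10% per step; true next value is 1.4641.
print(gm11_forecast([1, 1.1, 1.21, 1.331])[0])  # close to 1.4641
```

In the paper's setting, GM(1, N) plays a different role (ranking which temperature inputs most influence spindle drift), but the accumulate-fit-difference pattern is the same grey-system machinery.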
Pub Date : 2006-06-07 DOI: 10.1109/ICCIS.2006.252320
Ting-Cheng Chang, Chuen-Jiuan Jane, Yuan-Paio Lee
The main purpose of this paper is to establish a system that combines rough set and grey theory. The model endows time-serial, season-serial, or regular data with dynamic trend information via grey prediction, and then selects the data sets with trend value through a rough set screening system. It is mainly applied to portfolio prediction in the stock market. Our study first predicts each listed company's condition and decision attributes by grey prediction, then groups these attributes with K-means clustering, filters and categorizes the groups using the ability of rough sets to classify uncertain and insufficient information, and selects the stock portfolio. The company shares in the portfolio are then evaluated according to their past EPS and ROE, and the better ones are selected again. Finally, the selected companies are ranked by grey relational analysis, which determines the weight of each share in the portfolio. The experimental results for Taiwan over five years (2000-2004) show an average annual rate of return of 38.1%, with the portfolio determined by the model dramatically outperforming the market.
{"title":"A Forecasting Model of Dynamic Grey Rough Set and its Application on Stock Selection","authors":"Ting-Cheng Chang, Chuen-Jiuan Jane, Yuan-Paio Lee","doi":"10.1109/ICCIS.2006.252320","DOIUrl":"https://doi.org/10.1109/ICCIS.2006.252320","url":null,"abstract":"The main purpose of paper is to establish a system, which combines rough set and grey theory. This model is used to let the time-serial, season-serial or regular data have the dynamic trend concepts by grey prediction, then, select the data sets with trend value through rough set screening system. It mainly is applied for a portfolio prediction in the stock market. Our study first predicts each listed company's attributes of condition and decision-making by grey prediction, secondly groups their attributes by K-means grouping tools, then filters and categorizes the groups with the classified capacity of rough set for uncertain and non-sufficient information and selects the stock portfolio. And then we evaluate the company shares from the portfolio according to their past EPS and ROE and elect the better ones again. Finally, the selected companies are arranged in order with grey relation and determine the weight of each share in the portfolio according to it. The experimental result in Taiwan: during five years (2000-2004), the average annual rate of return was 38.1%. The portfolio determined by the model overran the market dramatically","PeriodicalId":296028,"journal":{"name":"2006 IEEE Conference on Cybernetics and Intelligent Systems","volume":"48 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114060099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
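The final ranking step can be sketched with Deng's grey relational grade, a standard grey-relation measure: candidates whose data series track a reference series most closely receive grades closest to 1. The series below are toy data, and the paper's exact weighting scheme may differ.

```python
import numpy as np

def grey_relational_grades(reference, candidates, rho=0.5):
    """Deng's grey relational grade of each candidate series w.r.t. the
    reference series; rho is the conventional distinguishing coefficient."""
    ref = np.asarray(reference, dtype=float)
    cands = np.asarray(candidates, dtype=float)
    diff = np.abs(cands - ref)              # shape (n_candidates, n_points)
    dmin, dmax = diff.min(), diff.max()     # global extreme differences
    coeff = (dmin + rho * dmax) / (diff + rho * dmax)
    return coeff.mean(axis=1)               # grade = mean coefficient

ref = [1.0, 2.0, 3.0]
cands = [[1.0, 2.0, 3.0],                   # identical -> grade 1.0
         [0.0, 0.0, 0.0]]                   # far from reference
g = grey_relational_grades(ref, cands)
print(g[0] > g[1])  # True
```

Normalising the resulting grades gives a natural set of portfolio weights, which matches the role grey relation plays in the paper's final step.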
Pub Date : 2006-06-07 DOI: 10.1109/ICCIS.2006.252292
N. Umashankar, V. Karthik
In flexible manufacturing systems (FMS), automated guided vehicles (AGVs) transport processed materials between various pickup and delivery points. The assignment of an AGV to one of a set of workcentres simultaneously requesting transport of a part is often solved in real time with simple dispatching rules. This paper proposes an intelligent dispatching approach for AGVs based on a multi-criteria fuzzy logic controller, which takes multiple aspects into account simultaneously in every dispatching decision. The controller operates in two stages, with the second stage acting as a conflict-resolution tool between two equally ranked AGVs competing for a particular workcentre. The control system is implemented using MATLAB and its fuzzy inference engine, and sample runs are provided to illustrate the controller implementation.
{"title":"Multi-criteria Intelligent Dispatching Control of Automated Guided Vehicles in FMS","authors":"N. Umashankar, V. Karthik","doi":"10.1109/ICCIS.2006.252292","DOIUrl":"https://doi.org/10.1109/ICCIS.2006.252292","url":null,"abstract":"In flexible manufacturing systems (FMS), automated guided vehicles (AGVs) are used for transportation of the processed materials between various pickup and delivery points. The assignment of an AGV to a workcentre from a set of workcentres simultaneously requesting the service for transport of a part is often solved in real-time with simple dispatching rules. This paper proposes an intelligent dispatching approach for the AGVs based on multi-criteria fuzzy logic controller, which simultaneously takes into account multiple aspects in every dispatching decision. The controller operates in two stages in which the second stage is constructed as a conflict resolving tool between two equally ranked AGVs for a particular workcentre. The control system is being implemented using MATLAB and its fuzzy inference engine. Sample runs have been provided to illustrate the controller implementation","PeriodicalId":296028,"journal":{"name":"2006 IEEE Conference on Cybernetics and Intelligent Systems","volume":"363 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126808497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
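A single rule of such a multi-criteria fuzzy dispatcher might combine "AGV is near" and "AGV queue is short" with a min (AND) operator, as in the toy score below. The membership function shapes, criteria, and AGV data are invented for illustration; the paper's rule base and its MATLAB implementation are richer than this, including the second-stage tie-break.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def dispatch_score(distance, queue_len):
    """Toy two-criteria fuzzy rule: prefer near AGVs with short queues.
    Aggregation is min (fuzzy AND) of the two favourable memberships."""
    near = tri(distance, -1, 0, 50)   # fully 'near' at 0, fades out by 50
    free = tri(queue_len, -1, 0, 5)   # fully 'free' at 0, fades out by 5
    return min(near, free)

agvs = {"AGV1": (10, 1), "AGV2": (40, 0)}   # (distance, queue length)
best = max(agvs, key=lambda name: dispatch_score(*agvs[name]))
print(best)  # AGV1
```

When two AGVs tie on this score, a second-stage rule set over additional criteria would break the tie, mirroring the two-stage structure described in the abstract.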