Pub Date: 2012-07-15 | DOI: 10.1109/ICMLC.2012.6359603
Ting-Jung Yu, K. R. Lai
This paper presents a framework of fuzzy constraint-directed agent negotiation with a learning element to improve negotiation quality. The learning element comprises: 1) a fuzzy probability constraint that regularizes the opponent's behavior to reduce noisy beliefs about the opponent; 2) an instance-matching method that reuses prior opponent knowledge to infer feasible actions from similar situations; and 3) a proposed adaptive interaction that specifies an appropriate tradeoff among feasible proposals to reach an agent's local or global goal.
Title: A framework of fuzzy constraint-directed agent negotiation with learning element. Venue: 2012 International Conference on Machine Learning and Cybernetics.
Pub Date: 2012-07-15 | DOI: 10.1109/ICMLC.2012.6359612
R. Wai, Yu-Chih Huang, Yi-Chang Chen
In recent years, intelligent micro-grid systems composed of renewable energy sources have become an active research topic. A successful design for long-term load forecasting (LTLF) enables an intelligent micro-grid system to perform optimized loading and unloading control by measuring the electrical supply, achieving the best economy and power efficiency. In this study, intelligent forecasting structures based on a similar-time method with historical load change rates are developed on the basic frameworks of the fuzzy neural network (FNN) and particle swarm optimization (PSO). For tuning the network parameters, conventional back-propagation (BP) and PSO algorithms are used, and varied learning rates are designed in the sense of discrete-time Lyapunov stability theory. Performance comparisons of the different forecasting structures, including a neural network with BP tuning (NN-BP), an FNN with BP tuning (FNN-BP), an FNN with BP tuning and varied learning rates (FNN-BP-V), an FNN with PSO tuning (FNN-PSO), and a PSO structure, are given through numerical simulations of a real campus case in Taiwan.
Title: Design of intelligent long-term load forecasting with fuzzy neural network and particle swarm optimization.
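As a rough illustration of the PSO tuning idea mentioned above, the following minimal sketch minimizes a test function with a standard global-best particle swarm. All settings (inertia `w`, acceleration coefficients `c1`/`c2`, the search range, the sphere test function) are illustrative assumptions, not the paper's configuration.

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO minimizing f over [-5, 5]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: global minimum 0 at the origin.
best, best_val = pso(lambda x: sum(xi * xi for xi in x), dim=2)
```

In the paper this kind of swarm search tunes FNN parameters rather than a toy objective, but the update rule is the same building block.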
Pub Date: 2012-07-15 | DOI: 10.1109/ICMLC.2012.6358890
Y. Vagh, Jitian Xiao
This paper continues a series of qualitative and quantitative investigations into the processing and analysis of geographic land-use data in an agricultural context. The geographic data consisted of crop and cereal production land-use profiles. These were linked to previously recorded climatic data from fixed weather stations in Australia, interpolated using ordinary kriging to fit a surface grid. In this investigation, stochastic average monthly temperature profiles for a selected study area were used to determine their effects on crop production. The areas within the study area were spatially scaled to correspond to individual shires within the South West Agricultural region of Western Australia. Temperature was sampled for three selected years of crop production: 2002, 2003, and 2005. The evaluation was carried out using graphical, correlation, and data-mining regression techniques to detect patterns of crop production. The patterns suggested that crop production can generally be expected to increase with temperature during the wheat growing season for some shires.
Title: Mining temperature profile data for shire-level crop yield prediction.
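The correlation step described above can be sketched as follows. The temperature and yield values below are hypothetical illustrative numbers, not data from the study, and the shire-level spatial scaling is omitted.

```python
import numpy as np

# Hypothetical shire-level data: mean growing-season temperature (degrees C)
# and wheat yield (t/ha); illustrative values only, not from the paper.
temperature = np.array([16.2, 17.1, 15.8, 18.0, 17.5, 16.9])
yield_tha   = np.array([1.9, 2.3, 1.7, 2.6, 2.4, 2.1])

r = np.corrcoef(temperature, yield_tha)[0, 1]             # Pearson correlation
slope, intercept = np.polyfit(temperature, yield_tha, 1)  # least-squares trend
```

A positive `r` and positive `slope` would correspond to the paper's finding that yield tends to rise with growing-season temperature in some shires.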
Pub Date: 2012-07-15 | DOI: 10.1109/ICMLC.2012.6359571
Junchi Liang, J. You, Guoqiang Han, Le Li
Recently, solving multiobjective problems has been gaining attention due to its applications in engineering, bioinformatics, and pattern recognition. Although many multiobjective evolutionary algorithms (MOEAs) exist, few of them consider the evolutionary process in both the solution space and the objective space. In this paper, we propose a new hybrid multiobjective evolutionary algorithm, the double-space-based multiobjective evolutionary algorithm (DS-MOEA). Compared with traditional MOEAs, DS-MOEA considers not only the evolutionary process in the solution space but also a knowledge-learning process in the objective space. Experimental results illustrate that DS-MOEA performs well on multiobjective problems.
Title: Double space based multiobjective evolutionary algorithm.
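Any MOEA, including the one described above, compares candidates in the objective space via Pareto dominance. A minimal sketch (minimization convention assumed for illustration; this is a generic building block, not DS-MOEA itself):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Return the nondominated subset (first Pareto front) of a list
    of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = nondominated([(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)])
```

Here `(3, 4)` is dominated by `(2, 3)` and `(5, 5)` by `(1, 5)`, so the front keeps the remaining three points.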
Pub Date: 2012-07-15 | DOI: 10.1109/ICMLC.2012.6359665
Chun-Ming Tsai
Conventional document processing systems comprise document analysis (DA), document classification, and document understanding, executed step by step: if one step produces improper results, the next step does too. Furthermore, the binarization methods in DA for thresholding an A4-sized color image are inefficient because they scan the entire image at least once; the block segmentation methods for segmenting an A4-sized binary image are inefficient because they scan the entire image at least twice; and the layout analysis methods, which use global and local analysis, also scan the entire image at least once. In this article, an intelligent, efficient, and effective document processing system is proposed to solve these problems. The proposed method combines document binarization with mixed-based layout analysis. The binarization step scans only the border of the image, and the mixed-based layout analysis combines block segmentation and classification: the block segmentation scans only the background image, and the block classification uses background gaps and writing format to classify blocks. Experimental results show that the proposed method outperforms FineReader 11.0 in visual assessment.
Title: Intelligent document processing system for conference article.
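A hedged sketch of the border-only binarization idea: the paper's exact thresholding rule is not reproduced here, so this assumes the border of a scanned page is mostly background (paper) and derives a single global threshold from border statistics alone, avoiding a full-image scan during threshold estimation.

```python
import numpy as np

def border_threshold_binarize(gray, margin=4):
    """Binarize a grayscale page image using a threshold estimated only
    from the border region (assumption: the border of a scanned page is
    mostly background, so its statistics approximate the paper color)."""
    top, bottom = gray[:margin, :], gray[-margin:, :]
    left, right = gray[:, :margin], gray[:, -margin:]
    border = np.concatenate([x.ravel() for x in (top, bottom, left, right)])
    thresh = border.mean() - 2 * border.std()  # text assumed darker than paper
    return (gray >= thresh).astype(np.uint8)   # 1 = background, 0 = ink

# Synthetic page: bright paper with a dark text block in the middle.
page = np.full((50, 50), 250.0)
page[20:30, 20:30] = 20.0
binary = border_threshold_binarize(page)
```

The `mean - 2*std` rule is an assumption for illustration; only the "estimate from the border, apply everywhere" structure reflects the paper's claim.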
Pub Date: 2012-07-15 | DOI: 10.1109/ICMLC.2012.6359631
An-Zen Shih
In this paper we describe a system that uses linguistic expressions and fractal dimensions to retrieve images from a database. Brodatz texture images are used in our experiment. Tamura features are extracted from the database images, and several linguistic expressions are used to classify them. These linguistic expressions, aided by the fractal dimension, make image search more efficient. By visual inspection, we are satisfied with the experimental results.
Title: The approach of using fractal dimension and linguistic descriptors in CBIR.
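The fractal dimension of a texture is commonly estimated by box counting; the sketch below is a generic estimator under that standard method, not necessarily the paper's specific procedure.

```python
import numpy as np

def box_counting_dimension(binary, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a 2-D binary image by box
    counting: count occupied boxes N(s) at each box size s, then fit
    the slope of log N(s) against log(1/s)."""
    h, w = binary.shape
    counts = []
    for s in sizes:
        # Count boxes of side s containing at least one foreground pixel.
        n = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if binary[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# A filled square is an ordinary 2-D region, so its estimate is close to 2.
d = box_counting_dimension(np.ones((64, 64), dtype=bool))
```

Textures with sparser, more irregular structure yield dimensions between 1 and 2, which is what makes the measure useful as a retrieval descriptor.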
Pub Date: 2012-07-15 | DOI: 10.1109/ICMLC.2012.6358975
Pavel Surynek
The problem of cooperative path-planning is addressed in this paper from the perspective of propositional satisfiability (SAT). Two new encodings of the problem as SAT are proposed and evaluated. Together with an existing optimization method that locally improves a sub-optimal solution through SAT solving, one of the new encodings constitutes a state-of-the-art method for cooperative path-planning in highly occupied environments.
Title: Application of propositional satisfiability to special cases of cooperative path-planning.
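SAT encodings of cooperative path-planning typically need at-most-one constraints (e.g., no two agents may occupy the same vertex at the same time step). The sketch below shows the standard pairwise encoding of that constraint in DIMACS-style clauses; it is a generic building block, not either of the paper's two encodings.

```python
from itertools import combinations

def at_most_one(literals):
    """Pairwise at-most-one encoding: for every pair of Boolean
    variables, emit a clause forbidding both from being true.
    Clauses are lists of integer literals (negative = negated)."""
    return [[-a, -b] for a, b in combinations(literals, 2)]

# Hypothetical variables 1..3 meaning "agent i occupies vertex v at step t":
# at most one agent may hold the vertex at that step.
clauses = at_most_one([1, 2, 3])
```

The pairwise scheme uses O(n^2) clauses; practical encodings often switch to sequential or commander encodings for large n, at the cost of auxiliary variables.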
Pub Date: 2012-07-15 | DOI: 10.1109/ICMLC.2012.6358930
Jingyuan Han, Yan-Bo Yang, Yun-He Zhao
Multiple-criteria decision making has become a main research area for the entrepreneurial environment because of its complexity. This paper develops an evaluation model based on fuzzy theory and the Analytic Hierarchy Process (AHP) for assessing the entrepreneurial environment, whose features are fuzzy and vague, to help governments and entrepreneurs in a complicated environment. The model was applied to the entrepreneurial environment of eleven regions of Hebei Province, and it provides an accurate, effective, and systematic decision-support tool.
Title: Evaluation of entrepreneurial environment based on fuzzy comprehensive evaluation method.
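The AHP component can be illustrated with the standard geometric-mean prioritization of a pairwise comparison matrix. The comparison values below are a hypothetical, perfectly consistent example, not the paper's criteria or judgments, and the fuzzy extension is omitted.

```python
import numpy as np

def ahp_weights(pairwise):
    """Derive AHP priority weights from a pairwise comparison matrix
    using the geometric-mean (logarithmic least squares) method."""
    A = np.asarray(pairwise, dtype=float)
    gm = A.prod(axis=1) ** (1.0 / A.shape[0])  # row geometric means
    return gm / gm.sum()                       # normalize to sum to 1

# Hypothetical 3-criterion matrix: criterion 1 is 3x as important as
# criterion 2 and 5x as important as criterion 3 (consistent by design).
w = ahp_weights([[1,     3,     5],
                 [1 / 3, 1,     5 / 3],
                 [1 / 5, 3 / 5, 1]])
```

For a consistent matrix the geometric-mean method recovers the exact weight ratios (here 15/23, 5/23, 3/23); fuzzy AHP variants replace the crisp entries with fuzzy numbers before a similar prioritization step.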
Pub Date: 2012-07-15 | DOI: 10.1109/ICMLC.2012.6358971
Xu Zhou, Shuxia Lu, Lisha Hu, Meng Zhang
To address imbalanced data classification, which is not considered by the standard Extreme Support Vector Machine (ESVM), an imbalanced extreme support vector machine (IESVM) is proposed. First, a preliminary normal vector of the separating hyperplane is obtained directly by geometric analysis. Second, penalty factors are derived from the information provided by projecting the data sets onto this preliminary normal vector. Finally, the final separating hyperplane is obtained through the improved ESVM training. IESVM overcomes the disadvantage of traditional designs that consider only the imbalance in sample sizes, and it improves the generalization ability of ESVM. Experimental results show that the method effectively enhances classification performance on imbalanced data sets.
Title: Imbalanced extreme support vector machine.
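The paper derives its penalty factors from projections onto the preliminary normal vector, which is not reproduced here. As a simpler stand-in, the common heuristic of per-class penalties inversely proportional to class size, criticized above as "considering only the imbalance of sample sizes", can be sketched as:

```python
def class_penalties(n_pos, n_neg, C=1.0):
    """Per-class penalty factors inversely proportional to class size,
    so errors on the minority class cost more. This is the baseline
    size-only heuristic, NOT the projection-based factors of IESVM."""
    total = n_pos + n_neg
    c_pos = C * total / (2.0 * n_pos)  # larger penalty for smaller class
    c_neg = C * total / (2.0 * n_neg)
    return c_pos, c_neg

# A 10-vs-90 imbalanced set: the minority class gets a 9x larger penalty.
C_pos, C_neg = class_penalties(n_pos=10, n_neg=90)
```

IESVM's contribution is precisely to replace this size-only rule with factors informed by the geometry of the data along the preliminary normal vector.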
Pub Date: 2012-07-15 | DOI: 10.1109/ICMLC.2012.6359563
Juntao Xue, Cui-Rong Wang, Shao-Fang Xing
The detection of moving objects is a key step in traffic video monitoring systems. The most common approach is background subtraction, and its critical component is background modeling. In this paper, we propose a method combining local binary patterns (LBP) with a Gaussian model for detecting moving objects. We adopt parallel processing to improve the processing speed of the algorithm's implementation, and we test the proposed method on video sequences. Experiments show that our method achieves high real-time performance in background updating.
Title: Dual background modeling of traffic image based on LBP and Gaussian.
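The LBP texture operator used in the background model can be sketched in its basic 8-neighbor form (radius 1, no uniform-pattern mapping; the exact variant the paper uses is an assumption here):

```python
import numpy as np

def lbp_8_1(gray):
    """Basic 8-neighbor local binary pattern: each interior pixel gets
    an 8-bit code by thresholding its neighbors against the center value
    (neighbor >= center contributes a 1 bit)."""
    g = np.asarray(gray, dtype=float)
    c = g[1:-1, 1:-1]                         # center pixels (interior only)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code += (nb >= c).astype(int) << bit  # set one bit per neighbor
    return code

# In a flat region every neighbor ties the center, so the code is all ones.
flat_codes = lbp_8_1(np.full((5, 5), 7))
```

A background model then keeps per-pixel statistics over these codes (alongside a Gaussian intensity model), so that illumination-robust texture changes flag moving objects.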