A new methodology of nonlinear parameter approximation used for rheological model of drilling fluids
Pub Date: 2011-07-26 | DOI: 10.1109/ICNC.2011.6022403
Jisen Yin, Jian Li, You Xiao
This paper proposes an effective way of searching for an initial value for Gauss-Newton iteration, based on the common features of the nonlinear rheological equations of drilling fluids. With this method, a good initial value can be found and applied to the parameter estimation of nonlinear rheological models of drilling fluids. The approach overcomes the main shortcoming of the Gauss-Newton method, its strong dependence on the initial value, which in practice can cause the iteration not to converge, while retaining its advantages of a small workload per step and fast convergence. Tests on a large number of measured drilling-fluid samples show that the rheological parameters estimated by this method have good statistical properties: the fitting residuals are nearly unbiased and their variance is close to the minimum. Moreover, the fitting residuals are smaller than those of traditional linear regression.
{"title":"A new methodology of nonlinear parameter approximation used for rheological model of drilling fluids","authors":"Jisen Yin, Jian Li, You Xiao","doi":"10.1109/ICNC.2011.6022403","DOIUrl":"https://doi.org/10.1109/ICNC.2011.6022403","url":null,"abstract":"This paper proposes an effective way of searching for initial value, which is based on Gauss-Newton iteration and combined with the common features of nonlinear rheological equation of drilling fluids. With this method, we can find a fine initial value, which can be applied to the parameter estimation of nonlinear rheological model of drilling fluids. This method overcomes the shortcomings of Gauss-Newton method which strongly depends on the initial value and the iteration may not be convergent in practical application and fully exerts the advantages of Gauss-Newton method which has smaller workload in each step and faster pace of convergence. Large quantities of measured drilling fluids examples show that the rheological parameters estimated by this method have a fine statistical characteristic, that is, fitting residual is nearly unbiased and variance is almost minimum. Besides, the fitting residual is smaller than the one of traditional linear regression and has excellent statistical properties.","PeriodicalId":299503,"journal":{"name":"2011 Seventh International Conference on Natural Computation","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124055165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An improved genetic algorithm for hydrological model calibration
Pub Date: 2011-07-26 | DOI: 10.1109/ICNC.2011.6022399
Jungang Luo, Jiancang Xie, Yuxin Ma, Gang Zhang
To overcome the slow convergence and premature convergence of conventional genetic algorithms, an improved genetic algorithm with directional self-learning (DSLGA) is proposed in this paper. Directional information is introduced into the local search process of the self-learning operator, and the search direction is guided by the pseudo-gradient of the objective function. Through competition, cooperation and learning among individuals, the best solution is updated continuously. A deletion operator is also proposed to increase population diversity, which avoids premature convergence and improves the convergence speed. Theoretical analysis proves that DSLGA is globally convergent. In experiments, DSLGA was tested on five unconstrained high-dimensional functions and compared with MAGA. Finally, DSLGA was applied to parameter estimation for the Muskingum model and compared with GAGA and MAGA. The experimental and application results show that DSLGA outperforms these algorithms in both solution quality and computational complexity, demonstrating the effectiveness of the algorithm.
{"title":"An improved genetic algorithm for hydrological model calibration","authors":"Jungang Luo, Jiancang Xie, Yuxin Ma, Gang Zhang","doi":"10.1109/ICNC.2011.6022399","DOIUrl":"https://doi.org/10.1109/ICNC.2011.6022399","url":null,"abstract":"In order to overcome the disadvantages of quasi-genetic algorithm of slow convergence speed and premature convergence, an improved genetic algorithm of directional self-learning (DSLGA) is proposed in this paper. The directional information is introduced in local search process of the self-learning operator. And the search direction is guided by the pseudo-gradient of the function. By competition, cooperation and learning among the individuals, best solution is updated continuously. And a deletion operator is proposed in order to increase the population diversity, which avoid premature convergence and improve the algorithm convergence speed. Theoretical analysis has proved that DSLGA has the characteristic of global convergence. In experiment, DSLGA was tested by 5 unconstrained high-dimensional functions, and the results were compared with MAGA. Finally, the DSLGA was applied to optimal parameters estimation for Muskingum model, and compared with GAGA and MAGA. The experiment and application results show that DSLGA performs much better than the above algorithms both in quality of solutions and in computational complexity. So the effectiveness of algorithm is obvious.","PeriodicalId":299503,"journal":{"name":"2011 Seventh International Conference on Natural Computation","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127638668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The GSP algorithm in dynamic cost prediction of enterprise
Pub Date: 2011-07-26 | DOI: 10.1109/ICNC.2011.6022400
Chengguan Xiang, Shihuan Xiong
By making use of previous results of sequential pattern mining, a projection database is built to reduce the number of scans of the whole database and the generation of candidate sequences, which makes up for the weaknesses of the GSP algorithm. In this way, mining efficiency is improved and the computing-speed requirements of massive data are met, making it convenient to retrieve the relevant cost information from massive data and then carry out cost analysis and cost prediction. The application of the improved sequential-pattern method to cost prediction in enterprises demonstrates that this kind of computing system can effectively improve the accuracy and timeliness of cost prediction.
{"title":"The GSP algorithm in dynamic cost prediction of enterprise","authors":"Chengguan Xiang, Shihuan Xiong","doi":"10.1109/ICNC.2011.6022400","DOIUrl":"https://doi.org/10.1109/ICNC.2011.6022400","url":null,"abstract":"By making use of the previous result of sequential pattern mining, a projection database will be build to help decrease the scanning times of the whole database and the creation of the candidate sequence, which can make up for the weakness of the GSP. In this way, the mining efficiency is enhanced; the demand of the computing speed of the massive data is satisfied. So it is convenient to search for the right cost information from the massive data and then to proceed with cost analysis and cost prediction. The application of the improved time sequential pattern to the cost prediction in the enterprises demonstrates that this kind of computing system can enhance the accuracy and promptness of cost prediction effectively.","PeriodicalId":299503,"journal":{"name":"2011 Seventh International Conference on Natural Computation","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131985395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Notice of Retraction: Study on advanced variance-considered machines using Mahalanobis distance
Pub Date: 2011-07-26 | DOI: 10.1109/ICNC.2011.6022100
Junheong Park, K. Sim, Seung-Min Park
A Support Vector Machine (SVM) maximizes the margin between two groups. The variance-considered machine (VCM) improves on the SVM by aligning the hyperplane according to the variance and prior probability of the two classes, reducing the error rate. However, some of the data classified by the VCM remain probabilistically imprecise. In this paper, we introduce the VCM and propose a concept that assigns a reliability, estimated by the Mahalanobis distance, to data separated by the VCM.
{"title":"Notice of RetractionStudy on advanced variance-considered machines using Mahalanobis distance","authors":"Junheong Park, K. Sim, Seung-Min Park","doi":"10.1109/ICNC.2011.6022100","DOIUrl":"https://doi.org/10.1109/ICNC.2011.6022100","url":null,"abstract":"Support Vector Machine maximizes a margin between two groups. Variance-considered machine improves SVM to align hyper plane according to two classes' variance and prior probability to reduce the error rate. There is probabilistically imprecise things those data classified by VCM. In this paper, we introduce the VCM and try to propose a concept that is to confer reliability estimated by Mahalanobis distance upon data separated by VCM.","PeriodicalId":299503,"journal":{"name":"2011 Seventh International Conference on Natural Computation","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130429841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An upwind finite volume element method for nonlinear evolutional problem and theory analysis
Pub Date: 2011-07-26 | DOI: 10.1109/ICNC.2011.6022522
Fuzheng Gao, T. Zhang, Feng Chang
An upwind finite volume element method (FVEM) is constructed for computing nonlinear evolution problems. A priori error estimates in the L2-norm and H1-norm are derived to determine the errors in the approximate solution. A numerical experiment shows that the method is very effective for engineering computation.
{"title":"An upwind finite volume element method for nonlinear evolutional problem and theory analysis","authors":"Fuzheng Gao, T. Zhang, Feng Chang","doi":"10.1109/ICNC.2011.6022522","DOIUrl":"https://doi.org/10.1109/ICNC.2011.6022522","url":null,"abstract":"An upwind finite volume element method (FVEM) is constructed for computing the nonlinear evolutional problems. The priori error estimates in L2-norm and H1-norm are derived to determine the errors in the approximate solution. Numerical experiment shows that the method is a very effective engineering computing method.","PeriodicalId":299503,"journal":{"name":"2011 Seventh International Conference on Natural Computation","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131492949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The impact of learning parameters on Bayesian self-organizing maps: An empirical study
Pub Date: 2011-07-26 | DOI: 10.1109/ICNC.2011.6022123
Xiaolian Guo, Haiying Wang, D. H. Glass
The Bayesian self-organizing map (BSOM) algorithm is an extended self-organizing learning process which uses the neurons' estimated posterior probabilities to replace the distance measure and neighborhood function. It is used in areas such as data clustering and density estimation. However, the impact of its learning parameters has not been rigorously studied. Based on the analysis of two synthetic datasets, this paper investigates the impact of learning parameters such as the learning rates, the initial mean values, the initial covariance matrices, the input order and the number of iterations. The experimental results indicate that the BSOM algorithm is not sensitive to the initial mean values or the number of iterations, but is rather sensitive to the learning rates, the initial covariance matrices and the input order.
{"title":"The impact of learning parameters on Bayesian self-organizing maps: An empirical study","authors":"Xiaolian Guo, Haiying Wang, D. H. Glass","doi":"10.1109/ICNC.2011.6022123","DOIUrl":"https://doi.org/10.1109/ICNC.2011.6022123","url":null,"abstract":"The Bayesian self-organizing map (BSOM) algorithm is an extended self-organizing learning process, which uses the neurons' estimated posterior probabilities to replace the distance measure and neighborhood function. It is used in such areas as data clustering and density estimation. However, the impact of learning parameters has not been rigorously studied. Based on the analysis of two synthetic datasets, this paper investigates the impact of the selection of learning parameters such as the learning rates, the initial mean values, the initial covariance matrices, the input order and the number of iterations. The experimental results indicate that the BSOM algorithm is not sensitive to the initial mean values and the number of iterations, however, it is rather sensitive to the learning rates, the initial covariance matrices and the input order.","PeriodicalId":299503,"journal":{"name":"2011 Seventh International Conference on Natural Computation","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131561169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-intersections traffic signal intelligent control using collaborative q-learning algorithm
Pub Date: 2011-07-26 | DOI: 10.1109/ICNC.2011.6022063
Chungui Li, Xianglei Yan, Fei-Ying Lin, Hongling Zhang
Since traffic congestion is ubiquitous in modern cities, optimizing the behavior of traffic lights for efficient traffic flow is a critically important goal. However, agents often select only locally optimal actions without coordinating with their neighboring intersections. In this paper, an area-wide coordination control algorithm for urban road traffic based on collaborative Q-learning is proposed, and an agent model of traffic intersections is presented. By using the collaborative Q-learning algorithm, the method substantially reduces average vehicular delay and cooperatively controls multiple intersections to achieve a near-optimal control policy. Computer simulation results show that the control algorithm effectively reduces the average delay time and performs well across multiple intersections, confirming that the coordination method used in this paper is effective.
{"title":"Multi-intersections traffic signal intelligent control using collaborative q-learning algorithm","authors":"Chungui Li, Xianglei Yan, Fei-Ying Lin, Hongling Zhang","doi":"10.1109/ICNC.2011.6022063","DOIUrl":"https://doi.org/10.1109/ICNC.2011.6022063","url":null,"abstract":"Since congestion of traffic is ubiquitous in the modern city, optimizing the behavior of traffic lights for efficient traffic flow is a critically important goal. However,agents often select only locally optimal actions without coordinating their neighbor intersections. In this paper, an urban road traffic area-wide coordination control algorithm based on collaborative Q-learning is proposed. The agent model of traffic intersections is demonstrated. The algorithm substantially reduces average vehicular delay by using a collaborative Q-learning algorithm and can cooperative control of multiple intersections to achieve a near optimal control policy. The computer simulation results show that the control algorithm can effectively reduce the average delay time and play a very good control effect with multi-intersections, so the coordination method used in this paper is effective.","PeriodicalId":299503,"journal":{"name":"2011 Seventh International Conference on Natural Computation","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133080911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The optimization of SPH method and its application in simulation of water wave
Pub Date: 2011-07-26 | DOI: 10.1109/ICNC.2011.6022432
Changgen Liu, Jinbao Zhang, Yunfang Sun
The kernel function and the surface-particle tracking method are very important for the smoothed-particle hydrodynamics (SPH) model. A new kernel function and surface-particle tracking method are introduced into the SPH model in this paper, and the optimized SPH model is verified against experimental data. The optimized model is then used to simulate the interaction between waves and a semi-circular breakwater in order to calculate the transmission and reflection coefficients, and the interaction between a structure and a tsunami is also studied using the 3D SPH model.
{"title":"The optimization of SPH method and its application in simulation of water wave","authors":"Changgen Liu, Jinbao Zhang, Yunfang Sun","doi":"10.1109/ICNC.2011.6022432","DOIUrl":"https://doi.org/10.1109/ICNC.2011.6022432","url":null,"abstract":"The kernel function and the surface particle tracking method is very important for Smoothed-particle hydrodynamics (SPH) model. A new kernel function and surface particle tracking method are introduced in SPH model in this paper, and the optimized SPH model is verified by experimental data. Then the optimized model is used in simulating the interaction between wave and semi-circular breakwater to calculate the transmission and reflection coefficient, and in the interaction between structure and tsunami is as well studied by using the 3D SPH model.","PeriodicalId":299503,"journal":{"name":"2011 Seventh International Conference on Natural Computation","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130799011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fault predictive diagnosis of wind turbine based on LM arithmetic of Artificial Neural Network theory
Pub Date: 2011-07-26 | DOI: 10.1109/ICNC.2011.6021921
Lincang Ju, Dekuan Song, Beibei Shi, Qiang Zhao
This paper analyses the main fault factors of wind turbines and presents three common faults: gearbox faults, yaw-system faults and generator faults. After analysing the basic principle of the back-propagation neural network trained with the Levenberg-Marquardt (LM) algorithm, a three-layer back-propagation network fault-prediction and diagnosis model is built. Data from two wind turbines are used to test the effectiveness of this method.
{"title":"Fault predictive diagnosis of wind turbine based on LM arithmetic of Artificial Neural Network theory","authors":"Lincang Ju, Dekuan Song, Beibei Shi, Qiang Zhao","doi":"10.1109/ICNC.2011.6021921","DOIUrl":"https://doi.org/10.1109/ICNC.2011.6021921","url":null,"abstract":"This paper analyses the main fault factors on wind turbine, and presents three general faults: gear box fault, leeway system fault and generator fault. After the analysis and research of the basic principle of Back-Propagation Neural Network based on LM arithmetic, a three-layer Back-Propagation Network faults predictive diagnosis model is built. Data from two wind turbines are used to test the effectiveness of this method.","PeriodicalId":299503,"journal":{"name":"2011 Seventh International Conference on Natural Computation","volume":"153 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133133430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Speech recognition based on k-means clustering and neural network ensembles
Pub Date: 2011-07-26 | DOI: 10.1109/ICNC.2011.6022159
Xin-guang Li, Min-feng Yao, Wen-Tao Huang
To address the disadvantages of a single BP neural network in speech recognition, a speech recognition method based on k-means clustering and neural network ensembles is presented in this paper. First, a number of individual neural networks are trained; the k-means clustering algorithm is then used to select a subset of the trained individuals' weights and thresholds to improve diversity. After that, the individuals nearest to the cluster centers are selected to form the members' initial weights and thresholds for ensemble learning. The method not only overcomes the shortcomings of a single BP neural network model, which easily converges to local optima and lacks stability, but also addresses the long training time of the traditional AdaBoost method and the limited diversity of its individual networks. The final experimental results prove the effectiveness of this method when applied to speaker-independent speech recognition.
{"title":"Speech recognition based on k-means clustering and neural network ensembles","authors":"Xin-guang Li, Min-feng Yao, Wen-Tao Huang","doi":"10.1109/ICNC.2011.6022159","DOIUrl":"https://doi.org/10.1109/ICNC.2011.6022159","url":null,"abstract":"Aiming at the disadvantages of the single BP neural network in speech recognition, a method of speech recognition based on k-means clustering and neural network ensembles is presented in this paper. At first, a number of individual neural networks are trained, and then the k-means clustering algorithm is used to select a part of the trained individuals' weights and thresholds for improving diversity. After that, the individuals of the nearest clustering center are selected to make up the membership's initial weights and thresholds of the ensemble learning. The method not only overcomes the shortcomings that single BP neural network model is easy to local convergence and is lack of stability, but also solves the problems that the traditional adaboost method in training time is too long and the diversity of individual network is not obvious. The final experiment results prove the effectiveness of this method when applied to speakers of independent speech recognition.","PeriodicalId":299503,"journal":{"name":"2011 Seventh International Conference on Natural Computation","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127829130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}