Acknowledgements to the Referees (2019)
Pub Date: 2020-02-01 | DOI: 10.1142/s0218488520970016
International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems
Interval Methods in Knowledge Representation
Pub Date: 2020-02-01 | DOI: 10.1142/s0218488520970028
Please send your abstracts (or copies of papers that you want to see reviewed here) to vladik@utep.edu, or by regular mail to Vladik Kreinovich, Department of Computer Science, University of Texas at El Paso, El Paso, TX 79968, USA…
International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems
A Multi-Scale Fuzzy Spatial Analysis Framework for Large Data Based on IT2 FS
Pub Date: 2015-02-15 | DOI: 10.1142/S021848851550004X
Gu Jifa, Mao Jian, Cui Tie-jun, Li Chongwei
The geographical world is an intricate system that comprises the interaction of the Earth's atmosphere, hydrosphere, biosphere, lithosphere, and pedosphere. Existing technologies and systems can only store, represent, and analyze crisp or type-I fuzzy spatial data, and can obtain spatial knowledge only at several discrete scales. Moreover, these technologies are limited in the representation and analysis of multi-scale and high-order-vagueness spatial data, particularly regarding the representation and acquisition of multi-scale knowledge. In this paper, the uncertainty in geographic information systems (GISs) and the existing problems of classical spatial analysis methods are summarized. Innovative concepts, such as the scale aggregation model and scale polymorphism, are proposed. A multi-scale fuzzy spatial analysis framework based on an interval type-II fuzzy set is introduced, and its critical components are highlighted: an interval type-II fuzzy geographical object model (the boundary model and metric methods for geometric properties), direction relations, topological relations, and overlap methods. A real-world case based on a multi-scale regional debris-flow hazard assessment is used to confirm the validity of the theory proposed in this paper.
{"title":"A Multi-Scale Fuzzy Spatial Analysis Framework for Large Data Based on IT2 FS","authors":"Gu Jifa, Mao Jian, Cui Tie-jun, Li Chongwei","doi":"10.1142/S021848851550004X","DOIUrl":"https://doi.org/10.1142/S021848851550004X","url":null,"abstract":"The geographical world is an intricate system that comprises the interaction of the Earth's atmosphere, hydrosphere, biosphere, lithosphere, and pedosphere. Existing technologies and systems can only store, represent, and analyze crisp or type-I fuzzy spatial data and obtain spatial knowledge on several discrete scales. However, these technologies are limited to multi-scale and high-order vagueness spatial data representation and analysis, particularly regarding the representation and acquisition of multi-scale knowledge. In this paper, the uncertainty in geographic information systems (GISs) and existing problems in classical spatial analysis methods are summarized. Innovative concepts, such as the scale aggregation model and scale polymorphism, are proposed. A multi-scale fuzzy spatial analysis framework based on an interval type-II fuzzy set is introduced, and critical points are highlighted, such as an interval type-II fuzzy geographical object model (the boundary model and metric methods for geometric properties), direction relations, topological relations, and overlap methods. An actual case based on a multi-scale regional debris-flow hazard assessment is used to confirm the validity of the theory proposed in this paper.","PeriodicalId":50283,"journal":{"name":"International Journal of Uncertainty Fuzziness and Knowledge-Based Systems","volume":"1 1","pages":"73-104"},"PeriodicalIF":1.5,"publicationDate":"2015-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80497335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FUZZY EXTREME LEARNING MACHINE FOR A CLASS OF FUZZY INFERENCE SYSTEMS
Pub Date: 2013-10-31 | DOI: 10.1142/S0218488513400151
Hai-Jun Rong, G. Huang, Yong-Qi Liang
Recently, an Online Sequential Fuzzy Extreme Learning Machine (OS-Fuzzy-ELM) algorithm was developed by Rong et al. for RBF-like fuzzy neural systems, in which a fuzzy inference system is equivalent to an RBF network under some conditions. In this paper, the learning ability of the batch version of OS-Fuzzy-ELM, called Fuzzy-ELM, is further evaluated for training a class of fuzzy inference systems that cannot be represented by RBF networks. The equivalence between the output of the fuzzy system and that of a generalized single-hidden-layer feedforward network, as presented in Huang et al., is shown first and is then used to prove the validity of the Fuzzy-ELM algorithm. In Fuzzy-ELM, the parameters of the fuzzy membership functions are randomly assigned, and the corresponding consequent parameters are then determined analytically. In addition, an input variable selection method based on a correlation measure is proposed to select the relevant inputs of the fuzzy system. This avoids the exponential growth in the number of fuzzy rules as the dimension of the input variables increases, while maintaining the testing performance and reducing the computational burden. A performance comparison of Fuzzy-ELM with other existing algorithms is presented on several real-world regression benchmark problems. The results show that the proposed Fuzzy-ELM produces similar or better accuracies with a significantly lower training time.
{"title":"FUZZY EXTREME LEARNING MACHINE FOR A CLASS OF FUZZY INFERENCE SYSTEMS","authors":"Hai-Jun Rong, G. Huang, Yong-Qi Liang","doi":"10.1142/S0218488513400151","DOIUrl":"https://doi.org/10.1142/S0218488513400151","url":null,"abstract":"Recently an Online Sequential Fuzzy Extreme Learning (OS-Fuzzy-ELM) algorithm has been developed by Rong et al. for the RBF-like fuzzy neural systems where a fuzzy inference system is equivalent to a RBF network under some conditions. In the paper the learning ability of the batch version of OS-Fuzzy-ELM, called as Fuzzy-ELM is further evaluated to train a class of fuzzy inference systems which can not be represented by the RBF networks. The equivalence between the output of the fuzzy system and that of a generalized Single-Hidden Layer Feedforward Network as presented in Huang et al. is shown first, which is then used to prove the validity of the Fuzzy-ELM algorithm. In Fuzzy-ELM, the parameters of the fuzzy membership functions are randomly assigned and then the corresponding consequent parameters are determined analytically. Besides an input variable selection method based on the correlation measure is proposed to select the relevant inputs as the inputs of the fuzzy system. This can avoid the exponential increase of number of fuzzy rules with the increase of dimension of input variables while maintaining the testing performance and reducing the computation burden. Performance comparison of Fuzzy-ELM with other existing algorithms is presented using some real-world regression benchmark problems. The results show that the proposed Fuzzy-ELM produces similar or better accuracies with a significantly lower training time.","PeriodicalId":50283,"journal":{"name":"International Journal of Uncertainty Fuzziness and Knowledge-Based Systems","volume":"11 1","pages":"51-61"},"PeriodicalIF":1.5,"publicationDate":"2013-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78609699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FUSION OF EXTREME LEARNING MACHINE WITH FUZZY INTEGRAL
Pub Date: 2013-10-31 | DOI: 10.1142/S0218488513400138
Junhai Zhai, Hong-Yu Xu, Yan Li
Extreme learning machine (ELM) is an efficient and practical algorithm for training single-hidden-layer feedforward neural networks (SLFNs). ELM can provide good generalization performance at extremely fast learning speed. However, ELM suffers from instability and over-fitting, especially on relatively large datasets. Based on probabilistic SLFNs, this paper proposes an approach that fuses extreme learning machines with a fuzzy integral (F-ELM). The proposed algorithm consists of three stages. First, the bootstrap technique is employed to generate several subsets of the original dataset. Second, probabilistic SLFNs are trained with the ELM algorithm on each subset. Finally, the trained probabilistic SLFNs are fused with a fuzzy integral. The experimental results show that the proposed approach can alleviate the problems mentioned above to some extent and can increase prediction accuracy.
{"title":"FUSION OF EXTREME LEARNING MACHINE WITH FUZZY INTEGRAL","authors":"Junhai Zhai, Hong-Yu Xu, Yan Li","doi":"10.1142/S0218488513400138","DOIUrl":"https://doi.org/10.1142/S0218488513400138","url":null,"abstract":"Extreme learning machine (ELM) is an efficient and practical learning algorithm used for training single hidden layer feed-forward neural networks (SLFNs). ELM can provide good generalization performance at extremely fast learning speed. However, ELM suffers from instability and over-fitting, especially on relatively large datasets. Based on probabilistic SLFNs, an approach of fusion of extreme learning machine (F-ELM) with fuzzy integral is proposed in this paper. The proposed algorithm consists of three stages. Firstly, the bootstrap technique is employed to generate several subsets of original dataset. Secondly, probabilistic SLFNs are trained with ELM algorithm on each subset. Finally, the trained probabilistic SLFNs are fused with fuzzy integral. The experimental results show that the proposed approach can alleviate to some extent the problems mentioned above, and can increase the prediction accuracy.","PeriodicalId":50283,"journal":{"name":"International Journal of Uncertainty Fuzziness and Knowledge-Based Systems","volume":"19 1","pages":"23-34"},"PeriodicalIF":1.5,"publicationDate":"2013-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90885555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RECOGNIZING TRANSPORTATION MODE ON MOBILE PHONE USING PROBABILITY FUSION OF EXTREME LEARNING MACHINES
Pub Date: 2013-10-31 | DOI: 10.1142/S0218488513400126
Shuangquan Wang, Yiqiang Chen, Zhenyu Chen
As an important clue to understanding people's behavior and life patterns, transportation mode information (such as walking, bicycling, taking a bus, driving, or taking light rail or the subway) is already widely used in mobile recommendation, route planning, social networking, and health care. This paper proposes a transportation mode recognition method using probability fusion of extreme learning machines (ELMs). Two ELM classification models are trained to recognize accelerometer data and Global Positioning System (GPS) data, respectively. The fuzzy output vectors of these two ELMs are transformed into probability vectors and fused to determine the final result. Experimental results verify that the proposed method is effective and obtains higher recognition accuracy than traditional fusion methods.
{"title":"RECOGNIZING TRANSPORTATION MODE ON MOBILE PHONE USING PROBABILITY FUSION OF EXTREME LEARNING MACHINES","authors":"Shuangquan Wang, Yiqiang Chen, Zhenyu Chen","doi":"10.1142/S0218488513400126","DOIUrl":"https://doi.org/10.1142/S0218488513400126","url":null,"abstract":"As one important clue to understand people's behavior and life pattern, transportation mode (such as walking, bicycling, taking bus, driving, taking light-rail or subway, etc.) information has already widely used in mobile recommendation, route planning, social networking and health caring. This paper proposes a transportation mode recognition method using probability fusion of extreme learning machines (ELMs). Two ELM classification models are trained to recognize accelerometer data and Global Positioning System (GPS) data, respectively. Fuzzy output vectors of these two ELMs are transformed into probability vectors and fused to determine the final result. Experimental results verify that the proposed method is effective and can obtain higher recognition accuracy than traditional fusion methods.","PeriodicalId":50283,"journal":{"name":"International Journal of Uncertainty Fuzziness and Knowledge-Based Systems","volume":"23 1","pages":"13-22"},"PeriodicalIF":1.5,"publicationDate":"2013-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86353124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DIESEL ENGINE MODELLING USING EXTREME LEARNING MACHINE UNDER SCARCE AND EXPONENTIAL DATA SETS
Pub Date: 2013-10-31 | DOI: 10.1142/S0218488513400187
P. Wong, C. Vong, C. Cheung, K. Wong
To predict the performance of a diesel engine, current practice relies on black-box identification, in which numerous experiments must be carried out to obtain numerical values for model training. Although many diesel engine models based on artificial neural networks (ANNs) have already been developed, they have drawbacks such as local minima, the burden on the user of selecting an optimal network structure, large required training data sizes, and poor generalization performance, which make them difficult to put into practice. This paper proposes the use of the extreme learning machine (ELM), which can overcome most of the aforementioned drawbacks, to model the emission characteristics and the brake-specific fuel consumption of a diesel engine under scarce and exponential sample data sets. The resulting ELM model is compared with those developed using popular ANNs such as the radial basis function neural network (RBFNN) and advanced techniques such as the support vector machine (SVM) and its variants, namely the least squares support vector machine (LS-SVM) and the relevance vector machine (RVM). Furthermore, some emission outputs of diesel engines suffer from the problem of exponentiality (i.e., the output y grows exponentially with the input x), which deteriorates prediction accuracy. A logarithmic transformation is therefore applied to preprocess and post-process the sample data sets in order to improve the prediction accuracy of the model. Evaluation results show that ELM with the logarithmic transformation outperforms SVM, LS-SVM, RVM, and RBFNN with and without the logarithmic transformation, in terms of both model accuracy and training time.
{"title":"DIESEL ENGINE MODELLING USING EXTREME LEARNING MACHINE UNDER SCARCE AND EXPONENTIAL DATA SETS","authors":"P. Wong, C. Vong, C. Cheung, K. Wong","doi":"10.1142/S0218488513400187","DOIUrl":"https://doi.org/10.1142/S0218488513400187","url":null,"abstract":"To predict the performance of a diesel engine, current practice relies on the use of black-box identification where numerous experiments must be carried out in order to obtain numerical values for model training. Although many diesel engine models based on artificial neural networks (ANNs) have already been developed, they have many drawbacks such as local minima, user burden on selection of optimal network structure, large training data size and poor generalization performance, making themselves difficult to be put into practice. This paper proposes to use extreme learning machine (ELM), which can overcome most of the aforementioned drawbacks, to model the emission characteristics and the brake-specific fuel consumption of the diesel engine under scarce and exponential sample data sets. The resulting ELM model is compared with those developed using popular ANNs such as radial basis function neural network (RBFNN) and advanced techniques such as support vector machine (SVM) and its variants, namely least squares support vector machine (LS-SVM) and relevance vector machine (RVM). Furthermore, some emission outputs of diesel engines suffer from the problem of exponentiality (i.e., the output y grows up exponentially along input x) that will deteriorate the prediction accuracy. A logarithmic transformation is therefore applied to preprocess and post-process the sample data sets in order to improve the prediction accuracy of the model. Evaluation results show that ELM with the logarithmic transformation is better than SVM, LS-SVM, RVM and RBFNN with/without the logarithmic transformation, regardless the model accuracy and training time.","PeriodicalId":50283,"journal":{"name":"International Journal of Uncertainty Fuzziness and Knowledge-Based Systems","volume":"1 1","pages":"87-98"},"PeriodicalIF":1.5,"publicationDate":"2013-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79285323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EVOLVING EXTREME LEARNING MACHINE PARADIGM WITH ADAPTIVE OPERATOR SELECTION AND PARAMETER CONTROL
Pub Date: 2013-10-31 | DOI: 10.1142/S0218488513400229
Ke Li, Ran Wang, S. Kwong, Jingjing Cao
Extreme Learning Machine (ELM) is an emergent technique for training Single-hidden Layer Feedforward Networks (SLFNs). It has attracted significant interest in recent years, but the randomly assigned network parameters may cause high learning risk. This motivates the evolving ELM paradigm proposed in this paper for classification problems. In this paradigm, a Differential Evolution (DE) variant, which selects the appropriate operator for offspring generation online and adaptively adjusts the corresponding control parameters, is proposed for optimizing the network. In addition, 5-fold cross-validation is adopted in the fitness assignment procedure to improve the generalization capability. Empirical studies on several real-world classification data sets demonstrate that the evolving ELM paradigm generally outperforms the original ELM as well as several recent classification algorithms.
{"title":"EVOLVING EXTREME LEARNING MACHINE PARADIGM WITH ADAPTIVE OPERATOR SELECTION AND PARAMETER CONTROL","authors":"Ke Li, Ran Wang, S. Kwong, Jingjing Cao","doi":"10.1142/S0218488513400229","DOIUrl":"https://doi.org/10.1142/S0218488513400229","url":null,"abstract":"Extreme Learning Machine (ELM) is an emergent technique for training Single-hidden Layer Feedforward Networks (SLFNs). It attracts significant interest during the recent years, but the randomly assigned network parameters might cause high learning risks. This fact motivates our idea in this paper to propose an evolving ELM paradigm for classification problems. In this paradigm, a Differential Evolution (DE) variant, which can online select the appropriate operator for offspring generation and adaptively adjust the corresponding control parameters, is proposed for optimizing the network. In addition, a 5-fold cross validation is adopted in the fitness assignment procedure, for improving the generalization capability. Empirical studies on several real-world classification data sets have demonstrated that the evolving ELM paradigm can generally outperform the original ELM as well as several recent classification algorithms.","PeriodicalId":50283,"journal":{"name":"International Journal of Uncertainty Fuzziness and Knowledge-Based Systems","volume":"26 1","pages":"143-154"},"PeriodicalIF":1.5,"publicationDate":"2013-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90353600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
STATE-ACTION VALUE FUNCTION MODELED BY ELM IN REINFORCEMENT LEARNING FOR HOSE CONTROL PROBLEMS
Pub Date: 2013-10-31 | DOI: 10.1142/S0218488513400199
J. M. López-Guede, B. Fernández-Gauna, M. Graña
This paper addresses the efficiency of reinforcement learning for Single Robot Hose Transport (SRHT) by training an Extreme Learning Machine (ELM) on the state-action value Q-table, obtaining a large reduction in storage requirements because the number of ELM parameters is much smaller than the size of the Q-table. Moreover, the ELM implements a continuous map that can produce compact representations of the Q-table and can generalize to increased state-space resolution and to unknown situations. In this paper we empirically evaluate three strategies for formulating the ELM approximation of the Q-table: as a classification problem, as multi-variate regression, and as several independent regression problems.
{"title":"STATE-ACTION VALUE FUNCTION MODELED BY ELM IN REINFORCEMENT LEARNING FOR HOSE CONTROL PROBLEMS","authors":"J. M. López-Guede, B. Fernández-Gauna, M. Graña","doi":"10.1142/S0218488513400199","DOIUrl":"https://doi.org/10.1142/S0218488513400199","url":null,"abstract":"This paper addresses the problem of efficiency in reinforcement learning of Single Robot Hose Transport (SRHT) by training an Extreme Learning Machine (ELM) from the state-action value Q-table, obtaining large reduction in data space requirements because the number of ELM parameters is much less than the Q-table's size. Moreover, ELM implements a continuous map which can produce compact representations of the Q-table, and generalizations to increased space resolution and unknown situations. In this paper we evaluate empirically three strategies to formulate ELM learning to provide approximations to the Q-table, namely as classification, multi-variate regression and several independent regression problems.","PeriodicalId":50283,"journal":{"name":"International Journal of Uncertainty Fuzziness and Knowledge-Based Systems","volume":"66 1","pages":"99-116"},"PeriodicalIF":1.5,"publicationDate":"2013-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85055219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An analysis of ELM approximate error based on random weight matrix
Pub Date: 2013-10-31 | DOI: 10.1142/S0218488513400114
Ran Wang, S. Kwong, D. D. Wang
It is experimentally observed that, once the network architecture is fixed, the approximation error of the extreme learning machine (ELM) depends on the uniformity of the training samples, and that this uniformity, usually measured by the variance of the distances among samples, varies with the linear transformation induced by the random weight matrix. By analyzing the dimension-increase process in ELM, this paper gives an approximate relation between the uniformities before and after the linear transformation. Furthermore, by restricting ELM to a two-dimensional space, it gives an upper bound on the ELM approximation error that depends on the distributional uniformity of the training samples. The analytical results provide useful guidelines for clarifying the impact of random weights on ELM approximation ability and for improving ELM prediction accuracy.
{"title":"An analysis of ELM approximate error based on random weight matrix","authors":"Ran Wang, S. Kwong, D. D. Wang","doi":"10.1142/S0218488513400114","DOIUrl":"https://doi.org/10.1142/S0218488513400114","url":null,"abstract":"It is experimentally observed that the approximate errors of extreme learning machine (ELM) are dependent on the uniformity of training samples after the network architecture is fixed, and the uniformity, which is usually measured by the variance of distances among samples, varies with the linear transformation induced by the random weight matrix. By analyzing the dimension increase process in ELM, this paper gives an approximate relation between the uniformities before and after the linear transformation. Furthermore, by restricting ELM with a two-dimensional space, it gives an upper bound of ELM approximate error which is dependent on the distributive uniformity of training samples. The analytic results provide some useful guidelines to make clear the impact of random weights on ELM approximate ability and improve ELM prediction accuracy.","PeriodicalId":50283,"journal":{"name":"International Journal of Uncertainty Fuzziness and Knowledge-Based Systems","volume":"96 1","pages":"1-12"},"PeriodicalIF":1.5,"publicationDate":"2013-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73064195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}