Pub Date: 2016-08-01 | DOI: 10.1109/FSKD.2016.7603190
Baorong He, Dekuan Xu
Song titles are a special form of language expression with modernity and popularity: they are short in form and concise in meaning, and they can reflect the ideology and values of an era. In this paper, we build a co-occurrence network from the titles of approximately six thousand Chinese popular songs. We study it comprehensively from the perspective of complex networks and explain characteristics such as the small-world effect, scale-free degree distribution, hierarchy, betweenness centrality and assortativity. This paper reveals the distinctive nature of the co-occurrence network of popular song titles and broadens the scope of language network studies.
Title: An exploration on the word co-occurrence network of Chinese popular song titles
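The network construction the abstract describes can be sketched in a few lines of pure Python. The three sample titles and the word-level co-occurrence rule (two words are linked if they appear in the same title) are illustrative assumptions, not the authors' corpus or their exact definition:

```python
from itertools import combinations
from collections import defaultdict

def build_cooccurrence_network(titles):
    """Link every pair of words that appear in the same title;
    the edge weight counts how many titles the pair shares."""
    edges = defaultdict(int)
    nodes = set()
    for title in titles:
        words = set(title.split())
        nodes.update(words)
        for u, v in combinations(sorted(words), 2):
            edges[(u, v)] += 1
    return nodes, dict(edges)

def degree_distribution(nodes, edges):
    """Node degree = number of distinct co-occurring words."""
    deg = {n: 0 for n in nodes}
    for (u, v) in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

# Toy corpus standing in for the ~6000 real titles (hypothetical data).
titles = ["moon river", "blue moon", "river of no return"]
nodes, edges = build_cooccurrence_network(titles)
deg = degree_distribution(nodes, edges)
```

On the full corpus, metrics such as betweenness centrality and assortativity would then be computed on this graph.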
Pub Date: 2016-08-01 | DOI: 10.1109/FSKD.2016.7603521
Teck Yan Tan, Li Zhang, Ming Jiang
It is challenging to develop an intelligent agent-based or robotic system that conducts long-term automatic health monitoring and robust, efficient disease diagnosis as an autonomous e-Carer in real-world applications. In this research, we address these challenges by presenting an intelligent decision support system for skin lesion recognition as an initial step; the system could be embedded into an intelligent service robot for health monitoring in home environments to promote early diagnosis. The system identifies benign and malignant skin lesions in multiple steps, including pre-processing such as noise removal, segmentation, feature extraction from lesion regions, feature selection and classification. After extracting thousands of raw shape, colour and texture features from the lesion areas, a genetic algorithm (GA) is used to identify the most discriminative feature subsets for healthy and cancerous cases. A support vector machine classifier then performs benign and malignant lesion recognition. Evaluated on 1300 images from the Dermofit dermoscopy image database, the empirical results indicate that our approach achieves superior performance in comparison to other related research reported in the literature.
Title: An intelligent decision support system for skin cancer detection from dermoscopic images
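The GA-driven feature-subset selection step can be illustrated with a minimal sketch. The six-feature toy problem and the surrogate fitness function are assumptions standing in for the thousands of real shape/colour/texture descriptors and the SVM-based scoring:

```python
import random

random.seed(0)

# Hypothetical toy setup: 6 candidate features, of which only features
# 0 and 3 are informative for separating the two classes.
N_FEATURES = 6
INFORMATIVE = {0, 3}

def fitness(mask):
    """Toy stand-in for cross-validated classifier accuracy: reward
    informative features, penalise noisy ones. A real system would
    score each subset by training an SVM on it."""
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & INFORMATIVE) - 0.2 * len(chosen - INFORMATIVE)

def ga_select(pop_size=20, generations=30, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]           # elitist truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_FEATURES)        # one-point crossover
            child = [bit ^ (random.random() < p_mut)     # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best_subset = ga_select()
```

Elitism makes the best fitness monotone non-decreasing across generations, which is why such a loop reliably converges on small subset problems.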
Pub Date: 2016-08-01 | DOI: 10.1109/FSKD.2016.7603228
Yin Liu, Xiyue Wang, Yipeng Hang, Ling He, H. Yin, Chuxian Liu
Due to a defective velopharyngeal mechanism, speakers with cleft palate allow air to pass through the nasal cavity, which introduces inappropriate nasal resonance during speech production and results in hypernasal speech. Hypernasality severely reduces the intelligibility of speech. Treating cleft palate hypernasal speech requires a follow-up operation to close the cleft and restore the normal voice, and speech evaluation is essential for assessing hypernasality grades. In this work, an automatic method for detecting hypernasality grades in cleft palate speech is proposed. After low-quefrency liftering with a cutoff of 90 quefrencies in the cepstral domain, a homomorphic spectrum is calculated as the extracted feature. A BP neural network classifier based on natural computation is then applied to detect four grades of hypernasality: normal, mild, moderate and severe. The experimental results show that the classification accuracy over the four grades of hypernasality is above 80%.
Title: Hypernasality detection in cleft palate speech based on natural computation
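The cepstral processing the abstract outlines (real cepstrum, low-quefrency liftering, homomorphic spectrum) can be sketched in pure Python. The 64-sample synthetic frame and the cutoff of 8 quefrencies are toy assumptions in place of real speech frames and the paper's 90-quefrency cutoff:

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def homomorphic_spectrum(frame, cutoff):
    """Real cepstrum = IDFT(log|DFT(x)|); keeping only quefrencies
    below `cutoff` (low-quefrency liftering, applied symmetrically)
    and transforming back yields a smoothed spectral envelope."""
    spectrum = dft(frame)
    log_mag = [math.log(abs(X) + 1e-12) for X in spectrum]
    cepstrum = [c.real for c in idft(log_mag)]
    liftered = [c if min(q, len(frame) - q) < cutoff else 0.0
                for q, c in enumerate(cepstrum)]
    return [X.real for X in dft(liftered)]

# Hypothetical 64-sample voiced-speech-like frame (two sinusoids).
frame = [math.sin(2 * math.pi * 5 * n / 64) +
         0.3 * math.sin(2 * math.pi * 13 * n / 64) for n in range(64)]
env = homomorphic_spectrum(frame, cutoff=8)
```

The resulting envelope values would then feed the grade classifier; a real system would use an FFT rather than this O(N²) DFT.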
Pub Date: 2016-08-01 | DOI: 10.1109/FSKD.2016.7603285
Huijuan Zhang, R. Sun
Because of the variability of rare earth futures and the uncertainty of the market, many investors hope to predict the future price of rare earth futures on the stock market. Neural networks perform well in short-term forecasting and do not require an explicit, complex nonlinear mathematical model of the underlying relationship. Exploiting these advantages, this paper uses a neural network optimised by a genetic algorithm to predict the closing price of rare earth stock from its historical data. Within the genetic algorithm, parameters such as the crossover rate, mutation rate, number of iterations and population size are analysed. Based on the results of this parameter analysis, a hybrid machine learning model suited to the prediction of rare earth stock is established, providing a reference for investors.
Title: Parameter analysis of hybrid intelligent model for the prediction of rare earth stock futures
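The kind of parameter analysis the abstract describes can be sketched with a toy GA: vary one parameter at a time and record the final error. The one-dimensional error surface stands in for the network's prediction error, and every parameter value below is illustrative, not from the paper:

```python
import random

def run_ga(crossover_rate, mutation_rate, pop_size, iterations, seed=1):
    """Minimise a toy error surface f(w) = (w - 0.7)^2, standing in
    for the network's prediction error as a function of one weight."""
    rng = random.Random(seed)
    f = lambda w: (w - 0.7) ** 2
    pop = [rng.uniform(-1, 1) for _ in range(pop_size)]
    for _ in range(iterations):
        pop.sort(key=f)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            w = (a + b) / 2 if rng.random() < crossover_rate else a
            if rng.random() < mutation_rate:     # Gaussian mutation
                w += rng.gauss(0, 0.1)
            children.append(w)
        pop = survivors + children
    return min(f(w) for w in pop)

# Sweep the mutation rate with the other parameters held fixed.
errors = {m: run_ga(0.8, m, pop_size=20, iterations=50)
          for m in (0.01, 0.1, 0.5)}
```

Repeating the sweep over crossover rate, population size and iteration count gives the kind of sensitivity table the paper's analysis is based on.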
Pub Date: 2016-08-01 | DOI: 10.1109/FSKD.2016.7603425
Xiaoxue Wang, Conghui Zhu, Sheng Li, T. Zhao, Dequan Zheng
Statistical machine translation (SMT) plays an increasingly important role, and its performance depends largely on the size and quality of the training data. However, translation demand is diverse, and making the best use of limited in-domain data to satisfy translation needs from different domains is a hot topic in current SMT research. Domain adaptation aims to markedly improve domain-specific performance by exploiting abundant out-of-domain parallel corpora when in-domain parallel corpora are scarce, and it is one of the keys to bringing SMT into practical application. This paper introduces the mainstream domain adaptation methods for SMT, compares the advantages and disadvantages of representative methods on the same data, and offers our views on possible future directions for SMT domain adaptation.
Title: Domain adaptation for statistical machine translation
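One widely used data-selection technique in this area is Moore and Lewis's cross-entropy difference: score each candidate sentence by how much more in-domain-like than general-domain-like it is, and keep the lowest-scoring ones. A toy sketch with add-one-smoothed unigram language models (the corpora and candidate sentences are made up):

```python
import math
from collections import Counter

def unigram_lm(corpus_tokens):
    """Add-one-smoothed unigram model estimated from a token list."""
    counts = Counter(corpus_tokens)
    total = len(corpus_tokens)
    vocab = len(counts) + 1          # +1 slot for unseen words
    return lambda w: (counts[w] + 1) / (total + vocab)

def cross_entropy(lm, sentence):
    toks = sentence.split()
    return -sum(math.log(lm(w)) for w in toks) / len(toks)

# Hypothetical corpora: a small in-domain sample vs. general text.
in_domain = "the patent claims a novel method".split()
general = "the cat sat on the mat and the dog slept".split()
lm_in, lm_gen = unigram_lm(in_domain), unigram_lm(general)

# Moore-Lewis score: H_in(s) - H_gen(s); lower = more in-domain-like.
candidates = ["a novel method is claimed", "the cat and the dog"]
scores = {s: cross_entropy(lm_in, s) - cross_entropy(lm_gen, s)
          for s in candidates}
selected = min(scores, key=scores.get)
```

Real systems use n-gram or neural LMs and select the top fraction of a large out-of-domain corpus, but the scoring rule is the same.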
Pub Date: 2016-08-01 | DOI: 10.1109/FSKD.2016.7603201
Zhi-wei Xing, Yunxiao Tang, Qian Luo
In view of the problems in airport flight slot allocation, such as flight delays, congestion and wasted resources, this paper uses game theory to analyse the interactions that arise during slot allocation between the management department and the airlines, as well as among the airlines themselves. Under certain assumptions and game-theoretic principles, this study builds a Stackelberg game model of flight slot allocation, obtains its Nash equilibrium, and puts forward a strategy model based on credibility priority, all in order to optimise the slot allocation. Finally, this paper analyses a computational case based on actual departure data from a western airport, showing that, with the game strategy model optimised by both price and credibility, resource utilisation, fairness and allocation efficiency in flight slot allocation are all greatly improved.
Title: A flight slot allocation model based on game theory
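The leader-follower structure of a Stackelberg game can be sketched by backward induction over a discrete toy game: the airport (leader) sets a slot price, the airlines (followers) respond with a demand, and the leader optimises while anticipating that response. All prices, demands and the capacity below are invented numbers, not the paper's model:

```python
# Discrete toy Stackelberg game for slot pricing (illustrative only).
PRICES = [1, 2, 3, 4, 5]
DEMAND = {1: 8, 2: 6, 3: 3, 4: 2, 5: 1}   # slots requested at each price
CAPACITY = 6                               # runway capacity

def follower_request(price):
    """Best response of the airlines: demand falls as price rises."""
    return DEMAND[price]

def leader_payoff(price):
    """Airport revenue, capped by capacity; requests beyond capacity
    earn nothing and incur a congestion penalty."""
    q = follower_request(price)
    served = min(q, CAPACITY)
    congestion_penalty = 2 * max(0, q - CAPACITY)
    return price * served - congestion_penalty

# Backward induction: the leader optimises given the followers' response.
best_price = max(PRICES, key=leader_payoff)
equilibrium = (best_price, follower_request(best_price))
```

A credibility-priority rule like the paper's would enter as an extra term in the followers' payoffs, reordering who is served when demand exceeds capacity.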
Pub Date: 2016-08-01 | DOI: 10.1109/FSKD.2016.7603360
R. Katarzyniak, Wojciech A. Lorkiewicz, Dominik P. Wiecek
An original model of linguistic summaries extracted from episodic data is briefly presented. In particular, a class of linguistic summaries expressed as modal equivalences is considered. The model is tailored to the concept of autonomous agent systems and is supported by several detailed, non-technical theories of natural language processing and knowledge representation. Complementary to the well-known classical interpretation of linguistic summaries based on fuzzy set theory, the proposed model deals with a different class of vague cognitive concepts: epistemic modalities, in particular the concepts of knowledge, belief and possibility. Each sub-class of linguistic summaries is processed as understood in the context of natural systems and supported by a related cognitive semantics. Remarks on relevant implementation technologies are given, and an illustrative computational example is presented.
Title: Modal linguistic summaries based on natural language equivalence with cognitive semantics
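How episodic data might ground modal summaries can be illustrated with a toy rule that maps the observed frequency of a property P to an epistemic operator. The frequency thresholds here are assumptions made for this sketch, not the semantics defined in the paper:

```python
def modal_summary(positive, total):
    """Map the share of supporting episodes to a modal statement
    about P (thresholds are illustrative assumptions)."""
    if total == 0:
        return "no grounding"
    r = positive / total
    if r == 1.0:
        return "I know that P"          # knowledge: no counter-episode
    if r >= 0.75:
        return "I believe that P"       # belief: strong but defeasible
    if r > 0.0:
        return "It is possible that P"  # possibility: some support
    return "I know that not P"

episodes = [True, True, True, False]    # hypothetical episodic data
summary = modal_summary(sum(episodes), len(episodes))
```

The point of such grounding is that the agent's choice of operator is driven by its own episodic memory rather than by externally supplied membership functions.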
Pub Date: 2016-08-01 | DOI: 10.1109/FSKD.2016.7603392
Wei Ai, Dapu Li
With the emergence of the big data age, extracting valuable hot topics quickly and accurately from vast amounts of digitised text has attracted increasing attention. This paper proposes a parallel Two-phase Mic-mac Hot Topic Detection (TMHTD) method specially designed for microblogging in a big data environment and implemented on the Apache Spark cloud computing platform. TMHTD is a distributed two-phase clustering framework for document sets, comprising micro-clustering and macro-clustering. In the first phase, TMHTD partitions the original data into a group of smaller data sets, clusters these subsets into many small topics, and produces intermediate results. In the second phase, the intermediate results are merged and further clustered to obtain the final set of hot topics. An optimisation of TMHTD is also proposed to improve detection accuracy. To handle large data sets, we design a group of MapReduce jobs that accomplish hot topic detection in a highly scalable way. Extensive experimental results indicate that the accuracy and performance of the TMHTD algorithm improve significantly over existing approaches.
Title: Parallelizing hot topic detection of microblog on spark
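The micro/macro two-phase scheme can be sketched in pure Python on one-dimensional "document vectors", with list partitions standing in for Spark partitions. The greedy clustering rule and all data are toy assumptions, not the paper's algorithm:

```python
def micro_cluster(partition, radius=1.0):
    """Phase 1: greedy micro-clustering within one partition.
    Returns (centroid, size) pairs."""
    centroids = []
    for x in sorted(partition):
        for i, (m, c) in enumerate(centroids):
            if abs(x - m) <= radius:                     # absorb point
                centroids[i] = ((m * c + x) / (c + 1), c + 1)
                break
        else:
            centroids.append((x, 1))                     # new micro-topic
    return centroids

def macro_cluster(micros, radius=1.0):
    """Phase 2: merge micro-centroids gathered from all partitions,
    weighting each by its cluster size."""
    merged = []
    for m, c in sorted(micros):
        for i, (mm, cc) in enumerate(merged):
            if abs(m - mm) <= radius:
                merged[i] = ((mm * cc + m * c) / (cc + c), cc + c)
                break
        else:
            merged.append((m, c))
    return merged

# Two partitions of hypothetical topic "signals".
partitions = [[0.1, 0.2, 5.0, 5.1], [0.15, 5.05, 9.9]]
micros = [c for p in partitions for c in micro_cluster(p)]
topics = macro_cluster(micros)
hot = max(topics, key=lambda t: t[1])    # largest cluster = hottest topic
```

On Spark, phase 1 would run as a map over partitions and phase 2 as a reduce over the collected centroids, which is what keeps the scheme scalable.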
Pub Date: 2016-08-01 | DOI: 10.1109/FSKD.2016.7603174
Shengyu Pei, Lang Tong
Since the K-means algorithm depends on the initial clustering centres, and particle swarm optimization (PSO) converges prematurely and is easily trapped in local minima, a Gaussian kernel particle swarm optimization clustering algorithm is proposed in this paper. The algorithm adopts good-point-set theory to initialise the population, which makes the initial clustering centres more rational. The particle swarm iteration formula is optimised using the Gaussian kernel method, which makes the algorithm converge rapidly to the global optimum. Tests on 23 UCI data sets show that the clustering performance of the proposed algorithm is better than that of K-means and the traditional particle swarm optimization clustering algorithm.
Title: Gaussian kernel particle swarm optimization clustering algorithm
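The kernel-induced objective behind such a clustering can be sketched with a minimal PSO on 1-D data: particles encode candidate cluster centres, and the cost is the kernel-space distance to the nearest centre. Note that this sketch uses plain random initialisation rather than the paper's good-point-set scheme, and all constants are illustrative:

```python
import math
import random

random.seed(3)
DATA = [0.0, 0.2, 0.1, 3.0, 3.2, 3.1]   # two well-separated 1-D clusters
K_CLUSTERS = 2
SIGMA = 1.0

def gauss_kernel(a, b):
    return math.exp(-(a - b) ** 2 / (2 * SIGMA ** 2))

def cost(centers):
    """Sum over points of the kernel-induced squared distance to the
    nearest centre: ||phi(x)-phi(c)||^2 = 2 - 2*K(x, c) for a
    Gaussian kernel (since K(x, x) = 1)."""
    return sum(min(2 - 2 * gauss_kernel(x, c) for c in centers)
               for x in DATA)

def pso_cluster(n_particles=10, iters=60, w=0.7, c1=1.4, c2=1.4):
    pos = [[random.uniform(0, 4) for _ in range(K_CLUSTERS)]
           for _ in range(n_particles)]
    vel = [[0.0] * K_CLUSTERS for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(K_CLUSTERS):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):        # update personal best
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):     # update global best
                    gbest = pbest[i][:]
    return sorted(gbest)

centers = pso_cluster()
```

The swarm should settle its two centres near the two data clumps, which scores far better than a single midpoint configuration such as [2.0, 2.0].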
Pub Date: 2016-08-01 | DOI: 10.1109/FSKD.2016.7603283
Huanlin Liu, Ling Yu
Moving forces are very important for bridge design, structural analysis and structural health monitoring, and studies on moving force identification (MFI) have attracted extensive attention in past decades. In this study, a novel two-step MFI method is proposed based on particle swarm optimization (PSO) and the time domain method (TDM). In the first step, PSO is used to identify the constant loads without matrix inversion. In the second step, the conventional TDM is employed to estimate the remaining time-varying loads, with Tikhonov regularization introduced to improve MFI accuracy and generalised cross validation (GCV) used to select optimal regularization parameters. A simply supported beam bridge subjected to moving forces is taken as a numerical example to assess the performance of the proposed method. The results show that the new two-step MFI method identifies the moving forces more effectively than the conventional TDM and the improved Tikhonov regularization method, and that it provides more accurate MFI results for two moving forces under eight combinations of bridge responses.
Title: Moving force identification based on particle swarm optimization
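The role of Tikhonov regularization in the second (TDM) step can be illustrated on a tiny ill-conditioned system: a 1e-4 perturbation on one "measurement" flips the naive solution from the true loads [1, 1] to roughly [0, 2], while the regularised solve stays near [1, 1]. The response matrix and measurements are made-up stand-ins for the bridge response model:

```python
def solve2(M, v):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(v[0] * M[1][1] - v[1] * M[0][1]) / det,
            (M[0][0] * v[1] - M[1][0] * v[0]) / det]

def tikhonov(A, b, lam):
    """x = (A^T A + lam*I)^{-1} A^T b: the regularised least-squares
    solution used in place of a plain, noise-amplifying inversion."""
    n = len(A)
    AtA = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(2)]
           for i in range(2)]
    Atb = [sum(A[k][i] * b[k] for k in range(n)) for i in range(2)]
    AtA[0][0] += lam
    AtA[1][1] += lam
    return solve2(AtA, Atb)

# Nearly singular "response matrix"; noise-free measurements would be
# [2.0, 2.0001], consistent with the true forces [1, 1].
A = [[1.0, 1.0], [1.0, 1.0001]]
b = [2.0, 2.0002]                 # small noise on the second reading
naive = solve2(A, b)              # exact inversion amplifies the noise
regularised = tikhonov(A, b, lam=0.01)
```

In the paper's setting the regularization parameter `lam` is not fixed by hand but chosen by GCV, and the PSO step handles the constant-load component separately.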