Yong Zhang, Qingdong Du, Shidong Yu, Jeng-Shyang Pan
Fuzzy information fusion methods have been widely adopted in recent years to solve complicated nonlinear problems. This paper proposes a fusion learning algorithm for radial basis function (RBF) neural networks based on fuzzy evolution Kalman filtering. Using the proposed method, monitoring data are extracted and optimized for mine safety monitoring, and Matlab simulation results are analyzed. The results show that the method is feasible and learns rapidly, improving the precision and reliability of mine monitoring systems.
{"title":"RBF Neural Network Based on Fuzzy Evolution Kalman Filtering and Application in Mine Safety Monitoring","authors":"Yong Zhang, Qingdong Du, Shidong Yu, Jeng-Shyang Pan","doi":"10.1109/HIS.2009.96","DOIUrl":"https://doi.org/10.1109/HIS.2009.96","url":null,"abstract":"Fuzzy information fusion methods are adopted widely to resolve the complicated nonlinear problems in recent years. This paper proposes a fusion learning algorithm of radial basis function (RBF) neural network based on fuzzy evolution Kalman filtering. By using this proposed method, monitoring data are extracted and optimized in mine safety monitoring, and Matlab simulation results are analyzed. The results show that this method has feasibility and rapid learning efficiency, which can improve precision and reliability in mine monitoring systems.","PeriodicalId":414085,"journal":{"name":"2009 Ninth International Conference on Hybrid Intelligent Systems","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127665385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image thresholding is an important technique for image processing and pattern recognition. In this paper, a new multilevel image thresholding algorithm based on honey bee mating optimization (HBMO) is proposed. Three other methods, particle swarm optimization (PSO), the hybrid cooperative-comprehensive learning based PSO algorithm (HCOCLPSO), and the fast Otsu's method, are also implemented for comparison with the proposed method. The experiments reveal two notable findings about these three thresholding methods. First, the results of PSO and the fast Otsu's method are unstable, occasionally producing poor segmentations. Second, HCOCLPSO outperforms the original PSO, but it is still slower than HBMO while producing segmentation results similar to those of the honey bee mating optimization.
{"title":"Multi-level Thresholding Selection by Using the Honey Bee Mating Optimization","authors":"Ren-Jean Liou, M. Horng, Ting-Wei Jiang","doi":"10.1109/HIS.2009.37","DOIUrl":"https://doi.org/10.1109/HIS.2009.37","url":null,"abstract":"Image thresholding is an important technique for image processing and pattern recognition. In this paper, a new multilevel image thresholding algorithm based on the technology of the honey bee mating optimization (HBMO) is proposed. Three different methods such as the particle swarm optimization (PSO), the hybrid cooperative-comprehensive learning based PSO algorithm (HCOCLPSO) and the Fast Otsu’s method are also implemented for comparison with the results of the proposed method. The experimental results reveal two important interested results for other three image thresholding methods. One is that the results of PSO and Fast Ostu’s method are unstable that extraordinary segmentations are generated. Another is that the results of HCOCLPSO are superior to original PSO method, but it still slower than ones of HBMO and it had similar segmentation results with the ones of the honey bee mating optimization.","PeriodicalId":414085,"journal":{"name":"2009 Ninth International Conference on Hybrid Intelligent Systems","volume":"192 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114221794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scoring text features is the principal way to determine the key ideas of a text to present in a summary, and the quality of the sentence-scoring technique determines the quality of the summary. Because feature scores are imprecise and uncertain, distinguishing important features from unimportant ones is a difficult task. In this paper, we introduce fuzzy logic to deal with this problem. Our approach uses important features, combined through fuzzy logic, to extract sentences. In our experiments, we used 30 test documents from the DUC2002 data set. Each document is prepared by a preprocessing pipeline: sentence segmentation, tokenization, stop-word removal, and word stemming. We then compute scores for 9 important features of each sentence. We propose a fuzzy-logic method for sentence extraction and compare our results with a baseline summarizer and the Microsoft Word 2007 summarizer. The results show that the fuzzy method obtained the highest average precision, recall, and F-measure for the summaries.
{"title":"Sentence Features Fusion for Text Summarization Using Fuzzy Logic","authors":"Ladda Suanmali, M. Binwahlan, N. Salim","doi":"10.1109/HIS.2009.36","DOIUrl":"https://doi.org/10.1109/HIS.2009.36","url":null,"abstract":"The scoring mechanism of the text features is the unique way for determining the key ideas in the text to be presented as text summary. The efficiency of the technique used for scoring the text sentences could produce good summary. The feature scores are imprecise and uncertain, this marks the differentiation between the important features and unimportant is difficult task. In this paper, we introduce fuzzy logic to deal with this problem. Our approach used important features based on fuzzy logic to extract the sentences. In our experiment, we used 30 test documents in DUC2002 data set. Each document is prepared by preprocessing process: sentence segmentation, tokenization, removing stop word, and word stemming. Then, we use 9 important features and calculate their score for each sentence. We propose a method using fuzzy logic for sentence extraction and compare our results with the baseline summarizer and Microsoft Word 2007 summarizers. The results show that the highest average precision, recall, and F-measure for the summaries were obtained from fuzzy method.","PeriodicalId":414085,"journal":{"name":"2009 Ninth International Conference on Hybrid Intelligent Systems","volume":"375 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114008207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In automatic question answering systems, topic identification is usually based on computing the relevancy of sentences. This paper introduces an approach to computing topic relevancy. Using semantic computation over HowNet, sentence relevancy can be calculated: the topic relevancy of a sentence is obtained by computing the relevancy between the words of the sentence and the subject words. Experimental results show the effectiveness of the method.
{"title":"Research on Topic Relevancy of Sentences Based on HowNet Semantic Computation","authors":"Jinzhong Xu, Jie Liu, Xiaoming Liu","doi":"10.1109/HIS.2009.150","DOIUrl":"https://doi.org/10.1109/HIS.2009.150","url":null,"abstract":"In Automatic Question Answering System, topic identification is usually based on relevancy computation of sentences. This paper introduces an approach to compute topic relevancy. Using the semantic computation in HowNet, the relevancy of sentences can be calculated. The topic relevancy of sentences can be calculated through the computation of relevancy between words of sentence and subject words. Experimental results show the effectiveness of the method.","PeriodicalId":414085,"journal":{"name":"2009 Ninth International Conference on Hybrid Intelligent Systems","volume":"271 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122532308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The automatic recognition of unknown words is an important problem in Chinese information processing. Based on word-level characteristics, this paper proposes a method that recognizes new words using high-frequency strings. First, the high-frequency strings in each document are extracted as candidate strings. Strings that do not satisfy the distributional and independent-usage characteristics of words are then removed. Finally, the entire corpus is segmented with the remaining candidate strings, and word frequencies are counted for further filtering. Experimental results show that, on basketball-related documents downloaded from the Zaobao newspaper, this method achieves an F-score of 79.39%.
{"title":"Chinese Unknown Words Extraction Based on Word-Level Characteristics","authors":"Wenbo Pang, Xiaozhong Fan, Yijun Gu, Jiangde Yu","doi":"10.1109/HIS.2009.77","DOIUrl":"https://doi.org/10.1109/HIS.2009.77","url":null,"abstract":"The automatic recognition of unknown words is an important problem in Chinese information processing. Based on the characteristics of words, this paper proposes a method to recognize new words using high frequent strings. Firstly, the high frequent strings from each single document are extracted as candidate strings. Then the strings that cannot satisfy the characteristics of word’s distribution and word’s independently usage are removed. Finally, segment the entire corpus with these candidate strings, and count the word-frequency for further filtering. Experimental results show that, on the documents about basketball downloaded from Zaobao Newspaper, this method achieves an F-score of 79.39%.","PeriodicalId":414085,"journal":{"name":"2009 Ninth International Conference on Hybrid Intelligent Systems","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121020116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the rapidly developing Internet era, using webpage indexing structures and search engines to let information seekers quickly and precisely find and extract useful information has become an essential capability for Web users. This paper combines the data mining tool SPSS Clementine with a domain ontology to mine important information from large data sets, and then uses Java to develop an information recommender for scholars, Onto Recommender, which recommends suitable information to scholars. Preliminary experimental outcomes demonstrated the reliability and validity of the recommender, which achieved regular-level information recommendation, and accordingly demonstrated the feasibility of the techniques proposed in this paper.
{"title":"Ontology-Supported Web Recommender for Scholar Information","authors":"Sheng-Yuan Yang, Chun-Liang Hsu","doi":"10.1109/HIS.2009.61","DOIUrl":"https://doi.org/10.1109/HIS.2009.61","url":null,"abstract":"In this quickly developed and shifting era of Internet, how to make use of webpage indexing structure or search engines which let information demanders fast and precisely search and extract out advantage information has become extremely important capability in users on the Web. This paper combined a data mining tool SPSS Clementine with the domain ontology to mine out usefully important information from huge datum, and then to employ Java to develop an information recommender for scholars--- Onto Recommender, in which can recommend suitably important information to scholars. The preliminary experiment outcomes proved the reliability and validation of the recommender achieving the regular-level outcomes of information recommendation, and accordingly proved the feasibility of the related techniques proposed in this paper.","PeriodicalId":414085,"journal":{"name":"2009 Ninth International Conference on Hybrid Intelligent Systems","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126626161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The de-noising performance of the traditional wavelet transform depends on the wavelet basis function. In ultrasonic testing, the flaw echo signal exhibits electrical noise and scattering noise, which are sometimes very difficult to eliminate. Considering the distinct distributions of defect signals and noise, de-noising of ultrasonic testing signals with the second generation wavelet transform (SGWT) is proposed. A wavelet basis with the desired characteristics can be obtained by designing the prediction and update coefficients. The mathematical properties of flaw echo signals are studied, and the composition and character of the noise in ultrasonic echo signals are analyzed. The processed detail and approximation coefficients are used to reconstruct the signal: the noise wavelet coefficients are filtered by thresholding at each scale, and the detection echo is reconstructed to enhance the signal-to-noise ratio. Experimental results show that the method improves the signal-to-noise ratio and the separability of signals from different defect classes, and efficiently suppresses energy attenuation and signal distortion. Flaw location accuracy and longitudinal resolution are also improved.
{"title":"Ultrasonic Testing Signal Processing of Weld Flaw Based on the Second Generation Wavelet","authors":"Gaohua Liao, Junmei Xi","doi":"10.1109/HIS.2009.111","DOIUrl":"https://doi.org/10.1109/HIS.2009.111","url":null,"abstract":"The de-noise result of traditional wavelet is related to the wavelet basis function. In the process of ultrasonic testing, flaw echo signal exited the characteristics of electrical noise, scattering noise in ultrasonic testing, which was sometimes very difficult to eliminate. Considering the distinctness of distribution between defects signals and noises, the second generation wavelet transform (SGWT) de-noising ultrasonic testing signal processing was proposed. Wavelet basis function with some special characteristic can be obtained by means of designing prediction and updating coefficient. Study mathematics properties of flaw echo signals and analyze the composition of noises and their characters in ultrasonic echo signals. The processed detail coefficient and the approximate coefficient are used to construct the signal. Wavelet transform coefficients of noise were filtered by changing threshold on the different scale and reconstructed the detection echo in order to enhance signal-to-noise ratio. The Experiments result shows that the method can improve the signal noise ratio and the distinguish ability of signals of different defects classes, and suppress energy attenuation as well as signal distortion efficiently. 
And flaw location accuracy and longitudinal resolution are advanced too.","PeriodicalId":414085,"journal":{"name":"2009 Ninth International Conference on Hybrid Intelligent Systems","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126904423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
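The predict/update structure of the second generation (lifting) wavelet transform can be illustrated with the simplest case, a Haar-like lifting step with perfect reconstruction. This is a generic textbook sketch, not the paper's specially designed prediction and update coefficients:

```python
def lifting_forward(signal):
    """One level of the lifting scheme (Haar-like): split into even/odd
    samples, predict the odds from the evens, then update the evens.
    `signal` must have even length."""
    even, odd = signal[::2], signal[1::2]
    detail = [o - e for e, o in zip(even, odd)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step
    return approx, detail

def lifting_inverse(approx, detail):
    """Undo the lifting steps in reverse order to reconstruct the signal."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out
```

De-noising then amounts to zeroing detail coefficients below a per-scale threshold before calling `lifting_inverse`; designing non-trivial prediction and update filters is what adapts the basis to the flaw echo, as the abstract describes.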
Schema-free query processing has become a key technique for querying XML in the absence of structural information or over multiple data sources. Effectively and efficiently filtering the correct results out of many meaningless ones is an important issue. To address the basic problem of what constitutes a meaningful result, we propose a structure-based model for meaningful result determination. This model covers more correct results than typical models such as Interconnection Relationship and MLCAS. Extensive experiments show that our approach achieves better query quality on real XML documents.
{"title":"Effectively Answering Schema-Free XML Queries over Document-Centric Data","authors":"Xiaoli Li, Xiaoguang Li, Baoyan Song","doi":"10.1109/HIS.2009.267","DOIUrl":"https://doi.org/10.1109/HIS.2009.267","url":null,"abstract":"Schema-free query processing has become a key technique of XML query for the absence of structural information or over multiple data sources. It is an important issue to effectively and efficiently filter out correct results from many meaningless results. To solve the basic problem of what are meaningful results, we propose a structure-based model for meaningful result determination. This model covers more correct results than typical model like Interconnection- Relationship and MLCAS. The extensive experiments show that our approach has better querying quality for real XML documents.","PeriodicalId":414085,"journal":{"name":"2009 Ninth International Conference on Hybrid Intelligent Systems","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127669493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A fast rerouting scheme is proposed to guarantee the QoS performance of the rerouted path when handling link and node failures. The new scheme is based on a deflection routing mechanism and improves on two aspects: first, it can promptly handle single node failures as well as single link failures; second, the QoS metric is taken into account when calculating recovery routes. Simulation results show that the proposed scheme achieves performance close to the optimum found by the constrained Bellman-Ford algorithm.
{"title":"Optimizing the QoS Performance of Fast Rerouting","authors":"X. Li, Zhen Qin, Tao Yu","doi":"10.1109/HIS.2009.277","DOIUrl":"https://doi.org/10.1109/HIS.2009.277","url":null,"abstract":"A fast rerouting scheme is proposed to guarantee the QoS performance of rerouted path when handling link and node failures. The new scheme is based on deflection routing mechanism and improves on two aspects: first, it can promptly handle single node failure as well as single link failure, second, QoS metric is taken into accounted when calculating recovery routes. Simulation results show that the proposed scheme could achieve as optimal performance as constrained Bellman-Ford algorithm.","PeriodicalId":414085,"journal":{"name":"2009 Ninth International Conference on Hybrid Intelligent Systems","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129209202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Based on an analysis of the EPC binary-tree algorithm, an improved algorithm is proposed. Compared with the EPC binary-tree algorithm, the proposed algorithm achieves higher efficiency, higher slot utilization, and greater precision with fewer slots.
{"title":"An Improved Anti-collision Algorithm in RFID System","authors":"Jia-lin Ma, Xu Wei","doi":"10.1109/HIS.2009.241","DOIUrl":"https://doi.org/10.1109/HIS.2009.241","url":null,"abstract":"Based on the analysis of the EPC Binary-tree algorithm, an improved algorithm is proposed. Compared with EPC Binary-tree algorithm the proposed algorithm will get more efficiency, higher utilization, and much more precision with fewer slots.","PeriodicalId":414085,"journal":{"name":"2009 Ninth International Conference on Hybrid Intelligent Systems","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128543397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}