Fuzzy relation linear programming
Ji-hui Yang
Pub Date: 2009-09-22 | DOI: 10.1109/GRC.2009.5255037
In this paper, we first present fuzzy relation linear programming with fuzzy objective coefficients and expand it to conventional fuzzy relation linear programming with crisp objective coefficients. We then give a solution procedure based on a norm of trapezoidal fuzzy numbers. Finally, a numerical example is given for illustration.
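The abstract does not state which norm of a trapezoidal fuzzy number is used; a minimal sketch, assuming the common average-based crisp value of a trapezoid (a, b, c, d), shows how such a norm lets fuzzy objective coefficients be ranked so that a crisp linear program can be solved instead:

```python
def trapezoid_norm(t):
    """Average-based crisp value of a trapezoidal fuzzy number (a, b, c, d).

    This is one common choice, assumed here for illustration; the paper's
    exact norm may differ.
    """
    a, b, c, d = t
    return (a + b + c + d) / 4.0

# Example: rank two fuzzy objective coefficients by their crisp norms.
c1 = (1.0, 2.0, 3.0, 4.0)
c2 = (0.0, 1.0, 1.5, 2.0)
crisp = sorted([c1, c2], key=trapezoid_norm, reverse=True)
```

Once each fuzzy coefficient is collapsed to a crisp value this way, the remaining problem is an ordinary linear program with crisp objective coefficients.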
A kind of synthetic evaluation method based on the attribute computing network
Xiaolin Xu, Guanglin Xu, Jia-li Feng
Pub Date: 2009-09-22 | DOI: 10.1109/GRC.2009.5255044
Based on the input-output relationship of Qualitative Mapping (QM), an attribute computing network model has been created. It puts forward a computing method that uses the input to adjust the qualitative benchmark of the attribute network, which makes pattern recognition possible. A new attribute computing network model combining pattern recognition with synthetic evaluation is then established. First, the qualitative benchmarks of the indexes are obtained by boundary study; then preferences among the indexes are obtained by marking; finally, a set of satisfaction degrees for the indexes is computed and output in descending order, which improves on the old satisfaction degree. A simulation experiment is carried out to validate the theoretical model.
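A hedged sketch of the core idea, not the authors' model: a qualitative mapping tests an input against a benchmark interval (the "qualitative benchmark" the network adjusts), and indexes can be scored and output in descending order of satisfaction. The interval test, decay formula, and index names below are all illustrative assumptions.

```python
def qualitative_mapping(x, alpha, beta):
    """Return True when x falls inside the qualitative benchmark [alpha, beta]."""
    return alpha <= x <= beta

def evaluate(indexes, benchmarks):
    """Score each index against its benchmark and sort by satisfaction degree."""
    degrees = {}
    for name, x in indexes.items():
        alpha, beta = benchmarks[name]
        if alpha <= x <= beta:
            degrees[name] = 1.0            # fully satisfies the benchmark
        else:
            span = beta - alpha
            dist = min(abs(x - alpha), abs(x - beta))
            # Illustrative satisfaction degree decaying outside the interval.
            degrees[name] = max(0.0, 1.0 - dist / span)
    # Output in descending order of satisfaction, as the abstract describes.
    return sorted(degrees.items(), key=lambda kv: kv[1], reverse=True)

ranking = evaluate({"cost": 5.0, "quality": 9.5},
                   {"cost": (4.0, 6.0), "quality": (6.0, 8.0)})
```

The network's learning step would then adjust the benchmark intervals from data ("boundary study"), which this sketch omits.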
Pseudo gradient search for solving nonlinear multiregression based on the Choquet integral
Bo Guo, Wei Chen, Zhenyuan Wang
Pub Date: 2009-09-22 | DOI: 10.1109/GRC.2009.5255133
In some real optimization problems, the objective function may not be differentiable with respect to the unknown parameters at some points, so the gradient does not exist there. Replacing the classical gradient, this paper uses pseudo-gradient search to solve a nonlinear optimization problem: nonlinear multiregression based on the Choquet integral with a linear core. It is a local search method with rapid search speed.
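A minimal sketch of the general idea, not the authors' exact procedure: where the true gradient may fail to exist, a finite-difference "pseudo gradient" still gives a usable local descent direction, and the step size shrinks whenever no improvement is found. The step-control scheme and test function are illustrative.

```python
def pseudo_gradient(f, x, h=1e-6):
    """Central-difference direction; defined even where f is not differentiable."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

def pseudo_gradient_search(f, x0, step=0.5, shrink=0.5, iters=200):
    """Local search along the normalized pseudo gradient with adaptive step."""
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        g = pseudo_gradient(f, x)
        norm = sum(gi * gi for gi in g) ** 0.5 or 1.0
        cand = [xi - step * gi / norm for xi, gi in zip(x, g)]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc      # accept the improving move
        else:
            step *= shrink        # no improvement: shrink the step
    return x, fx

# f(x, y) = |x - 1| + (y + 2)^2 is non-differentiable at x = 1.
best_x, best_f = pseudo_gradient_search(
    lambda v: abs(v[0] - 1) + (v[1] + 2) ** 2, [4.0, 3.0])
```

In the paper's setting, `f` would be the regression residual of the Choquet-integral model in the unknown measure parameters; here a toy non-smooth function stands in for it.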
Probabilistic unsupervised Chinese sentence compression
Jinguang Chen, Tingting He, Zhuoming Gui, Fang Li
Pub Date: 2009-09-22 | DOI: 10.1109/GRC.2009.5255158
Research on sentence compression has been ongoing for many years in other languages, especially English, but research on Chinese sentence compression is rare. In this paper, we describe an efficient probabilistic and syntactic approach to Chinese sentence compression. We introduce the classical noisy-channel approach into Chinese sentence compression and improve it in several ways. Since there is no parallel training corpus for Chinese, we use an unsupervised learning method. This paper also presents a novel bottom-up optimization algorithm that considers both bigram and syntactic probabilities when generating candidate compressed sentences. We evaluate results against manual compressions and a simple baseline; the experiments show the effectiveness of the proposed approach.
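Only the bigram half of the scoring idea is easy to sketch; the paper combines it with syntactic probabilities. A hedged illustration with made-up probabilities, scoring candidate compressions in log space and keeping the most fluent one:

```python
import math

BIGRAM_P = {  # toy bigram probabilities, invented for illustration
    ("<s>", "the"): 0.5, ("the", "cat"): 0.4, ("cat", "sleeps"): 0.3,
    ("sleeps", "</s>"): 0.6, ("the", "sleeps"): 0.01, ("cat", "</s>"): 0.05,
}

def bigram_score(words, floor=1e-6):
    """Log-probability of a candidate compression under the bigram model.

    Unseen bigrams get a small floor probability instead of zero.
    """
    seq = ["<s>"] + words + ["</s>"]
    return sum(math.log(BIGRAM_P.get(bg, floor)) for bg in zip(seq, seq[1:]))

# Candidate compressions of "the cat sleeps"; the grammatical one wins.
candidates = [["the", "cat", "sleeps"], ["the", "sleeps"], ["cat"]]
best = max(candidates, key=bigram_score)
```

In the paper, candidates come from a bottom-up pass over the parse tree and each candidate's score also includes syntactic probabilities, which this sketch omits.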
A novel extracting medical diagnosis rules based on rough sets
Jianwei Xiang, Xia Ke
Pub Date: 2009-09-22 | DOI: 10.1109/GRC.2009.5255051
This paper analyzes how to extract medical diagnosis rules from medical cases. Based on rough set theory, a way of acquiring knowledge is introduced. Using this theory, we analyze the data, propose some possible rules, and derive an optimized probability formula. The implementation steps, which include continuous information discretization, information reduction, decision rule acquisition, decision model generation, etc., are explained through a case study. Finally, the whole knowledge acquisition process is discussed; it can effectively address the bottleneck of knowledge acquisition in expert systems. It also provides a new way to apply artificial intelligence technology in the field of medical diagnosis.
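A minimal sketch of the rough-set rule-extraction step, with an invented toy table: cases are grouped into indiscernibility classes by their condition-attribute values, and a class whose cases all share one decision yields a certain diagnosis rule.

```python
from collections import defaultdict

cases = [
    # (fever, cough) -> diagnosis; toy data, not from the paper
    (("high", "yes"), "flu"),
    (("high", "yes"), "flu"),
    (("low",  "yes"), "cold"),
    (("low",  "yes"), "flu"),   # conflicts with the case above
    (("low",  "no"),  "cold"),
    (("high", "no"),  "flu"),
]

def certain_rules(cases):
    """Map each consistent indiscernibility class to its unique decision."""
    classes = defaultdict(set)
    for cond, decision in cases:
        classes[cond].add(decision)
    # Keep only consistent classes: exactly one decision per class.
    return {cond: next(iter(ds)) for cond, ds in classes.items() if len(ds) == 1}

rules = certain_rules(cases)
```

The inconsistent class ("low", "yes") produces no certain rule; in a full rough-set treatment it would instead contribute a possible rule from the upper approximation, and attribute reduction would first drop redundant condition attributes.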
Project scheduling based on genetic algorithm
Ji Ma
Pub Date: 2009-09-22 | DOI: 10.1109/GRC.2009.5255082
Genetic algorithms have been applied in various application domains and research fields related to biology, chemistry, and especially computer science and engineering. In this paper, we discuss the application of genetic algorithms to project scheduling. The problem is described, the algorithm is outlined, and its strengths and weaknesses are compared. Finally, future trends in this direction are predicted.
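A compact sketch of one way a GA can encode project scheduling (an assumption for illustration, not the paper's encoding): chromosomes are task orders, fitness is total weighted completion time under a serial schedule, and evolution uses order-preserving crossover plus swap mutation.

```python
import random

DURATION = {"A": 3, "B": 2, "C": 4, "D": 1}  # toy project data
WEIGHT   = {"A": 1, "B": 3, "C": 1, "D": 2}

def fitness(order):
    """Total weighted completion time of a serial schedule (lower is better)."""
    t, cost = 0, 0
    for task in order:
        t += DURATION[task]
        cost += WEIGHT[task] * t
    return cost

def crossover(p1, p2):
    """Order crossover: keep a prefix of p1, fill the rest in p2's order."""
    cut = random.randrange(1, len(p1))
    head = p1[:cut]
    return head + [t for t in p2 if t not in head]

def mutate(order):
    i, j = random.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]

def evolve(pop_size=20, generations=60):
    tasks = list(DURATION)
    pop = [random.sample(tasks, len(tasks)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            child = crossover(*random.sample(survivors, 2))
            if random.random() < 0.2:
                mutate(child)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

random.seed(0)
best = evolve()
```

For this single-machine objective the optimum is the weight-to-duration ratio order D, B, A, C (cost 27); real project scheduling adds precedence and resource constraints that the fitness function would have to enforce.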
Fuzzy semi-supervised clustering with target clusters using different additional terms
S. Miyamoto, Mitsuaki Yamazaki, Wataru Hashimoto
Pub Date: 2009-09-22 | DOI: 10.1109/GRC.2009.5255080
This paper discusses a method of semi-supervised fuzzy clustering with target clusters. The method adds two kinds of additional terms to the ordinary fuzzy c-means objective function. One term is the sum of squared differences between the target cluster memberships and the memberships of the solution, whereas the second is the sum of absolute differences of those memberships. While the former admits a closed-form membership solution, the latter requires a more complicated algorithm. However, numerical examples show that the latter method, based on absolute differences, works better.
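The two additional terms themselves are easy to make concrete. A sketch with an invented membership matrix U (solution) and target matrix F, showing the squared-difference term (differentiable, hence the closed-form update) versus the absolute-difference term (non-smooth, hence the more complicated algorithm):

```python
U = [[0.9, 0.1],   # solution memberships: objects x clusters (toy values)
     [0.4, 0.6]]
F = [[1.0, 0.0],   # target (supervised) memberships
     [0.0, 1.0]]

def squared_term(U, F):
    """Sum of squared membership deviations from the targets."""
    return sum((u - f) ** 2 for ru, rf in zip(U, F) for u, f in zip(ru, rf))

def absolute_term(U, F):
    """Sum of absolute membership deviations from the targets."""
    return sum(abs(u - f) for ru, rf in zip(U, F) for u, f in zip(ru, rf))

sq = squared_term(U, F)   # smooth penalty: closed-form updates exist
ab = absolute_term(U, F)  # non-smooth penalty: needs a dedicated algorithm
```

Either term would be added, with a weight, to the usual fuzzy c-means objective sum of membership-weighted squared distances; the full alternating-optimization loop is omitted here.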
Different core attributes's comparison and analysis
Jun Yang, Zhangyan Xu
Pub Date: 2009-09-22 | DOI: 10.1109/GRC.2009.5255032
The key to attribute reduction based on rough sets is finding the core attributes. Most existing work is based on Hu's discernibility matrix. To date, there are three kinds of core attributes: Hu's core based on the discernibility matrix (denoted Core1(C)), the core based on the positive region (denoted Core2(C)), and the core based on information entropy (denoted Core3(C)). Some researchers have pointed out that these three kinds of cores are not equivalent to each other. Based on these three kinds of core attributes, we first propose three kinds of simplified discernibility matrices and their corresponding cores, denoted SDCore1(C), SDCore2(C), and SDCore3(C), respectively. We then prove that Core1(C)=SDCore1(C), Core2(C)=SDCore2(C), and Core3(C)=SDCore3(C). Finally, based on the three proposed simplified discernibility matrices and their corresponding cores, we prove that Core2(C)⊆Core3(C)⊆Core1(C).
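A sketch of the positive-region core (the Core2(C)-style definition), on an invented decision table: an attribute belongs to the core iff dropping it shrinks the positive region of the decision.

```python
from collections import defaultdict

# Toy decision table: condition attribute values (a1, a2, a3) and a decision d.
TABLE = [
    ((0, 0, 1), 0),
    ((0, 1, 1), 1),
    ((1, 0, 1), 0),
    ((1, 1, 1), 1),
]

def positive_region(table, attrs):
    """Objects whose indiscernibility class (w.r.t. attrs) is decision-consistent."""
    classes = defaultdict(list)
    for i, (vals, d) in enumerate(table):
        key = tuple(vals[a] for a in attrs)
        classes[key].append(i)
    pos = set()
    for members in classes.values():
        if len({table[i][1] for i in members}) == 1:
            pos.update(members)
    return pos

all_attrs = [0, 1, 2]
full = positive_region(TABLE, all_attrs)
# An attribute is in the core iff removing it changes the positive region.
core = [a for a in all_attrs
        if positive_region(TABLE, [b for b in all_attrs if b != a]) != full]
```

Here only a2 (index 1) is in the core: it alone determines the decision, while a1 and a3 are dispensable. The discernibility-matrix and entropy cores would be computed from different definitions, which is exactly why the paper's containment Core2(C) ⊆ Core3(C) ⊆ Core1(C) is non-trivial.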
Concept analysis in web informatics - 5th GrC model - Using ordered granules
T. Lin
Pub Date: 2009-09-22 | DOI: 10.1109/GRC.2009.5255002
The 5th GrC model is the formal model specified in the category of sets. It is a theory of ordered granules; namely, granules are ordered "subsets" of the universe. We extract a 5th GrC model from a set of web pages. A granule is a highly frequent sequence of keywords; it is a tuple in a relation and naturally carries some concept expressed in the web pages. The concept analysis in this paper concerns true human concepts expressed in web documents.
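The notion of a granule as a highly frequent keyword sequence can be sketched directly (the pages, n-gram length, and support threshold below are illustrative, not from the paper): count ordered keyword n-grams across pages and keep those above a support threshold.

```python
from collections import Counter

pages = [  # toy tokenized web pages
    ["granular", "computing", "rough", "sets"],
    ["granular", "computing", "fuzzy", "sets"],
    ["granular", "computing", "models"],
]

def frequent_sequences(pages, n=2, min_support=2):
    """Ordered keyword n-grams occurring in at least min_support positions."""
    counts = Counter(tuple(p[i:i + n]) for p in pages
                     for i in range(len(p) - n + 1))
    return {seq for seq, c in counts.items() if c >= min_support}

granules = frequent_sequences(pages)
```

Each surviving sequence, such as ("granular", "computing"), is an ordered tuple and so fits the model's view of granules as ordered "subsets" of the keyword universe that carry a human concept.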
Query expansion based on folksonomy tag co-occurrence analysis
Song Jin, Hongfei Lin, Sui Su
Pub Date: 2009-09-22 | DOI: 10.1109/GRC.2009.5255110
In traditional query expansion techniques, expansion terms are chosen based on their weights in the relevant documents. However, this kind of approach does not take into account the semantic relationship between the original query terms and the expansion terms. Folksonomy is a social service in Web 2.0 that provides a large amount of social annotations. As the core of folksonomy, tags are high-quality descriptors of information contents and topics. Moreover, different tags describing the same information resource are semantically related to some extent. In this paper, we propose a query expansion method that utilizes tag co-occurrence information to select the most appropriate expansion terms. Experimental results show that our tag co-occurrence-based query expansion technique consistently improves retrieval performance compared with a no-expansion baseline. This means the expansion terms we select are semantically related to the original query, and folksonomy tags can be a new source of expansion terms.
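A minimal sketch of the co-occurrence selection step, with invented bookmark data: count how often each tag is assigned to the same resources as the query tag, then take the top-k co-occurring tags as expansion terms.

```python
from collections import Counter

bookmarks = [  # toy tag sets, one per bookmarked resource
    {"python", "programming", "tutorial"},
    {"python", "programming", "web"},
    {"python", "snake", "biology"},
    {"programming", "java"},
]

def expand(query_tag, bookmarks, k=2):
    """Top-k tags co-occurring with query_tag across bookmarked resources."""
    cooc = Counter()
    for tags in bookmarks:
        if query_tag in tags:
            cooc.update(tags - {query_tag})
    return [t for t, _ in cooc.most_common(k)]

terms = expand("python", bookmarks)
```

Because "programming" co-occurs with "python" on two resources while "snake" does so only once, the expansion favors the sense users actually bookmark together; the paper would then append the selected terms to the original query before retrieval.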