The accuracy with which a computer understands a natural-language query is key to the quality of a natural language interface. Through a study of Chinese query sentences, including imperative sentences, special questions, yes-or-no questions, positive-negative questions, and choice questions, the relations among composing concepts, logical concepts, and standard concepts are analyzed and established. The concept of the query target is decomposed into three steps: the direct query target step, the logical inference target step, and the comparison-judgment target step; the relations among these three steps are also studied. Recognition algorithms for the query semantic template ID, the sentence type ID, and the three query-target steps above are then constructed, establishing a basis for generating the SELECT clause of an SQL statement.
{"title":"A Study of the Query Target of the Chinese Query Sentence","authors":"Fengbin Zheng, Xiajiong Shen, Qiang Ge","doi":"10.1109/GrC.2007.91","DOIUrl":"https://doi.org/10.1109/GrC.2007.91","url":null,"abstract":"Accuracy of computer understanding query of natural language is key to the quality of the natural language interface. Through the study of the Chinese query sentences, which include the imperative sentences and special questions, the yes-or-no questions, the positive and negative questions, choosing questions etc, the relation of composing conception, logical conception and standard conception is studied and built. The conception of the query target is decomposed into three steps, which are direct query target step, logic discursion target step and compare judge target step, the relation of the three steps also has been studied .The query semantic template ID and sentence type ID recognition and query sentence, above three steps query target recognize arithmetic are constructed, so the base of product the SELECT'S clause of SQL's sentence is established.","PeriodicalId":259430,"journal":{"name":"2007 IEEE International Conference on Granular Computing (GRC 2007)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114151493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
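The end product described above, the SELECT clause of an SQL statement, can be illustrated with a minimal sketch. The function below simply assembles a statement from already-recognized query-target attributes; the template and sentence-type recognition for Chinese sentences is the paper's actual contribution and is not reproduced here, and all names are illustrative.

```python
def build_select(targets, table, conditions=None):
    """Assemble a SELECT statement from recognized query targets.

    Illustrative only: `targets` stands for the attributes produced by
    the paper's three query-target recognition steps, and `conditions`
    for restrictions recognized elsewhere in the sentence.
    """
    sql = "SELECT {} FROM {}".format(", ".join(targets), table)
    if conditions:
        sql += " WHERE " + " AND ".join(conditions)
    return sql
```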
Yong Hu, Juhua Chen, Jiaxing Huang, Mei Liu, Kang Xie
Uncertainty during software project development often brings huge risks to contractors and clients. An effective method for predicting the cost and quality of software projects from facts available at the start of the project, such as project characteristics and the cooperation capability of the two sides, can help in finding ways to reduce these risks. The Bayesian belief network (BBN) is a good tool for analyzing uncertain consequences, but it is difficult to produce a precise network structure and conditional probability table. In this paper, we build the network structure by the Delphi method for conditional probability table learning, and continuously update the probability table and the confidence levels of the nodes from application cases, which gives the evaluation network learning abilities and lets it evaluate software development risks in organizations more accurately. This paper also introduces the EM algorithm to handle hidden nodes arising from the variety of software projects.
{"title":"Analyzing Software System Quality Risk Using Bayesian Belief Network","authors":"Yong Hu, Juhua Chen, Jiaxing Huang, Mei Liu, Kang Xie","doi":"10.1109/GrC.2007.83","DOIUrl":"https://doi.org/10.1109/GrC.2007.83","url":null,"abstract":"Uncertainty during the period of software project development often brings huge risks to contractors and clients. Developing an effective method to predict the cost and quality of software projects based on facts such as project characteristics and two-side cooperation capability at the beginning of the project can aid us in finding ways to reduce the risks. Bayesian belief network (BBN) is a good tool for analyzing uncertain consequences, but it is difficult to produce precise network structure and conditional probability table. In this paper, we build up the network structure by Delphi method for conditional probability table learning, and learn to update the probability table and confidence levels of the nodes continuously according to application cases, which would subsequently make the evaluation network to have learning abilities, and to evaluate the software development risks in organizations more accurately. This paper also introduces the EM algorithm to enhance the ability in producing hidden nodes caused by variant software projects.","PeriodicalId":259430,"journal":{"name":"2007 IEEE International Conference on Granular Computing (GRC 2007)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114310181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
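The continuous updating of a conditional probability table from application cases can be sketched with Dirichlet-style pseudo-counts: the expert-elicited table supplies the initial counts, and each observed case increments them, so the table keeps learning. This is a minimal stand-in for the paper's scheme, not the authors' implementation; all names are illustrative.

```python
class CPTNode:
    """Conditional probability table for one BBN node, updated from cases."""

    def __init__(self, states, prior_counts):
        # prior_counts: {parent_config: {state: pseudo_count}} elicited
        # from experts (e.g. via a Delphi process).
        self.states = states
        self.counts = {pc: dict(sc) for pc, sc in prior_counts.items()}

    def update(self, parent_config, observed_state):
        """Fold one observed application case into the table."""
        self.counts[parent_config][observed_state] += 1

    def prob(self, parent_config, state):
        """Current posterior-mean estimate of P(state | parent_config)."""
        row = self.counts[parent_config]
        return row[state] / sum(row.values())
```

With prior counts {ok: 8, fail: 2} and two observed failures, the estimate of "fail" moves from 0.2 to 4/12.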
By combining the Runge-Kutta methods with functions defined within the framework of multilayered granular domains, a nonlinear continuous-time dynamic process can be efficiently modeled. The multiple layers allow the construction of models spanning different granular sizes, to be used for applications that require different levels of precision and efficiency. In this paper, we discuss a particular implementation of this approach using multilinear interpolation functions.
{"title":"Modeling Dynamic Processes Using Granular Runge-Kutta Methods","authors":"T. Co","doi":"10.1109/GrC.2007.100","DOIUrl":"https://doi.org/10.1109/GrC.2007.100","url":null,"abstract":"By incorporating the Runge-Kutta methods with functions defined within the frameworks of multilayered granular domains, a nonlinear continuous-time dynamic process can be efficiently modeled. The several layers allow for the construction of models spanning different granular size to be used for applications that require different levels of precision and efficiency. In this paper, we discuss a particular implementation of this approach using multilinear interpolation functions.","PeriodicalId":259430,"journal":{"name":"2007 IEEE International Conference on Granular Computing (GRC 2007)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123949730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
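A one-dimensional sketch of the idea, assuming the vector field is tabulated on a grid and evaluated by piecewise-linear interpolation (the 1-D case of multilinear interpolation), then advanced with a classical fourth-order Runge-Kutta step:

```python
import bisect

def interp1(xs, ys, x):
    """Piecewise-linear interpolation of tabulated f(x); xs must be sorted."""
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

def rk4_step(f, x, h):
    """One classical RK4 step for dx/dt = f(x)."""
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```

For dx/dt = -x tabulated on a grid (a linear field, so the interpolant is exact), one RK4 step from x = 1 with h = 0.1 reproduces the RK4 Taylor value 0.9048375.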
One of the most important properties of an autonomous vehicle is reliability, which here means the ability to detect a fault by itself and then isolate it. This paper combines a neuro-fuzzy model with fault hypothesis testing and puts forward a neuro-fuzzy model-based cumulative-sum (NFCUSUM) algorithm. It forms hypotheses about the faults and raises an alarm when the probability of the fault case is greater than the probability of the normal case; under the fault case the system is said to have a fault, otherwise it is normal. The core of the NFCUSUM algorithm is a logical fault detector (decision function) that indicates whether a fault has occurred at a given sample time. The design idea of the decision function is that the system is judged to have suffered a fault, and an alarm is raised, when the value of the decision function exceeds a preset threshold; otherwise the system is in normal mode. Simulation results in Matlab show that the logical fault detector designed by the NFCUSUM algorithm is practical, efficient, and robust.
{"title":"Neuro-Fuzzy Model-Based CUSUM Method Application in Fault Detection on an Autonomous Vehicle","authors":"Jun Xie, Gaowei Yan, Keming Xie, T. Y. Lin","doi":"10.1109/GrC.2007.148","DOIUrl":"https://doi.org/10.1109/GrC.2007.148","url":null,"abstract":"One of the most important properties of autonomous vehicle is the reliability which means to detect the fault by itself and then isolate the fault. This paper combined the neural-fuzzy model with the fault hypothesis test, and put forward a neuro-fuzzy model-based Cumulative-Sum (NFCUSUM) algorithm. It gave the assumptions aiming at the faults and set the alarm when the probability of the fault case was greater than the probability of the normal case. Under the fault case the system is called to have a fault, otherwise it is normal. The core of the NFCUSUM algorithm is to find a logic fault detector (decision function) which expresses whether the fault occurs at one sample time. The design idea of the decision function is that the system is suffered a fault and gives alarm when the value of the decision function is over the preset threshold; otherwise the system is in normal mode. The simulation results in Matlab show that the logic fault detector designed by the NFCUSUM algorithm in this paper is practical, efficient and robust.","PeriodicalId":259430,"journal":{"name":"2007 IEEE International Conference on Granular Computing (GRC 2007)","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116665032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
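The decision-function idea can be illustrated with a classical one-sided CUSUM on model residuals; in the paper the residuals come from the neuro-fuzzy model, which is not reproduced in this sketch, and the drift and threshold values below are arbitrary.

```python
def cusum_detect(residuals, drift, threshold):
    """One-sided CUSUM decision function on a residual sequence.

    S_k = max(0, S_{k-1} + r_k - drift); an alarm is raised whenever
    S_k exceeds the preset threshold.
    """
    s, alarms = 0.0, []
    for r in residuals:
        s = max(0.0, s + r - drift)
        alarms.append(s > threshold)
    return alarms
```

With small residuals the statistic stays at zero; a sustained bias (a fault) accumulates until the threshold trips a few samples after fault onset.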
One of the important ideas in mathematical logic is the formalization of practical statements. This paper adopts this method to formalize granular computing. Based on this logical method, formulas of a particular kind are constructed on a universal set U. The structure consisting of the universal set and the set of all formulas is defined as a granular space. Through a formula on the granular space, a semantic set can be separated from U^n (n ≥ 1). This yields the definition of granules on the granular space. On the basis of the granular space and the granules, granular computing is defined through correspondences that connect some granules with another granule or with an object. This achieves the goal of formalizing granular computing.
{"title":"A Logical Method of Formalization for Granular Computing","authors":"Lin Yan, Qing Liu","doi":"10.1109/GrC.2007.18","DOIUrl":"https://doi.org/10.1109/GrC.2007.18","url":null,"abstract":"One of the important thoughts in mathematical logic is the way of formalization for practical statements. This paper just adopts the method to make formalization for granular computing. Based on this logical method, formulas of a particular kind are constructed on a universal set U. The structure consisting of the universal set, and the all-formula set, is defined as a granular space. Through a formula on the granular space, a semantic set can be separated from Un (nges1). This derives the definition of granules on the granular space. On the basis of the granular space and the granules, granular computing is defined through correspondences which connect some granules with another granule or with an object. This arrives at the goal of formalization for granular computing.","PeriodicalId":259430,"journal":{"name":"2007 IEEE International Conference on Granular Computing (GRC 2007)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116971847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
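The semantic set separated from U^n by a formula has a direct computational reading. Assuming formulas are represented as Boolean predicates over n-tuples (a representational choice of this sketch, not of the paper), a granule can be enumerated as:

```python
from itertools import product

def granule(formula, U, n):
    """The semantic set a formula separates from U^n: all n-tuples
    of elements of U that satisfy the formula."""
    return {t for t in product(U, repeat=n) if formula(*t)}
```

For example, over U = {0, 1, 2} the formula x &lt; y separates the granule {(0,1), (0,2), (1,2)} from U^2.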
Rough set theory is based on information granules. This paper studies information granules based on a decision logic language in information tables. Theorems for determining definable granules and definable partitions are given. Furthermore, the paper defines the definable upper and lower approximations of indefinable granules and studies their properties. Through the descriptions of these definable upper and lower approximations, we propose a way of describing indefinable granules. As a result, we can obtain explicit and useful information about indefinable granules. This is thus an approach to discovering knowledge hidden in indefinable granules.
{"title":"A Study of Information Granules","authors":"Xiaosheng Wang","doi":"10.1109/GrC.2007.80","DOIUrl":"https://doi.org/10.1109/GrC.2007.80","url":null,"abstract":"Rough set theory is based on information granules. This paper studies information granules based on a decision logic language in information tables. In this paper, the theorems of determining definable granules and definable partitions are given. Furthermore, this paper gives the definitions of the definable upper and lower approximations of indefinable granules, and studies their properties. Through the descriptions of the definable upper and lower approximations, we propose a way of describing indefinable granules. As a result, we can obtain some explicit and useful information on indefinable granules. This is then an approach to discover knowledge hidden in indefinable granules.","PeriodicalId":259430,"journal":{"name":"2007 IEEE International Conference on Granular Computing (GRC 2007)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123223554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
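The definable lower and upper approximations of an indefinable granule follow the standard rough-set construction: the union of partition blocks contained in the granule, and the union of blocks intersecting it. A minimal sketch, with the partition given explicitly as a list of blocks:

```python
def approximations(partition, target):
    """Definable lower/upper approximations of a (possibly indefinable)
    granule `target` with respect to a partition of the universe."""
    lower, upper = set(), set()
    for block in partition:
        if block <= target:   # block entirely inside the granule
            lower |= block
        if block & target:    # block meets the granule
            upper |= block
    return lower, upper
```

For the partition {1,2} | {3,4} | {5,6} and the indefinable granule {1,2,3}, the lower approximation is {1,2} and the upper is {1,2,3,4}.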
An automatic classification system is proposed for graph patterns whose node and edge labels belong to continuous vector spaces. An algorithm based on inexact matching techniques is used to discover recurrent subgraphs in the original patterns; their synthesized prototypes are called symbols. Each original graph is then represented by a vector signature describing it in terms of the symbol instances found in it. This signature is called a symbolic histogram. A genetic algorithm is employed for the automatic selection of the relevant symbols, while a K-nn classifier is used as the core inductive inference engine. Performance tests have been carried out using algorithmically generated synthetic data sets.
{"title":"Automatic Classification of Graphs by Symbolic Histograms","authors":"G. D. Vescovo, A. Rizzi","doi":"10.1109/GrC.2007.140","DOIUrl":"https://doi.org/10.1109/GrC.2007.140","url":null,"abstract":"An automatic classification system coping with graph patterns with node and edge labels belonging to continuous vector spaces is proposed. An algorithm based on inexact matching techniques is used to discover recurrent subgraphs in the original patterns, the synthesized prototypes of which are called symbols. Each original graph is then represented by a vector signature describing it in terms of the presence of symbol instances found in it. This signature is called symbolic histogram. A genetic algorithm is employed for the automatic selection of the relevant symbols, while a K-nn classifier is used as the core inductive inference engine. Performance tests have been carried out using algorithmically generated synthetic data sets.","PeriodicalId":259430,"journal":{"name":"2007 IEEE International Conference on Granular Computing (GRC 2007)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124868574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
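Assuming the inexact subgraph matching step has already produced, for each graph, the list of symbol instances found in it (that matching is the hard part and is not reproduced here), the symbolic histogram and the k-nn stage can be sketched as:

```python
from collections import Counter

def symbolic_histogram(symbol_hits, alphabet):
    """Vector signature of a graph: the count of each symbol's
    instances found in it, in a fixed alphabet order."""
    counts = Counter(symbol_hits)
    return [counts[s] for s in alphabet]

def knn_label(train, hist, k=1):
    """k-nn in histogram space over (histogram, label) training pairs."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    ranked = sorted(train, key=lambda item: dist(item[0], hist))
    labels = [lab for _, lab in ranked[:k]]
    return max(set(labels), key=labels.count)
```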
H. Abe, S. Tsumoto, M. Ohsaki, H. Yokoi, Takahira Yamaguchi
In this paper, we evaluate the learning costs of rule evaluation models based on objective indices for an iterative rule evaluation support method in data mining post-processing. Post-processing of mined results is one of the key steps in a data mining process. However, it is difficult for human experts to find valuable knowledge among the several thousand rules obtained from a large, noisy dataset. To reduce the cost of this rule evaluation task, we have developed a rule evaluation support method with rule evaluation models, which learn from objective indices for mined classification rules and from a human expert's evaluation of each rule. To estimate the learning costs of predicting human interest from objective rule evaluation indices, we have carried out two case studies with actual data mining results that include different phases of human interest. Based on these results, we discuss the relationship between the performance of the learning algorithms and the human hypothesis construction process.
{"title":"Evaluation of Learning Costs of Rule Evaluation Models Based on Objective Indices to Predict Human Hypothesis Construction Phases","authors":"H. Abe, S. Tsumoto, M. Ohsaki, H. Yokoi, Takahira Yamaguchi","doi":"10.1109/GrC.2007.155","DOIUrl":"https://doi.org/10.1109/GrC.2007.155","url":null,"abstract":"In this paper, we present an evaluation of learning costs of rule evaluation models based on objective indices for an iterative rule evaluation support method in data mining post-processing. Post-processing of mined results is one of the key processes in a data mining process. However, it is difficult for human experts to find out valuable knowledge from several thousands of rules obtained with a large dataset with noises. To reduce the costs in such rule evaluation task, we have developed the rule evaluation support method with rule evaluation models, which learn from objective indices for mined classification rules and evaluations by a human expert for each rule. To estimate learning costs for predicting human interests with objective rule evaluation indices, we have done the two case studies with actual data mining results, which include different phases of human interests. With regarding to these results, we discuss about the relationship between performances of learning algorithms and human hypothesis construction process.","PeriodicalId":259430,"journal":{"name":"2007 IEEE International Conference on Granular Computing (GRC 2007)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117230292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
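A rule evaluation model of the kind described maps a rule's objective-index vector to the expert's evaluation. The simplest learner fitting that description is nearest-neighbour, sketched below; the paper compares several learning algorithms, and this stand-in, with made-up index values, is only illustrative.

```python
def predict_interest(train, query):
    """1-NN over objective-index vectors: predict the expert's
    evaluation of a new rule from (index_vector, label) pairs."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda item: d2(item[0], query))[1]
```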
Rough support vector machines (RSVMs) supplement conventional support vector machines (SVMs) by providing a better representation of the boundary region. Increasing interest has been paid to the theoretical development of RSVMs, which has already led to modifications of existing SVM implementations into RSVMs. This paper shows how to extend the use of precision and recall from an SVM implementation to an RSVM implementation. Our approach is demonstrated in practice with the help of Gist, a popular SVM implementation.
{"title":"Precision and Recall in Rough Support Vector Machines","authors":"P. Lingras, C. Butz","doi":"10.1109/GrC.2007.77","DOIUrl":"https://doi.org/10.1109/GrC.2007.77","url":null,"abstract":"Rough support vector machines (RSVMs) supplement conventional support vector machines (SVMs) by providing a better representation of the boundary region. Increasing interest has been paid to the theoretical development of RSVMs, which has already lead to a modification of existing SVM implementations as RSVMs. This paper shows how to extend the use of precision and recall from a SVM implementation to a RSVM implementation. Our approach is demonstrated in practice with the help of Gist, a popular SVM implementation.","PeriodicalId":259430,"journal":{"name":"2007 IEEE International Conference on Granular Computing (GRC 2007)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121355029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
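One way to extend precision and recall to the rough setting, sketched here under the assumption that decision scores falling inside a boundary band are withheld rather than forced to a class (the paper works with the Gist implementation; this sketch is independent of it):

```python
def rough_svm_metrics(scores, labels, band):
    """Precision/recall over confident predictions only: scores with
    |score| < band fall in the boundary region and are withheld."""
    tp = fp = fn = 0
    for s, y in zip(scores, labels):
        if abs(s) < band:          # boundary region: no definite prediction
            continue
        pred = 1 if s > 0 else -1
        if pred == 1 and y == 1:
            tp += 1
        elif pred == 1 and y == -1:
            fp += 1
        elif pred == -1 and y == 1:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```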
J. R. Castro, O. Castillo, P. Melin, Antonio Rodríguez Díaz
In this paper, a class of interval type-2 fuzzy neural networks (IT2FNN) is proposed, which is functionally equivalent to interval type-2 fuzzy inference systems. The computational process envisioned for a fuzzy-neural system is as follows: it starts with the development of an "interval type-2 fuzzy neuron", which is based on biological neural morphologies, followed by learning mechanisms. We describe how to decompose the parameter set such that the hybrid learning rule of adaptive networks can be applied to the IT2FNN architecture.
{"title":"Hybrid Learning Algorithm for Interval Type-2 Fuzzy Neural Networks","authors":"J. R. Castro, O. Castillo, P. Melin, Antonio Rodríguez Díaz","doi":"10.1109/GrC.2007.116","DOIUrl":"https://doi.org/10.1109/GrC.2007.116","url":null,"abstract":"In this paper, a class of interval type-2 fuzzy neural networks (IT2FNN) is proposed, which is functionally equivalent to interval type-2 fuzzy inference systems. The computational process envisioned for a fuzzy-neural system is as follows: it starts with the development of an \"interval type-2 fuzzy neuron\", which is based on biological neural morphologies, followed by learning mechanisms. We describe how to decompose the parameter set such that the hybrid learning rule of adaptive networks can be applied to the IT2FNN architecture.","PeriodicalId":259430,"journal":{"name":"2007 IEEE International Conference on Granular Computing (GRC 2007)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115996394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
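The "interval type-2 fuzzy neuron" rests on membership functions that return an interval rather than a single grade. A common construction is a Gaussian with an uncertain mean, assumed here for illustration; the paper's exact neuron model may differ.

```python
import math

def it2_gauss(x, m1, m2, sigma):
    """Interval type-2 Gaussian membership with mean uncertain in [m1, m2].

    Returns (lower, upper): the upper MF is 1 between the two means and
    follows the nearer Gaussian outside; the lower MF is the pointwise
    minimum of the two extreme Gaussians.
    """
    g = lambda m: math.exp(-0.5 * ((x - m) / sigma) ** 2)
    if x < m1:
        upper = g(m1)
    elif x > m2:
        upper = g(m2)
    else:
        upper = 1.0
    lower = min(g(m1), g(m2))
    return lower, upper
```

At x = m1 with m1 = 0, m2 = 1, sigma = 1, the grade is the interval [exp(-0.5), 1].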