"Detecting similarities and differences in images using the PFF and LGG approaches"
N. Bourbakis
14th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2002). Pub Date: 2002-11-04. DOI: 10.1109/TAI.2002.1180825
This paper presents two methods for comparing images and evaluating the visibility of artifacts due to hidden information, changes, or noise. The first method is based on pixel flow functions (PFF), which detect changes in images by projecting pixel values vertically, horizontally, and diagonally. These projections create "functions" derived from the average pixel values summed horizontally, vertically, and diagonally; the functions serve as image signatures, and comparing the signatures of two images reveals their differences. The second method is based on a heuristic graph model, the local-global graph (LGG), for evaluating the visibility of modifications in digital images. The LGG segments each image and compares the segments while thresholding the differences in their attributes. Both methods have been implemented in C++, and their performance is presented.
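The projection idea can be sketched in a few lines. This is a hypothetical illustration of signature-style comparison, not the authors' C++ implementation; the function names and the L1 distance between signatures are my own choices.

```python
import numpy as np

def pixel_flow_signature(img):
    """Illustrative PFF-style signature: the mean pixel value along each
    row, each column, and each diagonal in both diagonal directions."""
    img = np.asarray(img, dtype=float)
    horizontal = img.mean(axis=1)            # one value per row
    vertical = img.mean(axis=0)              # one value per column
    h, w = img.shape
    main_diag = np.array([img.diagonal(k).mean() for k in range(-h + 1, w)])
    anti_diag = np.array([np.fliplr(img).diagonal(k).mean() for k in range(-h + 1, w)])
    return horizontal, vertical, main_diag, anti_diag

def signature_distance(a, b):
    """Compare two same-sized images by the summed L1 distance of their signatures."""
    return sum(np.abs(sa - sb).sum()
               for sa, sb in zip(pixel_flow_signature(a), pixel_flow_signature(b)))
```

Identical images yield distance zero; any change to a pixel perturbs at least one row, column, and diagonal average, so the distance becomes positive.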
"Function approximation using robust wavelet neural networks"
Sheng-Tun Li, Shu-Ching Chen
14th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2002). Pub Date: 2002-11-04. DOI: 10.1109/TAI.2002.1180842
Wavelet neural networks (WNN) have recently attracted great interest because of their advantages over radial basis function networks (RBFN): both are universal approximators, but WNN achieve faster convergence and cope better with the so-called "curse of dimensionality". In addition, WNN generalize RBFN. However, the generalization performance of a WNN trained by the least-squares approach deteriorates when outliers are present. In this paper, we propose a robust wavelet neural network based on the theory of robust regression for handling outliers in function approximation. By adaptively adjusting the number of training data involved during training, the efficiency loss in the presence of Gaussian noise is accommodated. Simulation results validate the generalization ability and efficiency of the proposed network.
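The robust-regression principle underlying the paper, that outliers should receive reduced weight rather than dominate a least-squares fit, can be illustrated with a simple Huber-type M-estimator of location. This is a generic robust-statistics sketch, not the paper's wavelet network; `huber_location`, the tuning constant, and the data are illustrative.

```python
import numpy as np

def huber_location(x, c=1.345, iters=50):
    """Huber M-estimate of location via iteratively reweighted averaging:
    points with large residuals get weight proportional to 1/|residual|."""
    mu = np.median(x)                        # robust starting point
    for _ in range(iters):
        r = x - mu
        s = np.median(np.abs(r)) or 1.0      # robust scale estimate
        w = np.clip(c * s / np.maximum(np.abs(r), 1e-12), None, 1.0)
        mu = (w * x).sum() / w.sum()         # weighted mean downweights outliers
    return mu
```

On data with one gross outlier, the plain mean is dragged far from the bulk of the points, while the Huber estimate stays near them.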
"Local search algorithm to improve the local search"
M. Tounsi, P. David
14th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2002). Pub Date: 2002-11-04. DOI: 10.1109/TAI.2002.1180836
In this paper, we present a new cooperative framework that applies two local search algorithms in succession to solve constraint satisfaction and optimization problems. Our technique uses one local search algorithm as the mechanism that diversifies the other's search, instead of relying on built-in diversification mechanisms; we thereby avoid tuning multiple parameters to escape from local optima. The technique improves on existing methods and is generic whenever the given problem can be expressed as a constraint satisfaction problem. We show how a local search algorithm can be used to diversify the search when solving real examination timetabling problems, and we describe how it can assist any other local search algorithm in escaping local optimality.
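The cooperative scheme can be pictured with a toy version: a greedy hill climber that stalls in local optima, and a second local search (here just a short random walk) used purely to diversify it. All names and the alternation schedule are illustrative, not the authors' framework.

```python
import random

def hill_climb(x, f, neighbors):
    """Greedy local search: repeatedly move to the best improving neighbor."""
    while True:
        best = min(neighbors(x), key=f, default=x)
        if f(best) >= f(x):
            return x                       # stuck in a local optimum
        x = best

def diversify(x, neighbors, walk_len=5, rng=random):
    """Second local search used purely for diversification: a short random walk."""
    for _ in range(walk_len):
        x = rng.choice(neighbors(x))
    return x

def cooperative_search(x0, f, neighbors, rounds=30, seed=0):
    """Alternate the two searches, keeping the best solution found so far."""
    rng = random.Random(seed)
    x = best = hill_climb(x0, f, neighbors)
    for _ in range(rounds):
        x = hill_climb(diversify(x, neighbors, rng=rng), f, neighbors)
        if f(x) < f(best):
            best = x
    return best
```

By construction the cooperative result is never worse than plain hill climbing, and the random-walk phase gives it a chance to leave the first basin of attraction without any problem-specific tuning.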
"DSatz: a directional SAT solver for planning"
M. Iwen, A. Mali
14th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2002). Pub Date: 2002-11-04. DOI: 10.1109/TAI.2002.1180805
SAT-based planners have been characterized as disjunctive planners that maintain a compact representation of the search space of action sequences. Several ideas from refinement planners (conjunctive planners) have been used to improve the performance of SAT-based planners or to better understand planning as SAT. One important lesson from refinement planning is that backward search, being goal-directed, can be more efficient than forward search. Another lesson is that bidirectional search is generally not efficient, because the forward and backward searches can miss each other. Though the effect of the direction of plan refinement (forward, backward, bidirectional, etc.) on the efficiency of plan synthesis has been deeply investigated in refinement planning, the effect of directional solving of SAT encodings has not been investigated in depth. We solved several propositional encodings of benchmark planning problems with DSatz, a modified form of the systematic SAT solver Satz. DSatz offers 21 options for solving a SAT encoding of a planning problem; the options assign truth values to action and/or fluent variables in forward, backward, or both directions, in an intermittent or non-intermittent style. Our investigation shows that backward search on plan encodings (assigning values to fluent variables first, starting with the goal) is very inferior, and that bidirectional and forward solving options are far more efficient than the others. Our empirical results show that the efficient systematic solver Satz, which exploits variable dependencies, can be significantly enhanced with our variable ordering heuristics, which are also computationally very cheap to apply. Our main results are that directionality does matter in solving SAT encodings of planning problems and that certain directional solving options are superior to others.
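The effect of branching direction can be seen even in a toy DPLL solver whose branching order is a parameter, loosely mirroring DSatz's directional options: list action variables first for "forward" solving, fluent variables first for "backward". The clause encoding (DIMACS-style signed integers) and the tiny example are illustrative; DSatz itself is a modified Satz, not this sketch.

```python
def dpll(clauses, order, assignment=None):
    """Minimal DPLL SAT solver branching on variables in the given order.
    `clauses` are lists of signed ints; `order` must cover all variables."""
    assignment = dict(assignment or {})
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause):
            continue                                  # clause already satisfied
        rest = [lit for lit in clause if abs(lit) not in assignment]
        if not rest:
            return None                               # clause falsified
        simplified.append(rest)
    if not simplified:
        return assignment                             # all clauses satisfied
    var = next(v for v in order if v not in assignment)
    for value in (True, False):                       # branch on the chosen variable
        result = dpll(simplified, order, {**assignment, var: value})
        if result is not None:
            return result
    return None
```

Both directions find a model of a satisfiable encoding; what differs in practice (and what the paper measures) is how much of the search tree each ordering explores.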
"A neural-network approach to modeling and analysis"
Chen-Yuan Chen, Cheng-Wu Chen, W. Chiang, Jing-Dong Hwang
14th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2002). Pub Date: 2002-11-04. DOI: 10.1109/TAI.2002.1180843
Backpropagation networks are widely applicable in modeling. This study is concerned with the stability of a neural network (NN) system consisting of several subsystems, each represented by an NN model. In this paper, the dynamics of each NN model is converted into a linear inclusion representation. Based on these representations, stability conditions are derived via Lyapunov's direct method to guarantee the asymptotic stability of NN systems.
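The linear-inclusion argument sketched in the abstract typically takes the following standard form (shown here as a common formulation, not necessarily the paper's exact conditions): the NN dynamics are overbounded by a convex combination of linear systems, and a common quadratic Lyapunov function certifies asymptotic stability.

```latex
% NN dynamics overbounded by a linear differential inclusion:
\dot{x}(t) = \sum_{i=1}^{r} h_i(t)\, A_i\, x(t), \qquad
h_i(t) \ge 0, \quad \sum_{i=1}^{r} h_i(t) = 1 .
% Lyapunov's direct method with V(x) = x^{T} P x,\; P = P^{T} \succ 0:
\dot{V}(x) = \sum_{i=1}^{r} h_i(t)\, x^{T}\!\left(A_i^{T} P + P A_i\right) x .
% Asymptotic stability is guaranteed if a common P exists with
A_i^{T} P + P A_i \prec 0 \quad \text{for all } i = 1,\dots,r .
```

Finding such a common matrix P is a linear matrix inequality problem, which is what makes this route to stability conditions computationally attractive.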
"Automated diagnosis of non-native English speaker's natural language"
Richard Fox, Mari Bowden
14th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2002). Pub Date: 2002-11-04. DOI: 10.1109/TAI.2002.1180818
Typical grammar checking software uses some form of natural language parsing to determine whether errors exist in the text. If a sentence is found ungrammatical, the grammar checker usually seeks a single grammatical error as an explanation. For non-native speakers of English, a given sentence may contain multiple errors, which grammar checkers may not adequately explain. This paper presents GRADES, a diagnostic program that detects and explains grammatical mistakes made by non-native English speakers. GRADES performs its diagnostic task not through parsing but through the application of classification and pattern matching rules, which makes the diagnostic process more efficient than that of other grammar checkers. GRADES is envisioned as a tool to help non-native English speakers learn to correct their English mistakes, but it also demonstrates that grammar checking need not rely on parsing techniques.
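A parse-free, pattern-matching checker in the spirit of GRADES can be sketched as a list of (pattern, explanation) rules, every one of which is reported when it fires, so multiple errors in one sentence are all diagnosed. The rules below are deliberately crude illustrations of the idea, not the actual GRADES rule base.

```python
import re

# Hypothetical diagnostic rules: each pairs a regular expression with an
# explanation of the suspected error. Real rules would be far richer.
RULES = [
    (re.compile(r"\b(he|she|it) (go|do|have|want|like)\b", re.I),
     "third-person singular subject needs an -s/-es verb form"),
    (re.compile(r"\ba ([aeiou]\w*)\b", re.I),
     "use 'an' before a word starting with a vowel (crude: checks spelling, not sound)"),
    (re.compile(r"\b(\w+) \1\b", re.I),
     "repeated word"),
]

def diagnose(sentence):
    """Return the explanation of every rule that fires; unlike a
    single-error parser, all detected mistakes are reported."""
    return [msg for pattern, msg in RULES if pattern.search(sentence)]
```

For example, "He go to school with a apple." triggers both the agreement rule and the article rule in one pass, with no parsing involved.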
"Actions with duration and constraints: the ramification problem in temporal databases"
N. Papadakis, D. Plexousakis
14th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2002). Pub Date: 2002-11-04. DOI: 10.1109/TAI.2002.1180791
The ramification problem is a hard and ever-present problem in systems exhibiting dynamic behavior. The area of temporal databases in particular still lacks satisfactory solutions to it. In this paper, we address the ramification problem using causal relationships that take time into account. We study the problem for both instantaneous actions and actions with duration. The proposed solution advances previous work by considering actions whose effects may occur in any of the possible future situations resulting from the action's execution.
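One way to picture time-aware causal relationships is as rules "if fluent P holds at time t, then effect E is caused at time t + delay", with delay 0 for instantaneous ramifications and delay > 0 for effects of actions with duration. The fluents, rules, and single-pass propagation below are entirely hypothetical (rules must be listed in causal order for the chaining here to work); they illustrate the flavor of the formalism, not the paper's solution.

```python
# Hypothetical time-aware causal rules: (precondition, effect, delay).
RULES = [
    ("valve_open", "tank_filling", 0),    # instantaneous ramification
    ("tank_filling", "tank_full", 3),     # delayed effect (action with duration)
]

def propagate(initial, horizon):
    """Compute which fluents hold at each time step: apply each causal
    rule at its trigger time, then carry fluents forward by inertia."""
    state = {t: set() for t in range(horizon + 1)}
    state[0] = set(initial)
    for t in range(horizon):
        for pre, eff, delay in RULES:     # rules assumed in causal order
            if pre in state[t] and t + delay <= horizon:
                state[t + delay].add(eff)
        state[t + 1] |= state[t]          # inertia: fluents persist
    return state
```

Starting from {valve_open}, the tank begins filling immediately, and the delayed effect tank_full appears three steps later in every future situation, which is the kind of ramification the paper's causal relationships are meant to capture.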
"Mining association rules in text databases using multipass with inverted hashing and pruning"
John D. Holt, S. M. Chung
14th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2002). Pub Date: 2002-11-04. DOI: 10.1109/TAI.2002.1180787
In this paper, we propose a new algorithm, multipass with inverted hashing and pruning (MIHP), for mining association rules between words in text databases. The characteristics of text databases are quite different from those of retail transaction databases, and existing mining algorithms cannot handle text databases efficiently because of the large number of itemsets (i.e., words) that must be counted. Two well-known mining algorithms, the Apriori algorithm and the direct hashing and pruning (DHP) algorithm, are evaluated in the context of mining text databases and compared with the proposed MIHP algorithm. MIHP is shown to perform better for large text databases.
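The abstract does not spell out MIHP's internals, but the hash-based pruning idea it builds on (from DHP) can be sketched: while counting single items, hash every candidate pair into a small bucket table; a pair can be frequent only if its bucket count reaches the support threshold, so most candidates are discarded before the exact counting pass. Function names and parameters below are illustrative.

```python
from collections import Counter
from itertools import combinations

def frequent_pairs_hashed(transactions, min_support, n_buckets=50):
    """DHP-style frequent-pair mining with a hash-bucket pruning pass."""
    item_counts = Counter()
    buckets = [0] * n_buckets
    for t in transactions:
        item_counts.update(t)
        for pair in combinations(sorted(t), 2):
            buckets[hash(pair) % n_buckets] += 1      # cheap aggregate count
    frequent_items = {i for i, c in item_counts.items() if c >= min_support}
    # A pair survives only if both items are frequent AND its bucket is above
    # threshold: no frequent pair is dismissed (bucket count >= pair count),
    # though hash collisions may let some infrequent pairs through.
    candidates = [p for p in combinations(sorted(frequent_items), 2)
                  if buckets[hash(p) % n_buckets] >= min_support]
    pair_counts = Counter()                           # exact second pass
    for t in transactions:
        ts = set(t)
        pair_counts.update(p for p in candidates if p[0] in ts and p[1] in ts)
    return {p: c for p, c in pair_counts.items() if c >= min_support}
```

For text mining, where the "items" are the many distinct words of a vocabulary, cutting the candidate set before exact counting is exactly the kind of saving that motivates hashing-and-pruning approaches.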
"Software quality classification modeling using the SPRINT decision tree algorithm"
T. Khoshgoftaar, Naeem Seliya
14th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2002). Pub Date: 2002-11-04. DOI: 10.1109/TAI.2002.1180826
Predicting the quality of system modules prior to software testing and operations benefits the software development team: such timely reliability estimation can direct cost-effective quality improvement efforts toward the high-risk modules. Tree-based software quality classification models based on software metrics predict whether a software module is fault-prone or not. They are white-box quality estimation models with good accuracy, and they are simple and easy to interpret. This paper presents an in-depth study of calibrating classification trees for software quality estimation using the SPRINT decision tree algorithm. Many classification algorithms have memory limitations, including the requirement that data sets be memory-resident. SPRINT removes these limitations and provides fast and scalable analysis. It extends a commonly used decision tree algorithm, CART, and provides a unique tree-pruning technique based on the minimum description length (MDL) principle. Combining the MDL pruning technique with the modified classification algorithm, SPRINT yields classification trees with useful prediction accuracy. The case study comprises software metrics and fault data collected over four releases of a very large telecommunications system. Classification trees built by SPRINT are observed to be more balanced and more stable than those built by CART.
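SPRINT, like CART, grows trees by choosing splits that minimize Gini impurity; a minimal sketch of evaluating one numeric attribute follows. This illustrates only the split criterion, not SPRINT's scalable attribute lists, its handling of non-memory-resident data, or the MDL pruning.

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(values, labels):
    """Scan the sorted values of one numeric attribute and return the
    (weighted impurity, threshold) pair minimizing the Gini criterion."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best = (float("inf"), None)
    for i in range(1, n):                  # candidate cut between i-1 and i
        left = [l for _, l in pairs[:i]]
        right = [l for _, l in pairs[i:]]
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        threshold = (pairs[i - 1][0] + pairs[i][0]) / 2
        best = min(best, (score, threshold))
    return best
```

In a fault-proneness setting, `values` would be a software metric (say, lines of code) and `labels` the fault-prone / not-fault-prone class of each module; the chosen threshold becomes the decision-tree test.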
"Protein secondary structure prediction with Bayesian learning method"
Peng Wang, Du Zhang
14th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2002). Pub Date: 2002-11-04. DOI: 10.1109/TAI.2002.1180812
This paper describes a Bayesian learning based approach to protein secondary structure prediction. Four secondary structure types are considered: α-helix, β-strand, β-turn, and coil. A six-letter exchange group is used to represent a protein sequence, and training cases are expressed as sequence quaternions. A tool called Predictor, implementing the proposed approach, is developed in Java. To evaluate the tool, we selected from the Protein Data Bank 623 known proteins without pairwise sequence homology, following the principle of one protein per family according to the SCOP structure classification. Several training/test data splits have been tried. The results show that our approach produces prediction accuracy comparable to that of traditional prediction methods. Predictor has a user-friendly, easy-to-use GUI and is of practical value to molecular biology researchers.
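The general shape of such an approach, mapping residues to six exchange groups and scoring structure classes with Bayes' rule over a sequence window, can be sketched with a naive Bayes model. The exchange-group assignment shown is one commonly used grouping and may differ from the paper's, the independence assumption and smoothing are my simplifications, and the toy data is invented.

```python
from collections import Counter, defaultdict

# One common six-group coarsening of the 20 amino acids (illustrative).
EXCHANGE = {aa: g for g, aas in enumerate(
    ["HRK", "DENQ", "C", "STPAG", "MILV", "FYW"]) for aa in aas}

def train(windows, labels):
    """Count class priors and per-position exchange-group frequencies."""
    prior = Counter(labels)
    cond = defaultdict(Counter)
    for w, y in zip(windows, labels):
        for i, aa in enumerate(w):
            cond[y][(i, EXCHANGE[aa])] += 1
    return prior, cond

def predict(w, prior, cond, n_groups=6):
    """Pick the class maximizing P(class) * prod_i P(group_i | class),
    with add-one smoothing over the six exchange groups."""
    def score(y):
        p = prior[y] / sum(prior.values())
        for i, aa in enumerate(w):
            p *= (cond[y][(i, EXCHANGE[aa])] + 1) / (prior[y] + n_groups)
        return p
    return max(prior, key=score)
```

Because the exchange groups collapse chemically similar residues, windows like "HRK" and "KRH" present identical evidence to the model, which is precisely the generalization the six-letter alphabet is meant to buy.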