This paper presents a web-robot learning system powered by Bluetooth communication. The web-robot system serves as a virtual robot laboratory integrating a number of engineering disciplines. This virtual laboratory is a valuable teaching tool for engineering education, accessible at any time and from any location through the Internet. The mobile robot is controlled by a robot server, named the control center, which connects to the robot via a Bluetooth adapter. The mobile robot system focuses on vision sensing, and real-time image processing techniques are realized by the web-robot system. The system also supports monitoring, tele-control, parameter adjustment and reprogramming over the Internet using only a standard Web browser, without any additional software.
Ş. Sağiroğlu, N. Yilmaz, M. Wani, "Web Robot Learning Powered by Bluetooth Communication System," 2006 5th International Conference on Machine Learning and Applications (ICMLA'06), Dec. 2006. doi:10.1109/ICMLA.2006.53
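The control-center architecture described above (browser to robot server to Bluetooth link to mobile robot) is not given in code in the abstract. The following is a minimal, hypothetical sketch of server-side command dispatch; the command names and the `send_to_robot` callback are illustrative assumptions, not the paper's protocol, and the Bluetooth transport is abstracted as a plain callable:

```python
# Hypothetical command dispatch for a web-robot control center.
# The real system forwards commands over a Bluetooth serial link;
# here the transport is abstracted as a callable for illustration.

VALID_COMMANDS = {"FORWARD", "BACKWARD", "LEFT", "RIGHT", "STOP"}

def dispatch(command_line, send_to_robot):
    """Parse one 'COMMAND [arg]' line from the web client and forward it."""
    parts = command_line.strip().upper().split()
    if not parts or parts[0] not in VALID_COMMANDS:
        return "ERR unknown command"
    cmd = parts[0]
    arg = parts[1] if len(parts) > 1 else "0"
    if not arg.isdigit():                     # e.g. a speed or distance argument
        return "ERR bad argument"
    send_to_robot(f"{cmd} {arg}\n".encode())  # would be an RFCOMM write in practice
    return "OK"

sent = []
print(dispatch("forward 30", sent.append))    # OK
print(dispatch("fly 10", sent.append))        # ERR unknown command
```

Validating on the server side, as here, keeps malformed browser input from ever reaching the robot over the Bluetooth link.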
R. Venkatesh, C. Rowland, Hongjin Huang, Olivia T. Abar, J. Sninsky
The iterative technique proposed in this paper provides an effective way to select a robust model in wide-data settings, such as genomics and gene expression studies, where the number of markers greatly exceeds the number of samples. The technique is particularly useful when an independent test set is not available and cross-validation serves as the validation step. It removes many of the ambiguities surrounding final model selection, giving a computationally simple and transparent way to choose a robust model. Robust model selection is accomplished mainly by directly exploiting the fold frequencies of markers selected in repeated cross-validation experiments. The technique is not method-specific, in terms of either feature selection or classification, and can therefore be combined with different feature selection and classification methods. Its usefulness extends even to situations where an independent test set is available: it allows one to squeeze extra performance out of the feature selection procedure and to increase the odds of replication in an independent test set. Frequently only one test set is available, and in this case the technique can help avoid repeated use of that test set. Techniques such as the one described in this study can be of great practical value in developing biomedical genomics applications, e.g., molecular diagnostic tests. The technique was successfully applied to a complex real-world data set, and significant improvements were demonstrated in the compactness, accuracy and generalizability of the model.
R. Venkatesh, C. Rowland, Hongjin Huang, Olivia T. Abar, J. Sninsky, "Robust Model Selection Using Cross Validation: A Simple Iterative Technique for Developing Robust Gene Signatures in Biomedical Genomics Applications," 2006 5th International Conference on Machine Learning and Applications (ICMLA'06), Dec. 2006. doi:10.1109/ICMLA.2006.45
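The fold-frequency idea described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the correlation-based marker ranking, the top-5 cut per fold, and the 80% frequency threshold are all assumptions chosen for the demonstration.

```python
import numpy as np

def fold_frequency_selection(X, y, n_repeats=10, n_folds=5,
                             top_m=5, freq_threshold=0.8, seed=0):
    """Repeat k-fold CV; in each training fold rank markers by |correlation|
    with the labels, keep the top_m, and count how often each marker is kept.
    Markers kept in at least freq_threshold of all folds form the robust set."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    total_folds = 0
    for _ in range(n_repeats):
        order = rng.permutation(n)
        for f in range(n_folds):
            test_idx = order[f::n_folds]
            train = np.setdiff1d(order, test_idx)
            Xt, yt = X[train], y[train]
            # rank markers by absolute correlation with the outcome
            corr = np.abs([np.corrcoef(Xt[:, j], yt)[0, 1] for j in range(p)])
            counts[np.argsort(corr)[::-1][:top_m]] += 1
            total_folds += 1
    return np.flatnonzero(counts / total_folds >= freq_threshold)

# synthetic "wide" data: 40 samples, 100 markers, markers 0-2 informative
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 40)
X = rng.normal(size=(40, 100))
X[:, :3] += 3.0 * y[:, None]          # strong signal in the first 3 markers
print(fold_frequency_selection(X, y))
```

Markers that surface in nearly every fold of every repeat are, by construction, the ones whose association with the outcome is stable under resampling, which is exactly the robustness the abstract targets.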
This paper addresses in silico prediction of protein structural classes as defined in the SCOP database. SCOP defines a total of 11 classes, while the majority of proteins fall into 4 of them: all-alpha, all-beta, alpha/beta, and alpha+beta. The main goals of this paper are to experimentally evaluate the impact of predicted protein secondary structure content on structural class prediction and to develop a novel protein sequence representation. The experiments apply three protein sequence representations and four classifiers to prediction of both the 4 and the 11 structural classes, using a large dataset of low-homology (twilight-zone) sequences. The proposed sequence representation includes the predicted structural content, which provides the strongest contribution towards classification, together with composition and composition moment vectors, hydrophobic autocorrelations, chemical group composition, and the molecular weight of the protein. The predicted content values are shown to improve prediction accuracy on average by 3.3% and 4.2% for the 4 and 11 classes, respectively, compared to a sequence representation that does not use this information. Finally, we propose a very compact, 20-dimensional sequence representation that is shown to improve prediction accuracy by 5.1-8.5% compared with recently published results.
Lukasz Kurgan, M. Rahbari, L. Homaeian, "Impact of the Predicted Protein Structural Content on Prediction of Structural Classes for the Twilight Zone Proteins," 2006 5th International Conference on Machine Learning and Applications (ICMLA'06), Dec. 2006. doi:10.1109/ICMLA.2006.27
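The abstract does not fully specify the 20-dimensional representation, but amino-acid composition is the classic fixed-length, 20-dimensional encoding among the "composition vectors" it mentions. A minimal sketch of that encoding, purely for illustration:

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def composition_vector(seq):
    """Fraction of each of the 20 amino acids in a protein sequence --
    a classic fixed-length (20-D) representation of a variable-length chain."""
    seq = seq.upper()
    counts = Counter(c for c in seq if c in AMINO_ACIDS)
    total = sum(counts.values())
    if total == 0:
        return [0.0] * 20
    return [counts[a] / total for a in AMINO_ACIDS]

v = composition_vector("MKVLAAGG")
print(round(sum(v), 6))   # components sum to 1.0
```

Because every protein maps to the same 20 dimensions regardless of its length, vectors like this can be fed directly to the standard classifiers the paper evaluates.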
This paper presents an application where machine learning techniques are used to mine data gathered from online poker in order to explain what characterizes successful play. The study focuses on short-handed, small-stakes Texas Hold'em, and the data set used contains 105 human players, each having played more than 500 hands. The techniques used are decision trees and G-REX, a rule extractor based on genetic programming. The overall result is that the induced rules are rather compact and highly accurate, thus providing good explanations of successful play. It is, of course, quite hard to assess the quality of the rules, i.e., whether they provide something novel and non-trivial. The main picture, however, is that the obtained rules are consistent with established poker theory. With this in mind, we believe that the suggested techniques will, in future studies where substantially more data is available, produce clear and accurate descriptions of what constitutes the difference between winning and losing in poker.
U. Johansson, Cecilia Sönströd, L. Niklasson, "Explaining Winning Poker--A Data Mining Approach," 2006 5th International Conference on Machine Learning and Applications (ICMLA'06), Dec. 2006. doi:10.1109/ICMLA.2006.23
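Rule induction of the kind described above can be illustrated with the simplest possible tree: a decision stump that turns one player statistic into a compact IF-THEN rule. The statistic name (`aggression`) and the data are invented for this sketch; the actual study used full decision trees and G-REX:

```python
def best_stump(values, labels):
    """Find the single threshold on one numeric player statistic that best
    separates winning (1) from losing (0) players, by classification
    accuracy -- the smallest 'compact rule' a tree learner could induce."""
    pairs = sorted(zip(values, labels))
    best = (0.0, None, None)           # (accuracy, threshold, predict_above)
    # candidate thresholds: midpoints between consecutive sorted values
    candidates = [(a + b) / 2 for (a, _), (b, _) in zip(pairs, pairs[1:])]
    for t in candidates:
        for predict_above in (0, 1):
            correct = sum(
                1 for v, lab in pairs
                if (predict_above if v > t else 1 - predict_above) == lab
            )
            acc = correct / len(pairs)
            if acc > best[0]:
                best = (acc, t, predict_above)
    return best

# invented player statistics: higher 'aggression' here correlates with winning
aggression = [0.1, 0.2, 0.25, 0.6, 0.7, 0.8]
winner     = [0,   0,   0,    1,   1,   1]
acc, thr, above = best_stump(aggression, winner)
print(f"IF aggression > {thr:.3f} THEN winner  (accuracy {acc:.0%})")
```

A full tree learner recursively applies splits like this; the abstract's finding is essentially that a handful of such thresholds already explain winning play with high accuracy.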
Most previous work in face recognition focused on how to represent appearance instances. Little attention, however, was given to the problem of how to select "good" instances for a gallery, which may be called the facial identity representation problem. This paper evaluates the identity representability of facial expressions. The identity representability of an expression is measured by the recognition accuracy achieved when its samples are used as the gallery data. We use feature pixel distributions to represent appearance instances. The feature pixel distribution of an image is based on the number of occurrences of detected feature pixels (corners) in regular grids over the image plane. We also propose imbalance-oriented redundancy reduction for feature pixel detection. Our experimental evaluation indicates that certain facial expressions, such as the neutral expression, have stronger identity representability than others across various feature pixel distributions.
Qi Li, C. Kambhamettu, "Identity Representability of Facial Expressions: An Evaluation Using Feature Pixel Distributions," 2006 5th International Conference on Machine Learning and Applications (ICMLA'06), Dec. 2006. doi:10.1109/ICMLA.2006.26
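The feature pixel distribution described above, counts of detected corner pixels falling in regular grid cells of the image plane, can be sketched as follows. Corner detection itself is omitted: the corner coordinates are assumed to be given, and the grid size is an illustrative choice.

```python
def feature_pixel_distribution(corners, height, width, grid=4):
    """Count detected feature pixels (corners) in each cell of a grid x grid
    partition of the image plane, then normalize to a distribution."""
    counts = [[0] * grid for _ in range(grid)]
    for (y, x) in corners:
        r = min(y * grid // height, grid - 1)   # clamp boundary pixels
        c = min(x * grid // width, grid - 1)
        counts[r][c] += 1
    total = max(1, len(corners))
    return [counts[r][c] / total for r in range(grid) for c in range(grid)]

# four corners in a 100x100 image, one per quadrant of a 2x2 grid
d = feature_pixel_distribution([(10, 10), (10, 90), (90, 10), (90, 90)],
                               100, 100, grid=2)
print(d)   # [0.25, 0.25, 0.25, 0.25]
```

The resulting fixed-length vector makes images of different faces directly comparable, which is what lets gallery instances be ranked by the recognition accuracy they yield.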