Simulating resource-bounded intelligence for wireless sensor networks
S. Shen, Gregory M. P. O'Hare, M. O'Grady, Guangqing Wang. Int. J. Knowl. Based Intell. Eng. Syst., April 2014. doi:10.3233/KES-140290
Many embedded devices are characterized by their resource-boundedness. Wireless Sensor Networks (WSNs) are a topical case in point, with energy being the dominant constraint. The intelligent utilization of energy in sensor nodes is of crucial importance, as well as being a formidable software engineering challenge in its own right. Evaluating an arbitrary intelligence mechanism is difficult, as the various environmental uncertainties involved make its effectiveness hard to assess. Within this paper, Sensorworld is harnessed as a platform for the evaluation and comparison of resource-bounded intelligence. A suite of simulations of effectiveness, utility, and energy consumption under varying dynamism and reasoning strategies is presented. These demonstrate that validating and comparing different reasoning strategies is a viable and attainable objective within computationally resource-constrained scenarios.
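A minimal sketch of the kind of head-to-head strategy comparison such a platform enables, assuming an invented energy model and two toy reasoning strategies (neither Sensorworld's actual API nor its cost model is reproduced here):

```python
import random

# Hypothetical energy costs (arbitrary units) for one duty cycle.
COST_SENSE, COST_TRANSMIT, COST_REASON = 1.0, 5.0, 0.5

def run_node(strategy, budget=1000.0, steps=2000):
    """Run one simulated node until its energy budget is exhausted;
    return (events reported, energy left)."""
    energy, reported = budget, 0
    for _ in range(steps):
        if energy <= 0:
            break
        reading = random.random()      # stand-in for a sensed value
        energy -= COST_SENSE
        energy -= COST_REASON          # fixed overhead of deciding what to do
        if strategy(reading):          # is transmitting worth the energy?
            energy -= COST_TRANSMIT
            reported += 1
    return reported, max(energy, 0.0)

naive = lambda r: True                 # transmit every reading
selective = lambda r: r > 0.8          # transmit only "interesting" readings

random.seed(0)
for name, s in [("naive", naive), ("selective", selective)]:
    print(name, run_node(s))
```

Running both strategies against the same budget makes the trade-off between reported utility and remaining energy directly comparable, which is the shape of experiment the abstract describes.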
{"title":"Simulating resource-bounded intelligence for wireless sensor networks","authors":"S. Shen, Gregory M. P. O'Hare, M. O'Grady, Guangqing Wang","doi":"10.3233/KES-140290","DOIUrl":"https://doi.org/10.3233/KES-140290","url":null,"abstract":"Many embedded devices are characterized by their resource-boundedness. Wireless Sensor Networks (WSNs) are a topical case in point, with energy being the dominant constraint. The issue of the intelligent utilization of energy in sensor nodes is of crucial importance as well as being a formidable software engineering challenge in its own right. Evaluation of an arbitrary intelligence mechanism is difficult as it involves various environmental uncertainties thereby making its effectiveness difficult to assess. Within this paper, Sensorworld is harnessed as a platform for the evaluation and comparison of resource-bounded intelligence. A suite of simulations on effectiveness, utility and energy consumption within the context of dynamism and reasoning strategy are presented. These demonstrate that the validation and comparison of different reasoning strategies is a viable and attainable objective within computationally resource-constrained scenarios.","PeriodicalId":210048,"journal":{"name":"Int. J. Knowl. Based Intell. Eng. Syst.","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122334996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Collaborative filtering (CF) is one of the most successful and effective recommendation techniques for personalized information access. It makes recommendations based on past transactions and feedback from users sharing similar interests. As commercial recommender systems widely adopt CF algorithms, these methods must be able to deal with data sparsity and to scale with growing numbers of users and items. The proposed approach addresses the problems of sparsity and scalability by first clustering users based on their rating patterns and then inferring neighborhoods from the clusters using two knowledge-based techniques applied individually: rule-based reasoning (RBR) and case-based reasoning (CBR). To further improve accuracy, an HRC procedure (hybridization of RBR and CBR) is employed to generate an optimal neighborhood for an active user. The three proposed neighborhood-generation procedures are then combined with CF to develop the RBR/CF, CBR/CF, and HRC/CF recommendation schemes. An empirical study reveals that RBR/CF and CBR/CF perform better than other state-of-the-art CF algorithms, while HRC/CF clearly outperforms the rest of the schemes.
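To make the pipeline concrete, here is a minimal sketch, under assumed data, of the two stages the abstract names: clustering users by rating pattern, then predicting from a cluster-restricted neighborhood. Plain cosine-weighted CF stands in for the RBR/CBR/HRC refinement, which is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

R = np.array([[5, 4, 0, 1],        # rows: users, cols: items, 0 = unrated
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5],
              [5, 4, 0, 0]], dtype=float)

# Stage 1: partition users by their rating patterns.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(R)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def predict(active, item):
    """Stage 2: similarity-weighted average over cluster members who rated the item."""
    peers = [u for u in range(len(R))
             if u != active and labels[u] == labels[active] and R[u, item] > 0]
    if not peers:
        return R[active][R[active] > 0].mean()   # fall back to the user's mean
    w = np.array([cosine(R[active], R[u]) for u in peers])
    r = np.array([R[u, item] for u in peers])
    return float(w @ r / w.sum())

print(predict(active=4, item=2))   # low estimate: user 4's cluster dislikes item 2
```

Restricting the neighborhood search to one cluster is what buys the scalability the abstract claims; the knowledge-based step then refines which cluster members count as neighbors.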
{"title":"A hybrid knowledge-based approach to collaborative filtering for improved recommendations","authors":"S. Tyagi, K. K. Bharadwaj","doi":"10.3233/KES-140292","DOIUrl":"https://doi.org/10.3233/KES-140292","url":null,"abstract":"Collaborative filtering (CF) is one of the most successful and effective recommendation techniques for personalized information access. This method makes recommendations based on past transactions and feedback from users sharing similar interests. However, many commercial recommender systems are widely adopting the CF algorithms; these methods are required to have the ability to deal with sparsity in data and to scale with the increasing number of users and items. The proposed approach addresses the problems of sparsity and scalability by first clustering users based on their rating patterns and then inferring clusters (neighborhoods) by applying two knowledge-based techniques: rule-based reasoning (RBR) and case-based reasoning (CBR) individually. Further to improve accuracy of the system, HRC (hybridization of RBR and CBR) procedure is employed to generate an optimal neighborhood for an active user. The proposed three neighborhood generation procedures are then combined with CF to develop RBR/CF, CBR/CF, and HBR/CF schemes for recommendations. An empirical study reveals that the RBR/CF and CBR/CF perform better than other state-of-the-art CF algorithms, whereas HRC/CF clearly outperforms the rest of the schemes.","PeriodicalId":210048,"journal":{"name":"Int. J. Knowl. Based Intell. Eng. Syst.","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124994626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feature-based face recognition algorithms are computationally efficient compared with model-based approaches, and have proved themselves for face identification under pose variation. However, the literature lacks a direct and detailed investigation of these algorithms under identical working conditions. This motivates an independent performance analysis of well-known feature-based face identification algorithms across different poses in a mug-shot face database setting. The analysis focuses on how the identification rates of feature-based algorithms vary with pose, and is carried out in a face identification scenario using large numbers of images from standard face databases: AT&T, the Georgian Face database, and the Head Pose Image database. We analyzed state-of-the-art feature-based algorithms, namely PCA, log Gabor, DCT, and FPLBP, and found that log Gabor performs best under larger pose variation, achieving an average identification rate of 82.47% with three training images on the Head Pose Image database.
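For concreteness, a sketch of the identification protocol for one of the compared methods, PCA (eigenfaces): train on a few images per subject, then identify probes by nearest neighbor in the projected space. Random arrays stand in for the face images; a real run would load the AT&T, Georgian Face, or Head Pose Image databases instead.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_subjects, per_subject, dim = 5, 3, 64 * 64       # 3 training images, as above
train = rng.random((n_subjects * per_subject, dim))  # stand-in for face images
train_ids = np.repeat(np.arange(n_subjects), per_subject)

pca = PCA(n_components=10).fit(train)              # learn the eigenface subspace
train_proj = pca.transform(train)

def identify(img):
    """Return the subject whose training image is closest in PCA space."""
    d = np.linalg.norm(train_proj - pca.transform(img[None]), axis=1)
    return train_ids[np.argmin(d)]

probe = train[4] + 0.01 * rng.random(dim)          # slightly perturbed known face
print(identify(probe))                             # expected: subject 1
```

The identification rate reported in the paper is simply the fraction of probes for which this kind of nearest-neighbor decision returns the correct subject.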
{"title":"Independent analysis of feature based face recognition algorithms under varying poses","authors":"Kavita R. Singh, M. Zaveri, M. Raghuwanshi","doi":"10.3233/KES-140286","DOIUrl":"https://doi.org/10.3233/KES-140286","url":null,"abstract":"Feature based face recognition algorithms are computationally efficient compared to model based approaches. These algorithms have proved themselves for face identification under variations in poses. However, the literature lacks with direct and detailed investigation of these algorithms in completely equal working conditions. This motivates us to carry out an independent performance analysis of well known feature based face identification algorithms for different poses with mug-shot face database situation. The analysis focuses on variations in performance of feature based algorithms in terms of identification rates due to variation in poses. The analysis is carried out in face identification scenario using large amount of images from the standard face databases such as AT&T, Georgian Face database and Head Pose Image database. We analysed state-of-the art feature based algorithms such as PCA, log Gabor, DCT and FPLBP and found that, log Gabor outperforms for larger degree of pose variation with an average identification rate 82.47% with three training images for Head Pose Image database.","PeriodicalId":210048,"journal":{"name":"Int. J. Knowl. Based Intell. Eng. Syst.","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130125101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feature selection plays an important role in data mining, machine learning, and pattern recognition, especially for large-scale, high-dimensional data. Many selection techniques have been proposed in past years. Their general purpose is to exploit some metric measuring the relevance or irrelevance of different features for a given task, and then select fewer features without deteriorating discriminative capability. No single technique, however, performs best on all kinds of data, because real data are characterized by incorrectness, incompleteness, inconsistency, and diversity. Based on this fact, this paper puts forward a new scheme for feature selection based on partition clustering; it acts as a special preprocessing procedure and is independent of the selection technique used. Experimental results on UCI data sets show that, in most cases, selection techniques using the proposed scheme perform better than the same techniques without it.
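The abstract does not spell the scheme out, so the sketch below is one plausible instantiation, assumed for illustration: partition the features themselves with k-means and keep the feature closest to each cluster center, leaving any downstream selector to run on the reduced set.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

# NOTE: illustrative assumption, not the paper's exact procedure.
X = load_iris().data                               # samples x features (UCI set)
cols = (X - X.mean(0)) / X.std(0)                  # standardize each column
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(cols.T)  # cluster features

selected = []
for c in range(km.n_clusters):
    members = np.where(km.labels_ == c)[0]
    d = np.linalg.norm(cols.T[members] - km.cluster_centers_[c], axis=1)
    selected.append(int(members[np.argmin(d)]))    # representative feature per cluster

print(sorted(selected))                            # indices of kept features
```

Because redundant features land in the same partition, keeping one representative per cluster removes redundancy before any selection metric is even applied, which is why the preprocessing is independent of the selector.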
{"title":"Feature selection based on partition clustering","authors":"Shuang Liu, Qiang Zhao, Xiang Wu","doi":"10.3233/KES-140293","DOIUrl":"https://doi.org/10.3233/KES-140293","url":null,"abstract":"Feature selection plays an important role in data mining, machine learning and pattern recognition, especially for large scale data with high dimensions. Many selection techniques have been proposed during past years. Their general purposes are to exploit certain metric to measure the relevance or irrelevance between different features of data for certain task, and then select fewer features without deteriorating discriminative capability. Each technique, however, has not absolutely better performance than others' for all kinds of data, due to the data characterized by incorrectness, incompleteness, inconsistency, and diversity. Based on this fact, this paper put forward to a new scheme based on partition clustering for feature selection, which is a special preprocessing procedure and independent of selection techniques. Experimental results carried out on UCI data sets show that the performance achieved by our proposed scheme is better than selection techniques without using this scheme in most cases.","PeriodicalId":210048,"journal":{"name":"Int. J. Knowl. Based Intell. Eng. Syst.","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128195512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we propose a new digital watermarking method based on fractal image coding using the discrete wavelet transform (DWT) and the human visual system (HVS). The method decomposes an original image into subbands LL1 and LL2 using the DWT. In fractal image coding, LL1 and LL2 serve as the range-block region and domain-block region, respectively. The scheme embeds the watermark robustly against JPEG compression because the watermark sequence is embedded into subband LL1 using fractal image coding. In addition, the quality of the watermarked image remains good despite embedding into LL1, because fractal processing is applied only to candidate range blocks selected by an HVS-based embedding judgment. Experimental results show that the proposed method improves both robustness against JPEG compression and the quality of the watermarked image compared with conventional techniques.
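A sketch of the subband setup this builds on, using PyWavelets: a two-level DWT exposes LL1 and LL2, and a plain additive embed into LL1 stands in for the fractal range/domain matching and the HVS-based block selection, which are not reproduced here.

```python
import numpy as np
import pywt

img = np.random.default_rng(0).random((128, 128))   # stand-in for an image

LL1, (LH1, HL1, HH1) = pywt.dwt2(img, 'haar')       # level 1: LL1 = range-block region
LL2, (LH2, HL2, HH2) = pywt.dwt2(LL1, 'haar')       # level 2: LL2 = domain-block region
                                                     # (LL2 unused in this toy embed)

wm = np.sign(np.random.default_rng(1).standard_normal(LL1.shape))
alpha = 0.02
LL1_marked = LL1 + alpha * wm                        # illustrative embed only

marked = pywt.idwt2((LL1_marked, (LH1, HL1, HH1)), 'haar')
print(np.abs(marked - img).max())                    # distortion stays small
```

Embedding in LL1 survives JPEG compression better than high-frequency subbands because quantization attacks the detail coefficients first; the HVS judgment in the paper is what keeps that low-frequency embed invisible.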
{"title":"Digital watermarking based on fractal image coding using DWT and HVS","authors":"S. Ohga, R. Hamabe","doi":"10.3233/KES-140288","DOIUrl":"https://doi.org/10.3233/KES-140288","url":null,"abstract":"In this paper, we propose a new digital watermarking method based on fractal image coding using DWT and HVS. The method decomposes an original image into subbands LL1 and LL2 using DWT. In fractal image coding, LL1 and LL2 are used for the range blocks region and domain blocks region, respectively. This scheme can embed the watermark robustly against JPEG compression because it embeds the watermark sequence into the subband LL1 using fractal image coding. In addition, the quality of the watermarked image is good in spite of embedding into LL1, because we handle the fractal processing to only the candidate range blocks selected by an embedding judgment processing based on HVS. Experimental results show that the proposed method not only improves robustness against JPEG compression but also improves the quality of the watermarked image when compared with conventional techniques.","PeriodicalId":210048,"journal":{"name":"Int. J. Knowl. Based Intell. Eng. Syst.","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125060494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Markov Logic Networks (MLNs) are a unified framework integrating first-order logic and probabilistic inference. Most existing MLN learning methods are supervised approaches requiring a large number of training examples, which entails substantial human effort to prepare. To reduce this effort, we have developed a semi-supervised framework for learning an MLN, in particular its structure, from a set of unlabeled data and a limited number of labeled training examples. To achieve this, we maximize the expected pseudo-log-likelihood of the observations in the unlabeled data, instead of the pseudo-log-likelihood of the labeled training examples commonly used in supervised MLN learning. Experiments on two different datasets demonstrate that our framework is effective, outperforming an existing approach that considers labeled training examples alone.
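In symbols, with notation assumed here since the abstract does not give the exact formulation, the objective change is from the standard pseudo-log-likelihood to its expectation over the hidden truth values of the unlabeled data:

```latex
% Standard (supervised) pseudo-log-likelihood over ground atoms X_1,...,X_n,
% each conditioned on the state of its Markov blanket MB_x(X_l):
\log P^{*}_{w}(X = x) = \sum_{l=1}^{n} \log P_{w}\bigl(X_l = x_l \mid MB_x(X_l)\bigr)

% Semi-supervised variant sketched from the abstract (notation assumed): the
% hidden ground-atom values Y of the unlabeled data are averaged out under the
% current weights w, and this expectation is maximized instead:
\mathcal{L}(w) = \mathbb{E}_{Y \sim P_{w}}\!\left[ \sum_{l=1}^{n}
  \log P_{w}\bigl(X_l = Y_l \mid MB_Y(X_l)\bigr) \right]
```

Conditioning each atom on its Markov blanket is what keeps both objectives tractable; the expectation merely replaces observed truth values with model-inferred ones.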
{"title":"Learning Markov logic networks with limited number of labeled training examples","authors":"Tak-Lam Wong","doi":"10.3233/KES-140289","DOIUrl":"https://doi.org/10.3233/KES-140289","url":null,"abstract":"Markov Logic Networks (MLN) is a unified framework integrating first-order logic and probabilistic inference. Most existing methods of MLN learning are supervised approaches requiring a large amount of training examples, leading to a substantial amount of human effort for preparing these training examples. To reduce such human effort, we have developed a semi-supervised framework for learning an MLN, in particular structure learning of MLN, from a set of unlabeled data and a limited number of labeled training examples. To achieve this, we aim at maximizing the expected pseudo-log-likelihood function of the observation from the set of unlabeled data, instead of maximizing the pseudo-log-likelihood function of the labeled training examples, which is commonly used in supervised learning of MLN. To evaluate our proposed method, we have conducted experiments on two different datasets and the empirical results demonstrate that our framework is effective, outperforming existing approach which considers labeled training examples alone.","PeriodicalId":210048,"journal":{"name":"Int. J. Knowl. Based Intell. Eng. Syst.","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122420895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brain-computer interfaces (BCIs) allow direct control of external devices using thoughts, i.e., the brain's electrical activity. Among the several BCI paradigms, the steady-state visual evoked potential (SSVEP) is the most commonly used owing to its quick response and accuracy. SSVEP stimuli are typically generated by varying the luminance of a target for a set number of frames or display events. Conventionally, SSVEP-based BCI paradigms use magnitude (amplitude) information from the frequency domain, but recently they have begun to utilize phase information to discriminate between targets of similar frequency. This paper demonstrates that using a single frame to modulate a stimulus may lead to a bi-modal distribution of the SSVEP, as a consequence of the user attending to both transition edges. This incoherence, of lesser importance in traditional magnitude-domain SSVEP BCIs, becomes critical when phase is taken into account. An alternative modulation technique incorporating a 50% duty cycle, also a popular method for generating SSVEP stimuli, produces a unimodal distribution because the user's attention is forced onto a single transition edge. This paper demonstrates that the second method yields a significantly higher information transfer rate in a phase-discrimination SSVEP-based BCI.
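A short sketch of the two modulation schemes being contrasted, for an assumed 60 Hz display and a 10 Hz target: a single "on" frame per cycle versus a 50% duty cycle square wave, with the stimulus spectrum at the target frequency read off an FFT.

```python
import numpy as np

refresh, target, seconds = 60, 10, 2                # assumed display and target
frames = refresh * seconds
t = np.arange(frames)
frames_per_cycle = refresh // target                # 6 frames per stimulus cycle

pulse = (t % frames_per_cycle == 0).astype(float)   # single "on" frame per cycle
square = (t % frames_per_cycle < frames_per_cycle // 2).astype(float)  # 50% duty

for name, s in [("single-frame", pulse), ("50% duty", square)]:
    spec = np.fft.rfft(s - s.mean())                # stimulus spectrum
    freqs = np.fft.rfftfreq(frames, d=1 / refresh)
    k = np.argmin(np.abs(freqs - target))
    print(f"{name}: |X({target} Hz)| = {np.abs(spec[k]):.2f}, "
          f"phase = {np.angle(spec[k]):.2f} rad")
```

With the 50% duty cycle, the two luminance transitions sit half a cycle apart, so a user locked to either edge reports a consistent phase; the single-frame pulse places both edges within one frame, which is the source of the bi-modal response described above.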
{"title":"On the stimulus duty cycle in steady state visual evoked potential","authors":"John J. Wilson, R. Palaniappan","doi":"10.3233/KES-140287","DOIUrl":"https://doi.org/10.3233/KES-140287","url":null,"abstract":"Brain-computer interfaces (BCI) are useful devices that allow direct control of external devices using thoughts, i.e. brain's electrical activity. There are several BCI paradigms, of which steady state visual evoked potential (SSVEP) is the most commonly used due to its quick response and accuracy. SSVEP stimuli are typically generated by varying the luminance of a target for a set number of frames or display events. Conventionally, SSVEP based BCI paradigms use magnitude (amplitude) information from frequency domain but recently, SSVEP based BCI paradigms have begun to utilize phase information to discriminate between similar frequency targets. This paper will demonstrate that using a single frame to modulate a stimulus may lead to a bi-modal distribution of SSVEP as a consequence of a user attending both transition edges. This incoherence, while of less importance in traditional magnitude domain SSVEP BCIs becomes critical when phase is taken into account. An alternative modulation technique incorporating a 50% duty cycle is also a popular method for generating SSVEP stimuli but has a unimodal distribution due to user's forced attention to a single transition edge. This paper demonstrates that utilizing the second method results in significantly enhanced performance in information transfer rate in a phase discrimination SSVEP based BCI.","PeriodicalId":210048,"journal":{"name":"Int. J. Knowl. Based Intell. Eng. Syst.","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127158929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This article describes the influence of risk management on financial markets. Institutional investors, such as pension funds, are legally required to follow a duty of care. Implementing adequate risk management is regarded as a central part of institutional investors' legal responsibilities and is considered effective in limiting investors' losses. The most prevalent risk management method is Value at Risk (VaR), which is the focal point of my analysis. Through intensive agent-based modeling and experimentation, I have concluded that: (1) market prices can deviate from fundamental values when risk management criteria are too strict; (2) the larger the disparity in investors' estimations of stock prices, the larger the tendency to deviate from fundamental values; and (3) the same tendency is observed under market conditions where heterogeneous investors trade. These results suggest that risk management required by law as a duty of care could contribute to market inefficiencies, a finding significant from both practical and academic points of view. Furthermore, this paper demonstrates the efficacy of agent-based modeling in analyzing the impact of regulations and laws on financial markets under realistic market conditions.
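For reference, the standard definition of VaR that such risk management criteria build on (the abstract does not specify the agents' exact constraint, so the textbook form is shown):

```latex
% Value at Risk at confidence level \alpha for a portfolio loss L over a
% fixed horizon:
\mathrm{VaR}_{\alpha}(L) = \inf\{\, \ell \in \mathbb{R} : P(L > \ell) \le 1 - \alpha \,\}

% Under normally distributed returns with mean \mu and volatility \sigma,
% this reduces to the parametric form common in practice:
\mathrm{VaR}_{\alpha} = \left(z_{\alpha}\,\sigma - \mu\right) W_0 ,
\qquad z_{\alpha} = \Phi^{-1}(\alpha), \quad W_0 = \text{portfolio value}
```

A "strict" criterion in the paper's sense corresponds to a small loss limit relative to this quantity: when VaR exceeds the limit, agents are forced to sell, and it is this forced selling that can push prices away from fundamentals.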
{"title":"Analyzing the influence of Value at Risk on financial markets through agent-based modeling","authors":"Hiroshi Takahashi","doi":"10.3233/KES-130276","DOIUrl":"https://doi.org/10.3233/KES-130276","url":null,"abstract":"This article describes the influence of risk management on financial markets. Institutional investors, such as pension funds, are legally required to follow a duty of care. Implementing adequate risk management is regarded as a central part of institutional investors' legal responsibilities and considered to be effective in terms of limiting investors' losses. The most prevalent risk management method is known as Value at Risk VaR and this is the focal point of my analysis. As a result of intensive agent-based modeling and experimentation, I have concluded that: 1 market prices could deviate from fundamental values when risk management criteria are too strict; 2 the larger the disparity of investors' estimations of stock prices becomes, the larger the tendency of deviation from fundamental values; 3 the same tendency can be observed under market conditions where heterogeneous investors trade. These results suggest that risk management which is required by law as a duty of care could contribute to market inefficiencies. If so, this is significant from both practical and academic points of view. Furthermore, I believe this paper proves the efficacy of agent-based modeling in analyzing the impact of certain regulations and laws on financial markets under realistic market conditions.","PeriodicalId":210048,"journal":{"name":"Int. J. Knowl. Based Intell. Eng. Syst.","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130005373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Case-Based Reasoning (CBR) is a powerful tool for decision making, as it approximates the natural human thinking process by reusing past experiences to solve new problems. A CBR system is a combination of processes and knowledge sources called "knowledge containers", and its reasoning power can be improved through the use of domain knowledge. CBR systems that combine case-specific knowledge with general domain knowledge models are called Knowledge-Intensive CBR (KI-CBR). Although CBR claims to substantially reduce the effort required for developing knowledge-based systems compared with more traditional Artificial Intelligence approaches, implementing a CBR application from scratch is still a time-consuming task. The present work develops a CBR application for fault diagnosis of steam turbines that integrates domain knowledge modeled as an ontology and focuses on the similarity-based retrieval step. The system is a KI-CBR system based on a domain ontology, built around jCOLIBRI and myCBR, two well-known frameworks for designing KI-CBR systems. During the prototyping process, the use and functionality of the two frameworks are examined. A comparative study presents the advantages of using ontologies with CBR systems and demonstrates that jCOLIBRI is well suited to designing KI-CBR systems.
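A framework-free sketch of the similarity-based retrieval step in question; the attributes, weights, and cases are invented for illustration, and neither the jCOLIBRI nor the myCBR API is used.

```python
# Case base: past turbine faults described by normalized sensor attributes.
CASES = [
    {"vibration": 0.9, "temp": 0.7, "pressure": 0.2, "fault": "rotor imbalance"},
    {"vibration": 0.2, "temp": 0.9, "pressure": 0.8, "fault": "steam leak"},
    {"vibration": 0.8, "temp": 0.3, "pressure": 0.3, "fault": "bearing wear"},
]
WEIGHTS = {"vibration": 0.5, "temp": 0.3, "pressure": 0.2}

def similarity(query, case):
    """Weighted global similarity built from per-attribute local similarities."""
    return sum(w * (1.0 - abs(query[a] - case[a])) for a, w in WEIGHTS.items())

def retrieve(query, k=1):
    """The 'retrieve' phase of CBR: return the k most similar stored cases."""
    return sorted(CASES, key=lambda c: similarity(query, c), reverse=True)[:k]

best = retrieve({"vibration": 0.85, "temp": 0.6, "pressure": 0.25})[0]
print(best["fault"])    # the reuse step would then adapt this diagnosis
```

What the ontology adds in a KI-CBR setting is a smarter local similarity: two symbolically different attribute values (say, two fault locations) can be scored as close because they are neighbors in the concept hierarchy, rather than compared numerically as here.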
{"title":"A fault diagnosis application based on a combination case-based reasoning and ontology approach","authors":"N. Dendani, Tarek Khadir","doi":"10.3233/KES-130280","DOIUrl":"https://doi.org/10.3233/KES-130280","url":null,"abstract":"Case-Based Reasoning CBR is a powerful tool for decision making as it approaches human natural thinking process, based on the reuse of past experiences in solving new problems. A CBR system is a combination of processes and knowledge called \"knowledge containers\", its reasoning power can be improved through the use of domain knowledge. CBR systems combining case specific knowledge with general domain knowledge models are called Knowledge Intensive CBR KI-CBR. Although CBR claims to reduce the effort required for developing knowledge-based systems substantially when compared with more traditional Artificial Intelligence approaches, the implementation of a CBR application from scratch is still a time consuming task. The present work aims to develop a CBR application for fault diagnosis of steam turbines that integrates a domain knowledge modeling in an ontological form and focuses on the similarity-based retrieval step. This system is viewed as a KI-CBR system based on domain ontology, built around jCOLIBRI and myCBR, two well-known frameworks to design KI-CBR systems. During the prototyping process, the use and functionality of the two focused frameworks are examined. A comparative study is performed with results presenting advantages provided by the use of ontologies with CBR systems and demonstrating that jCOLIBRI is well adapted to design KI-CBR system.","PeriodicalId":210048,"journal":{"name":"Int. J. Knowl. Based Intell. Eng. Syst.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122413843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new smoothing approach for the implicit Lagrangian twin support vector regression is proposed in this paper. Our formulation leads to solving a pair of unconstrained quadratic programming problems of smaller size than in classical support vector regression, and their solutions are obtained using a Newton-Armijo algorithm. The approach has the advantage that only a system of linear equations is solved in each iteration of the algorithm. Numerical experiments on several synthetic and real-world datasets are performed, and the results and training times are compared with both support vector regression and twin support vector regression to verify the effectiveness of the proposed method.
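A generic sketch of the Newton-Armijo iteration the method relies on, applied to a toy strongly convex objective standing in for the smoothed twin-SVR problem; the smooth plus-function approximation shown is the standard one from the smoothing literature, not necessarily the paper's exact choice.

```python
import numpy as np

def smooth_plus(x, a=5.0):
    """Standard smooth approximation of the plus function (x)_+,
    p(x, a) = x + log(1 + exp(-a x)) / a. Shown for reference only;
    the toy objective below is already smooth."""
    return x + np.log1p(np.exp(-a * x)) / a

def newton_armijo(f, grad, hess, w, tol=1e-8, max_iter=50):
    """Newton direction plus Armijo backtracking on the step length."""
    for _ in range(max_iter):
        g = grad(w)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess(w), -g)    # one linear system per iteration
        step = 1.0
        while f(w + step * d) > f(w) + 1e-4 * step * (g @ d):
            step *= 0.5                     # Armijo sufficient-decrease test
        w = w + step * d
    return w

# Toy strongly convex quadratic standing in for the smoothed twin-SVR objective.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda w: 0.5 * w @ A @ w - b @ w
grad = lambda w: A @ w - b
hess = lambda w: A
print(newton_armijo(f, grad, hess, np.zeros(2)))   # converges to A^{-1} b = [0.6, -0.8]
```

The advantage claimed in the abstract is visible in the loop body: each iteration costs one linear solve, with the Armijo line search guaranteeing global convergence of the Newton steps.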
{"title":"Smooth Newton method for implicit Lagrangian twin support vector regression","authors":"S. Balasundaram, M. Tanveer","doi":"10.3233/KES-130277","DOIUrl":"https://doi.org/10.3233/KES-130277","url":null,"abstract":"A new smoothing approach for the implicit Lagrangian twin support vector regression is proposed in this paper. Our formulation leads to solving a pair of unconstrained quadratic programming problems of smaller size than in the classical support vector regression and their solutions are obtained using Newton-Armijo algorithm. This approach has the advantage that a system of linear equations is solved in each iteration of the algorithm. Numerical experiments on several synthetic and real-world datasets are performed and, their results and training time are compared with both the support vector regression and twin support vector regression to verify the effectiveness of the proposed method.","PeriodicalId":210048,"journal":{"name":"Int. J. Knowl. Based Intell. Eng. Syst.","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129608533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}