Title: 基于极限学习机的多任务学习研究 (Research of Multi-Task Learning Based on Extreme Learning Machine)
Authors: Wentao Mao, Jiucheng Xu, Shengjie Zhao, Mei Tian
Journal: International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 19, no. 1, pp. 75-85
DOI: 10.1142/S0218488513400175
Publication date: 2013-10-31
RESEARCH OF MULTI-TASK LEARNING BASED ON EXTREME LEARNING MACHINE
Recently, extreme learning machines (ELMs) have emerged as a promising tool for a wide range of regression and classification applications. However, when modeling multiple related tasks where only limited, low-dimensional training data are available per task, ELMs generally struggle to achieve strong performance because they cannot exploit the informative domain knowledge shared across tasks. To address this problem, this paper extends ELM to the multi-task learning (MTL) setting. First, based on the assumption that the model parameters of related tasks are close to one another, a new regularization-based MTL algorithm for ELM is proposed that learns related tasks jointly via simple matrix inversion. To further improve learning performance, this algorithm is then reformulated as a mixed-integer program in order to identify the grouping structure within which parameters are closer than others, and an alternating minimization method is presented to solve the resulting optimization. Experiments conducted on a toy problem as well as a real-life data set demonstrate the effectiveness of the proposed MTL algorithm compared to the classical ELM and a standard MTL algorithm.
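The regularization-based MTL-ELM idea described above can be illustrated with a short sketch. This is not the authors' implementation; it is a minimal reconstruction assuming a standard ELM (random sigmoid hidden layer, ridge-style closed-form output weights) plus a parameter-closeness regularizer that pulls each task's output weights toward their shared mean, solved by alternating minimization. All function names and hyperparameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_hidden(X, W, b):
    # Random-feature hidden layer with sigmoid activation (standard ELM).
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def mtl_elm(Xs, ys, n_hidden=40, lam=0.1, n_iter=20):
    """Jointly fit one ELM output-weight vector per task.

    Sketch of the assumed objective:
        sum_t ||H_t beta_t - y_t||^2 + lam * sum_t ||beta_t - mean||^2
    Alternating minimization: solve each beta_t in closed form given
    the shared mean (simple matrix inversion), then update the mean.
    """
    d = Xs[0].shape[1]
    # Hidden-layer weights are drawn randomly and never trained, as in ELM.
    W = rng.standard_normal((d, n_hidden))
    b = rng.standard_normal(n_hidden)
    Hs = [elm_hidden(X, W, b) for X in Xs]
    betas = [np.zeros(n_hidden) for _ in Xs]
    mean = np.zeros(n_hidden)
    I = np.eye(n_hidden)
    for _ in range(n_iter):
        # Given the mean, each task has a ridge-regression closed form:
        # (H^T H + lam I) beta_t = H^T y_t + lam * mean
        for t, (H, y) in enumerate(zip(Hs, ys)):
            betas[t] = np.linalg.solve(H.T @ H + lam * I, H.T @ y + lam * mean)
        # Given the betas, the optimal shared mean is their average.
        mean = np.mean(betas, axis=0)
    return W, b, betas
```

The closed-form per-task solve is what the abstract refers to as "simple matrix inversion"; the grouped variant would replace the single shared mean with one mean per group, with group assignments chosen by the mixed-integer step.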
Journal introduction:
The International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems is a forum for research on various methodologies for the management of imprecise, vague, uncertain or incomplete information. The aim of the journal is to promote theoretical or methodological works dealing with all kinds of methods to represent and manipulate imperfectly described pieces of knowledge, excluding results on pure mathematics or simple applications of existing theoretical results. It is published bimonthly, with worldwide distribution to researchers, engineers, decision-makers, and educators.