{"title":"Improving Multilabel Text Classification with Stacking and Recurrent Neural Networks","authors":"R. M. Nunes, M. A. Domingues, V. D. Feltrim","doi":"10.1145/3539637.3557000","DOIUrl":null,"url":null,"abstract":"Multilabel text classification can be defined as a mapping function that categorizes a text in natural language into one or more labels defined by the scope of a problem. In this work we propose an architecture of stacked classifiers for multilabel text classification. The proposed models use an LSTM recurrent neural network in the first stage of the stack and different multilabel classifiers in the second stage. We evaluated our proposal in two datasets well-known in the literature (TMDB and EUR-LEX Subject Matters), and the results showed that the proposed stack consistently outperforms the baselines.","PeriodicalId":350776,"journal":{"name":"Proceedings of the Brazilian Symposium on Multimedia and the Web","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Brazilian Symposium on Multimedia and the Web","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3539637.3557000","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Multilabel text classification can be defined as a mapping function that categorizes a natural-language text into one or more labels defined by the scope of a problem. In this work, we propose a stacked-classifier architecture for multilabel text classification. The proposed models use an LSTM recurrent neural network in the first stage of the stack and different multilabel classifiers in the second stage. We evaluated our proposal on two datasets that are well known in the literature (TMDB and EUR-LEX Subject Matters), and the results show that the proposed stack consistently outperforms the baselines.
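
The abstract does not detail how the two stages are connected, but a common way to stack an LSTM with a second-stage multilabel classifier is to feed the LSTM's per-label probability outputs as features to the second model. The sketch below illustrates that general idea only; the dataset, vocabulary size, hyperparameters, and choice of second-stage classifier (a per-label logistic regression) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a two-stage stacked multilabel classifier:
# stage 1 is an LSTM producing per-label probabilities, stage 2 is a
# scikit-learn multilabel classifier trained on those probabilities.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

NUM_LABELS = 5      # illustrative; set to the label space of the task
VOCAB_SIZE = 10_000
MAX_LEN = 200

# Stage 1: LSTM mapping token-id sequences to per-label probabilities.
lstm = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(NUM_LABELS, activation="sigmoid"),
])
lstm.compile(optimizer="adam", loss="binary_crossentropy")

# Toy data standing in for tokenized texts and binary label vectors.
X_train = np.random.randint(0, VOCAB_SIZE, size=(500, MAX_LEN))
y_train = np.random.randint(0, 2, size=(500, NUM_LABELS))
lstm.fit(X_train, y_train, epochs=2, batch_size=32, verbose=0)

# Stage 2: the LSTM's probability outputs become input features for a
# second multilabel classifier (one logistic regression per label).
stage1_features = lstm.predict(X_train, verbose=0)
stage2 = MultiOutputClassifier(LogisticRegression(max_iter=1000))
stage2.fit(stage1_features, y_train)

# Inference: new texts pass through both stages of the stack.
X_new = np.random.randint(0, VOCAB_SIZE, size=(10, MAX_LEN))
predictions = stage2.predict(lstm.predict(X_new, verbose=0))
```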