{"title":"Language Modeling Using Part-of-speech and Long Short-Term Memory Networks","authors":"Sanaz Saki Norouzi, A. Akbari, B. Nasersharif","doi":"10.1109/ICCKE48569.2019.8964806","DOIUrl":null,"url":null,"abstract":"In recent years, neural networks have been widely used for language modeling in different tasks of natural language processing. Results show that long short-term memory (LSTM) neural networks are appropriate for language modeling due to their ability to process long sequences. Furthermore, many studies are shown that extra information improve language models (LMs) performance. In this research, we propose parallel structures for incorporating part-of-speech tags into language modeling task using both the unidirectional and bidirectional type of LSTMs. Words and part-of-speech tags are given to the network as parallel inputs. In this way, to concatenate these two paths, two different structures are proposed according to the type of network used in the parallel part. We analyze the efficiency on Penn Treebank (PTB) dataset using perplexity measure. These two proposed structures show improvements in comparison to the baseline models. Not only does the bidirectional LSTM method gain the lowest perplexity, but it also has the lowest training parameters among our proposed methods. 
The perplexity of proposed structures has reduced 1.5% and %13 for unidirectional and bidirectional LSTMs, respectively.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"12 1","pages":"182-187"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCKE48569.2019.8964806","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
In recent years, neural networks have been widely used for language modeling in different natural language processing tasks. Results show that long short-term memory (LSTM) neural networks are well suited to language modeling due to their ability to process long sequences. Furthermore, many studies have shown that extra information improves language model (LM) performance. In this research, we propose parallel structures for incorporating part-of-speech tags into the language modeling task using both unidirectional and bidirectional LSTMs. Words and part-of-speech tags are given to the network as parallel inputs. To merge these two paths, two different structures are proposed according to the type of network used in the parallel part. We evaluate performance on the Penn Treebank (PTB) dataset using the perplexity measure. Both proposed structures show improvements over the baseline models. Not only does the bidirectional LSTM method achieve the lowest perplexity, it also has the fewest trainable parameters among our proposed methods. The perplexity of the proposed structures is reduced by 1.5% and 13% for the unidirectional and bidirectional LSTMs, respectively.
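The parallel-input idea described above — embedding words and their part-of-speech tags separately and joining the two paths before the recurrent layer — can be sketched as follows. This is a minimal illustration with made-up dimensions and random embedding tables, using plain concatenation as the joining step; it does not reproduce the paper's actual LSTM structures or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper's dimensions are not given in the abstract.
VOCAB, N_TAGS, D_WORD, D_TAG = 100, 12, 8, 4

word_emb = rng.normal(size=(VOCAB, D_WORD))  # word embedding table
tag_emb = rng.normal(size=(N_TAGS, D_TAG))   # POS-tag embedding table

def parallel_input(word_ids, tag_ids):
    """Embed each path separately, then concatenate per time step,
    yielding a (seq_len, D_WORD + D_TAG) input for a downstream LSTM."""
    return np.concatenate([word_emb[word_ids], tag_emb[tag_ids]], axis=-1)

# A toy 3-token sequence: word IDs and the corresponding POS-tag IDs.
x = parallel_input(np.array([3, 17, 42]), np.array([1, 5, 2]))
print(x.shape)  # (3, 12)
```

Each time step thus carries both lexical and syntactic information, which is the extra signal the paper credits for the perplexity reduction.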
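For reference, the perplexity measure used in the evaluation is the exponential of the average negative log-likelihood the model assigns to the test tokens. A small sketch, with made-up per-token probabilities (not values from the paper):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(mean negative log-likelihood) over predicted tokens."""
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

# Hypothetical probabilities a language model assigned to four test tokens.
probs = [0.2, 0.1, 0.05, 0.25]
print(round(perplexity(probs), 2))  # 7.95
```

Lower perplexity means the model spreads less probability mass over wrong continuations, which is why it is the standard comparison metric on PTB.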