{"title":"Incorporating word embeddings in the hierarchical dirichlet process for query-oriented text summarization","authors":"H. V. Lierde, T. Chow","doi":"10.1109/INDIN.2017.8104916","DOIUrl":null,"url":null,"abstract":"The ever-growing amount of textual data available online creates the need for automatic text summarization tools. Probabilistic topic models are able to infer semantic relationships between sentences which is a key step of extractive summarization methods. However, they strongly rely on word co-occurrence patterns and fail to capture the actual semantic relationships between words such as synonymy, antonymy, etc. We propose a novel algorithm which incorporates pre-trained word embeddings in the probabilistic topic model in order to capture semantic similarities between sentences. These similarities provide the basis for a sentence ranking algorithm for query-oriented summarization. The summary is then produced by extracting highly ranked sentences from the original corpus. Our method is shown to outperform state-of-the-art algorithms on a benchmark dataset.","PeriodicalId":6595,"journal":{"name":"2017 IEEE 15th International Conference on Industrial Informatics (INDIN)","volume":"4 1","pages":"1037-1042"},"PeriodicalIF":0.0000,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE 15th International Conference on Industrial Informatics (INDIN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INDIN.2017.8104916","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
The ever-growing amount of textual data available online creates the need for automatic text summarization tools. Probabilistic topic models are able to infer semantic relationships between sentences, which is a key step in extractive summarization methods. However, they rely heavily on word co-occurrence patterns and fail to capture semantic relationships between words such as synonymy and antonymy. We propose a novel algorithm that incorporates pre-trained word embeddings into the probabilistic topic model in order to capture semantic similarities between sentences. These similarities provide the basis for a sentence ranking algorithm for query-oriented summarization. The summary is then produced by extracting highly ranked sentences from the original corpus. Our method is shown to outperform state-of-the-art algorithms on a benchmark dataset.
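To make the general pipeline concrete, below is a minimal Python sketch of embedding-based, query-oriented sentence ranking. It is only an illustration of the underlying idea of scoring sentences against a query with pre-trained word vectors; it does not implement the paper's hierarchical Dirichlet process model, and the function names, the toy random embeddings, and the use of averaged word vectors with cosine similarity are assumptions made for this example.

```python
import numpy as np


def sentence_vector(sentence, embeddings, dim=50):
    # Average the pre-trained vectors of the words in a sentence; unknown words are skipped.
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    vecs = [embeddings[w] for w in words if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)


def cosine(u, v):
    # Cosine similarity, with a guard against zero vectors.
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0


def rank_sentences(sentences, query, embeddings, dim=50):
    # Score every sentence by its similarity to the query and sort in descending order.
    q = sentence_vector(query, embeddings, dim)
    scored = [(s, cosine(sentence_vector(s, embeddings, dim), q)) for s in sentences]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    # Toy random vectors standing in for real pre-trained embeddings (e.g. word2vec or GloVe).
    rng = np.random.default_rng(0)
    vocab = "topic models summarize text sentences are ranked against the query".split()
    toy_embeddings = {w: rng.normal(size=50) for w in vocab}

    corpus = [
        "Topic models summarize text.",
        "Sentences are ranked against the query.",
    ]
    for sentence, score in rank_sentences(corpus, "rank sentences for this query", toy_embeddings):
        print(f"{score:+.3f}  {sentence}")
```

In the paper's actual method, the sentence similarities come from topic distributions inferred by the embedding-aware hierarchical Dirichlet process rather than from raw averaged vectors, and the top-ranked sentences are then extracted to form the summary.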