Combined Objective Function in Deep Learning Model for Abstractive Summarization
Tung Le, Le-Minh Nguyen
Proceedings of the 9th International Symposium on Information and Communication Technology, 2018-12-06
DOI: 10.1145/3287921.3287952 (https://doi.org/10.1145/3287921.3287952)
Citations: 0
Abstract
Abstractive summarization is a text generation task whose popular approaches build on the strengths of Recurrent Neural Networks. To take advantage of Convolutional Neural Networks for text representation, we propose combining these two networks in our encoder so that it captures both global and local features of the input documents. Our model also integrates a reinforcement-learning mechanism with a novel reward function to bring the training objective closer to the evaluation criteria. In experiments on CNN/Daily Mail, our model achieves significant results. In particular, it outperforms previous work on this task in ROUGE-1 and ROUGE-L, with a notable improvement in ROUGE-L (39.09% F1-score).
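For context, the ROUGE-L F1-score cited above measures the longest common subsequence (LCS) overlap between a generated summary and a reference. The paper itself does not include code; the following is a minimal illustrative sketch of the standard sentence-level ROUGE-L F1 computation, with word-level tokenization by whitespace assumed for simplicity.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of token lists a and b,
    computed with standard dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]


def rouge_l_f1(candidate: str, reference: str) -> float:
    """Sentence-level ROUGE-L F1: precision and recall are the LCS length
    normalized by candidate and reference lengths, respectively."""
    cand, ref = candidate.split(), reference.split()
    lcs = lcs_len(cand, ref)
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)
    recall = lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge_l_f1("the cat sat on mat", "the cat is on the mat")` finds the common subsequence "the cat on mat" (length 4), giving precision 4/5 and recall 4/6, hence F1 ≈ 0.727. The official ROUGE toolkit additionally applies stemming and stopword options, which this sketch omits.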