Deep Architectures for Abstractive Text Summarization in Multiple Languages

Amr M. Zaki, M. Khalil, Hazem M. Abbas
{"title":"多语言抽象文本摘要的深度体系结构","authors":"Amr M. Zaki, M. Khalil, Hazem M. Abbas","doi":"10.1109/ICCES48960.2019.9068171","DOIUrl":null,"url":null,"abstract":"Abstractive text summarization is the task of generating a novel summary given an article, not by merely extracting and selecting text to produce a summary, but by actually creating and understating the given text to produce a summary. LSTM seq2seq encoder-decoder with attention models have proved successful in this task, but they suffer from some problems. In this work, we would go through multiple models to try and solve these problems, beginning with simple seq2seq with attention models to going to Pointer-Generator, to using a curriculum learning approach called Scheduled-Sampling, till we reach the new approaches of combining reinforcement learning with seq2seq. We have applied these models on multiple datasets for multiple languages, English and Arabic. We have also introduced a new novel method of working with agglutinative languages, it is a preprocessing technique that is applied to the dataset which increases the relevancy of the vocabulary, which effectively increases the efficiency of the text summarization without modifying the models, we call this technique advanced cleaning, we have applied it to the Arabic dataset, and it can then be applied to any other agglutinative language. We have built these models in Jupiter notebooks to run seamlessly on Google colaboratory.11https://medium.com/@theamrzaki22https://github.com/theamrzaki/text_summurization_abstractive_methods","PeriodicalId":136643,"journal":{"name":"2019 14th International Conference on Computer Engineering and Systems (ICCES)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"Deep Architectures for Abstractive Text Summarization in Multiple Languages\",\"authors\":\"Amr M. Zaki, M. Khalil, Hazem M. Abbas\",\"doi\":\"10.1109/ICCES48960.2019.9068171\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstractive text summarization is the task of generating a novel summary given an article, not by merely extracting and selecting text to produce a summary, but by actually creating and understating the given text to produce a summary. LSTM seq2seq encoder-decoder with attention models have proved successful in this task, but they suffer from some problems. In this work, we would go through multiple models to try and solve these problems, beginning with simple seq2seq with attention models to going to Pointer-Generator, to using a curriculum learning approach called Scheduled-Sampling, till we reach the new approaches of combining reinforcement learning with seq2seq. We have applied these models on multiple datasets for multiple languages, English and Arabic. We have also introduced a new novel method of working with agglutinative languages, it is a preprocessing technique that is applied to the dataset which increases the relevancy of the vocabulary, which effectively increases the efficiency of the text summarization without modifying the models, we call this technique advanced cleaning, we have applied it to the Arabic dataset, and it can then be applied to any other agglutinative language. 
We have built these models in Jupiter notebooks to run seamlessly on Google colaboratory.11https://medium.com/@theamrzaki22https://github.com/theamrzaki/text_summurization_abstractive_methods\",\"PeriodicalId\":136643,\"journal\":{\"name\":\"2019 14th International Conference on Computer Engineering and Systems (ICCES)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 14th International Conference on Computer Engineering and Systems (ICCES)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCES48960.2019.9068171\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 14th International Conference on Computer Engineering and Systems (ICCES)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCES48960.2019.9068171","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 7

Abstract

Abstractive text summarization is the task of generating a novel summary for a given article: not merely extracting and selecting text to produce the summary, but actually understanding the given text and creating one. LSTM seq2seq encoder-decoder models with attention have proved successful at this task, but they suffer from several problems. In this work, we go through multiple models that attempt to solve these problems, beginning with simple seq2seq with attention, moving to the Pointer-Generator, then to a curriculum-learning approach called Scheduled Sampling, and finally to new approaches that combine reinforcement learning with seq2seq. We have applied these models to multiple datasets in multiple languages, English and Arabic. We also introduce a novel method for working with agglutinative languages: a preprocessing technique, applied to the dataset, that increases the relevancy of the vocabulary and thereby improves the quality of the summarization without modifying the models. We call this technique advanced cleaning; we have applied it to the Arabic dataset, and it can then be applied to any other agglutinative language. We have built these models in Jupyter notebooks that run seamlessly on Google Colaboratory.[1][2]

[1] https://medium.com/@theamrzaki
[2] https://github.com/theamrzaki/text_summurization_abstractive_methods
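Of the approaches the abstract names, Scheduled Sampling has a particularly compact core. Below is a minimal sketch assuming the inverse-sigmoid decay schedule from Bengio et al. (2015); the decay constant `k` and all function names are illustrative assumptions, not taken from the paper's notebooks.

```python
import math
import random

# Illustrative sketch of Scheduled Sampling (Bengio et al., 2015), the
# curriculum-learning approach named in the abstract. The decay constant k
# and these names are assumptions, not the paper's actual code.

def teacher_forcing_probability(step: int, k: float = 2000.0) -> float:
    """Inverse-sigmoid decay: starts near 1 (always feed the gold token)
    and falls toward 0 as training progresses."""
    return k / (k + math.exp(step / k))

def next_decoder_input(step: int, gold_token: int, predicted_token: int) -> int:
    """At each decoding step, flip a coin: feed the ground-truth token
    (teacher forcing) or the model's own previous prediction."""
    if random.random() < teacher_forcing_probability(step):
        return gold_token
    return predicted_token
```

Feeding the model its own predictions with growing probability narrows the gap between training (gold history) and inference (predicted history), which is the exposure-bias problem this curriculum addresses.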
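The Pointer-Generator (See et al., 2017) likewise reduces to a single mixing step between generating from a fixed vocabulary and copying source words through the attention weights. A minimal sketch, assuming `p_gen` has already been produced by the network's learned sigmoid; all names here are illustrative:

```python
import numpy as np

def final_distribution(p_gen: float,
                       vocab_dist: np.ndarray,  # P_vocab over the fixed vocabulary
                       attention: np.ndarray,   # attention weights over source tokens
                       src_ids: list,           # extended-vocab id of each source token
                       extended_size: int) -> np.ndarray:
    """P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum of attention on w.
    Source words outside the fixed vocabulary get ids past len(vocab_dist),
    which is how the model can copy out-of-vocabulary words."""
    dist = np.zeros(extended_size)
    dist[:len(vocab_dist)] = p_gen * vocab_dist
    for a_i, word_id in zip(attention, src_ids):
        dist[word_id] += (1.0 - p_gen) * a_i  # copy probability mass
    return dist
```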
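The abstract does not spell out the rules of advanced cleaning itself. As a purely hypothetical illustration of the general idea (merging agglutinated Arabic surface forms so that fewer, more relevant tokens make up the vocabulary), one could strip common attached prefixes before building the vocabulary. The prefix list and length threshold below are invented for illustration and are not the paper's actual rules:

```python
# Hypothetical illustration only: the actual advanced-cleaning rules are in
# the paper, not this abstract. Arabic attaches conjunctions, prepositions,
# and the definite article to words, so "والكتاب" ("and the book") and
# "كتاب" ("book") would otherwise be separate vocabulary entries.

ATTACHED_PREFIXES = ("وال", "فال", "بال", "كال", "ال", "و", "ف", "ب", "ك", "ل")

def strip_prefix(token: str, min_stem_len: int = 3) -> str:
    """Remove one common attached prefix if a plausible stem remains.
    Longer prefixes are listed first so they match before single letters."""
    for prefix in ATTACHED_PREFIXES:
        if token.startswith(prefix) and len(token) - len(prefix) >= min_stem_len:
            return token[len(prefix):]
    return token

def advanced_clean(text: str) -> str:
    """Apply the prefix normalization token by token."""
    return " ".join(strip_prefix(tok) for tok in text.split())
```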