Generating Attractive Advertisement Text Campaigns Using Deep Neural Networks

{"title":"Generating Attractive Advertisement Text Campaigns Using Deep Neural Networks","authors":"","doi":"10.33976/jert.9.2/2022/2","DOIUrl":null,"url":null,"abstract":"Text generation task has drawn an increasing attention in the recent years. Recurrent Neural Networks (RNN) achieved great results in this task. There are several parameters and factors that may affect the performance of the recurrent neural networks, that is why text generation is a challenging task, and requires a lot of tuning. This study investigates the impact of three factors that affect the quality of generated text: 1) data source and domain, 2) RNN architecture, 3) named Entities normalization. We conduct several experiments using different RNN architectures (LSTM and GRU), and different datasets (Hulu and booking). Evaluating generated texts is a challenging task. There is no perfect metric judge the quality and the correctness of the generated texts. We use different evaluation metrics to evaluate the performance of the generation models. These metrics include the training loss, the perplexity, the readability, and the relevance of the generated texts. Most of the related works do not consider all these evaluation metrics to evaluate text generation. 
The results suggest that GRU outperforms LSTM network, and models trained on booking set is better than the ones that trained on Hulu dataset.","PeriodicalId":14123,"journal":{"name":"International journal of engineering research and technology","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International journal of engineering research and technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.33976/jert.9.2/2022/2","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The text generation task has drawn increasing attention in recent years, and Recurrent Neural Networks (RNNs) have achieved strong results on it. Several parameters and factors may affect the performance of a recurrent neural network, which is why text generation is a challenging task that requires extensive tuning. This study investigates the impact of three factors on the quality of generated text: 1) the data source and domain, 2) the RNN architecture, and 3) named entity normalization. We conduct several experiments using different RNN architectures (LSTM and GRU) and different datasets (Hulu and Booking). Evaluating generated text is itself challenging: there is no single perfect metric for judging the quality and correctness of generated text. We therefore use several evaluation metrics to assess the generation models, including the training loss, perplexity, readability, and the relevance of the generated texts. Most related works do not consider all of these metrics when evaluating text generation. The results suggest that GRU outperforms the LSTM network, and that models trained on the Booking dataset perform better than those trained on the Hulu dataset.
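Of the metrics listed above, perplexity is the one with a direct mathematical relationship to the training loss: for a language model trained with token-level cross-entropy, perplexity is simply the exponential of the average per-token loss. As a minimal illustration (not code from the paper, whose implementation details are not given in the abstract):

```python
import math

def perplexity(avg_cross_entropy_loss: float) -> float:
    """Perplexity = exp(average per-token cross-entropy loss in nats).

    A lower loss means the model assigns higher probability to the
    reference tokens, which yields a lower (better) perplexity.
    """
    return math.exp(avg_cross_entropy_loss)

# A model with an average loss of 2.0 nats/token has perplexity e^2.
print(round(perplexity(2.0), 3))  # → 7.389
# A perfect model (loss 0) has the minimum possible perplexity of 1.
print(perplexity(0.0))  # → 1.0
```

This is why papers often report either loss or perplexity interchangeably: one determines the other, but perplexity is easier to interpret as "the effective number of equally likely next tokens the model is choosing between."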