Improve symbolic music pre-training model using MusicTransformer structure

Yingfeng Fu, Y. Tanimura, H. Nakada
{"title":"利用MusicTransformer结构改进符号音乐预训练模型","authors":"Yingfeng Fu, Y. Tanimura, H. Nakada","doi":"10.1109/IMCOM56909.2023.10035617","DOIUrl":null,"url":null,"abstract":"Pre-training driven by vast data has shown great power in natural language understanding. The idea has also been applied to symbolic music. However, many existing works using pre-training for symbolic music are not general enough to tackle all the tasks in musical information retrieval, and there is still space to improve the model structure. To make up for the insufficiency and compare it with the existing works, we employed a BERT-like masked language pre-training approach to train a stacked MusicTransformer on MAESTRO dataset. Then we fine-tuned our pre-trained model on several symbolic music understanding tasks. In the work, our contribution is 1)we improved MusicBERT by modifying the model structure. 2)be-sides the existing evaluation downstream tasks, we complemented several downstream tasks, including melody extraction, emotion classification, and composer classification. We pre-trained the modified model and existing works under the same condition. We make a comparison of our pre-trained model with the previous works. The result shows that the modified model is more powerful than the previous models with the same pre-training setting.","PeriodicalId":230213,"journal":{"name":"2023 17th International Conference on Ubiquitous Information Management and Communication (IMCOM)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Improve symbolic music pre-training model using MusicTransformer structure\",\"authors\":\"Yingfeng Fu, Y. Tanimura, H. Nakada\",\"doi\":\"10.1109/IMCOM56909.2023.10035617\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Pre-training driven by vast data has shown great power in natural language understanding. The idea has also been applied to symbolic music. However, many existing works using pre-training for symbolic music are not general enough to tackle all the tasks in musical information retrieval, and there is still space to improve the model structure. To make up for the insufficiency and compare it with the existing works, we employed a BERT-like masked language pre-training approach to train a stacked MusicTransformer on MAESTRO dataset. Then we fine-tuned our pre-trained model on several symbolic music understanding tasks. In the work, our contribution is 1)we improved MusicBERT by modifying the model structure. 2)be-sides the existing evaluation downstream tasks, we complemented several downstream tasks, including melody extraction, emotion classification, and composer classification. We pre-trained the modified model and existing works under the same condition. We make a comparison of our pre-trained model with the previous works. 
The result shows that the modified model is more powerful than the previous models with the same pre-training setting.\",\"PeriodicalId\":230213,\"journal\":{\"name\":\"2023 17th International Conference on Ubiquitous Information Management and Communication (IMCOM)\",\"volume\":\"8 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-01-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 17th International Conference on Ubiquitous Information Management and Communication (IMCOM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IMCOM56909.2023.10035617\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 17th International Conference on Ubiquitous Information Management and Communication (IMCOM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IMCOM56909.2023.10035617","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Pre-training driven by vast amounts of data has shown great power in natural language understanding, and the idea has also been applied to symbolic music. However, many existing works that use pre-training for symbolic music are not general enough to tackle all the tasks in music information retrieval, and there is still room to improve the model structure. To address this gap and compare against existing works, we employed a BERT-like masked language pre-training approach to train a stacked MusicTransformer on the MAESTRO dataset, and then fine-tuned the pre-trained model on several symbolic music understanding tasks. Our contributions are: 1) we improved MusicBERT by modifying the model structure; 2) besides the existing evaluation downstream tasks, we added several downstream tasks, including melody extraction, emotion classification, and composer classification. We pre-trained the modified model and the existing works under the same conditions and compared our pre-trained model with the previous works. The results show that the modified model is more powerful than the previous models under the same pre-training setting.
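
To make the BERT-like masked pre-training step described above more concrete, the following is a minimal sketch in PyTorch. It is not the authors' implementation: the vocabulary size, the special-token ids, the plain `nn.TransformerEncoder` standing in for the stacked MusicTransformer (which uses relative positional attention), and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of BERT-style masked pre-training on symbolic-music tokens.
# NOT the paper's implementation: vocabulary size, special-token ids, the plain
# TransformerEncoder (a stand-in for the stacked MusicTransformer, which uses
# relative positional attention), and all hyperparameters are illustrative.
import torch
import torch.nn as nn

VOCAB_SIZE = 512   # assumed size of the symbolic-music token vocabulary
PAD_ID = 0         # assumed padding-token id
MASK_ID = 1        # assumed [MASK]-token id
D_MODEL = 256

class MaskedMusicModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, D_MODEL, padding_idx=PAD_ID)
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.lm_head = nn.Linear(D_MODEL, VOCAB_SIZE)

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))
        return self.lm_head(h)  # per-position logits over the token vocabulary

def mask_tokens(token_ids, mask_prob=0.15):
    """Corrupt a random subset of tokens with [MASK]; labels are -100 elsewhere
    so cross-entropy only scores the masked positions."""
    labels = token_ids.clone()
    masked = (torch.rand(token_ids.shape) < mask_prob) & (token_ids != PAD_ID)
    labels[~masked] = -100
    corrupted = token_ids.clone()
    corrupted[masked] = MASK_ID
    return corrupted, labels

# One illustrative pre-training step on a random stand-in for tokenized MIDI.
model = MaskedMusicModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
batch = torch.randint(2, VOCAB_SIZE, (8, 128))      # (batch, sequence length)
inputs, labels = mask_tokens(batch)
logits = model(inputs)
loss = nn.functional.cross_entropy(
    logits.view(-1, VOCAB_SIZE), labels.view(-1), ignore_index=-100
)
loss.backward()
optimizer.step()
```

After pre-training in this masked-prediction fashion, the encoder would be fine-tuned with task-specific heads for the downstream tasks the paper lists, such as melody extraction, emotion classification, and composer classification.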