Ye Bai, Haonan Chen, Jitong Chen, Zhuo Chen, Yi Deng, Xiaohong Dong, Lamtharn Hantrakul, Weituo Hao, Qingqing Huang, Zhongyi Huang, Dongya Jia, Feihu La, Duc Le, Bochen Li, Chumin Li, Hui Li, Xingxing Li, Shouda Liu, Wei-Tsung Lu, Yiqing Lu, Andrew Shaw, Janne Spijkervet, Yakun Sun, Bo Wang, Ju-Chiang Wang, Yuping Wang, Yuxuan Wang, Ling Xu, Yifeng Yang, Chao Yao, Shuo Zhang, Yang Zhang, Yilin Zhang, Hang Zhao, Ziyi Zhao, Dejian Zhong, Shicen Zhou, Pei Zou
arXiv:2409.09214 · arXiv - CS - Sound · Published 2024-09-13
Seed-Music: A Unified Framework for High Quality and Controlled Music Generation
We introduce Seed-Music, a suite of music generation systems capable of
producing high-quality music with fine-grained style control. Our unified
framework leverages both auto-regressive language modeling and diffusion
approaches to support two key music creation workflows: controlled
music generation and post-production editing. For controlled music
generation, our system enables vocal music generation with performance controls
from multi-modal inputs, including style descriptions, audio references,
musical scores, and voice prompts. For post-production editing, it offers
interactive tools for editing lyrics and vocal melodies directly in the
generated audio. We encourage readers to listen to demo audio examples at
https://team.doubao.com/seed-music.
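The two workflows described in the abstract can be pictured as a minimal interface sketch. Everything below is a hypothetical illustration of the described inputs and operations; the class and method names are assumptions and do not reflect the actual Seed-Music system or any published API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GenerationRequest:
    """Multi-modal conditioning inputs for controlled music generation."""
    style_description: Optional[str] = None  # e.g. "upbeat synth-pop, female vocals"
    audio_reference: Optional[bytes] = None  # reference recording to imitate in style
    musical_score: Optional[str] = None      # symbolic score, e.g. a MusicXML path
    voice_prompt: Optional[bytes] = None     # short clip defining the singer's timbre
    lyrics: str = ""

class SeedMusicClient:
    """Hypothetical client illustrating the two workflows; not a real API."""

    def generate(self, req: GenerationRequest) -> bytes:
        # Controlled generation: the model conditions on whichever
        # multi-modal inputs are provided (others may be omitted).
        provided = [name for name, value in vars(req).items() if value]
        return f"<audio conditioned on {provided}>".encode()

    def edit(self, audio: bytes, new_lyrics: Optional[str] = None,
             new_melody: Optional[str] = None) -> bytes:
        # Post-production editing: change lyrics and/or vocal melody
        # directly in previously generated audio.
        edits = [name for name, value in (("lyrics", new_lyrics),
                                          ("melody", new_melody)) if value]
        return audio + f" | edited: {edits}".encode()

client = SeedMusicClient()
audio = client.generate(GenerationRequest(style_description="lo-fi ballad",
                                          lyrics="la la la"))
revised = client.edit(audio, new_lyrics="updated verse")
```

The sketch only models the data flow: `generate` maps a bundle of optional conditioning signals to audio, and `edit` applies localized changes to that audio rather than regenerating it from scratch.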