Synthesizing Soundscapes: Leveraging Text-to-Audio Models for Environmental Sound Classification
Francesca Ronchini, Luca Comanducci, Fabio Antonacci
arXiv:2403.17864 (arXiv - CS - Sound, published 2024-03-26)
In the past few years, text-to-audio models have emerged as a significant
advancement in automatic audio generation. Although they represent impressive
technological progress, the effectiveness of their use in the development of
audio applications remains uncertain. This paper investigates these
aspects, focusing specifically on the task of environmental sound
classification. This study analyzes the performance of two different environmental
classification systems when data generated from text-to-audio models is used
for training. Two cases are considered: a) when the training dataset is
augmented by data coming from two different text-to-audio models; and b) when
the training dataset consists solely of synthetically generated audio. In both
cases, the performance of the classification task is tested on real data.
Results indicate that text-to-audio models are effective for dataset
augmentation, whereas model performance drops when relying solely on
generated audio.
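The two training regimes compared in the abstract can be sketched with toy data. Everything below is an illustrative assumption, not the paper's actual systems: the feature vectors, class count, the simulated domain shift of synthetic clips, and the nearest-centroid classifier are all placeholders standing in for real audio embeddings and the paper's classification models.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CLASSES = 3  # hypothetical sound classes, e.g. rain / traffic / birdsong
DIM = 16       # placeholder feature dimension (stand-in for audio embeddings)

def make_split(n_per_class, shift=0.0):
    """Generate toy per-class feature vectors; `shift` crudely models the
    domain gap between real recordings and text-to-audio-generated clips."""
    X, y = [], []
    for c in range(N_CLASSES):
        centre = np.zeros(DIM)
        centre[c] = 3.0 + shift  # class-dependent direction
        X.append(rng.normal(centre, 1.0, size=(n_per_class, DIM)))
        y.append(np.full(n_per_class, c))
    return np.vstack(X), np.concatenate(y)

# "Real" recordings vs. clips sampled from a text-to-audio model:
X_real, y_real = make_split(50)
X_synth, y_synth = make_split(200, shift=1.5)  # distribution-shifted
X_test, y_test = make_split(100)               # evaluation is on real data

def nearest_centroid_acc(X_tr, y_tr, X_te, y_te):
    """Fit a nearest-centroid classifier and return test accuracy."""
    centroids = np.stack([X_tr[y_tr == c].mean(axis=0)
                          for c in range(N_CLASSES)])
    dists = np.linalg.norm(X_te[:, None, :] - centroids[None], axis=-1)
    return float((dists.argmin(axis=1) == y_te).mean())

# a) augmentation: real training data extended with synthetic clips
acc_aug = nearest_centroid_acc(np.vstack([X_real, X_synth]),
                               np.concatenate([y_real, y_synth]),
                               X_test, y_test)
# b) synthetic-only training data
acc_synth = nearest_centroid_acc(X_synth, y_synth, X_test, y_test)

print(f"augmented: {acc_aug:.2f}  synthetic-only: {acc_synth:.2f}")
```

The point of the sketch is the experimental protocol, not the numbers: both configurations train on (partly or entirely) synthetic data but are always evaluated on held-out "real" samples, mirroring the two cases the abstract describes.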