{"title":"Augment, Drop & Swap: Improving Diversity in LLM Captions for Efficient Music-Text Representation Learning","authors":"Ilaria Manco, Justin Salamon, Oriol Nieto","doi":"arxiv-2409.11498","DOIUrl":null,"url":null,"abstract":"Audio-text contrastive models have become a powerful approach in music\nrepresentation learning. Despite their empirical success, however, little is\nknown about the influence of key design choices on the quality of music-text\nrepresentations learnt through this framework. In this work, we expose these\ndesign choices within the constraints of limited data and computation budgets,\nand establish a more solid understanding of their impact grounded in empirical\nobservations along three axes: the choice of base encoders, the level of\ncuration in training data, and the use of text augmentation. We find that data\ncuration is the single most important factor for music-text contrastive\ntraining in resource-constrained scenarios. Motivated by this insight, we\nintroduce two novel techniques, Augmented View Dropout and TextSwap, which\nincrease the diversity and descriptiveness of text inputs seen in training.\nThrough our experiments we demonstrate that these are effective at boosting\nperformance across different pre-training regimes, model architectures, and\ndownstream data distributions, without incurring higher computational costs or\nrequiring additional training data.","PeriodicalId":501284,"journal":{"name":"arXiv - EE - Audio and Speech Processing","volume":"44 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Audio and Speech Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11498","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Audio-text contrastive models have become a powerful approach in music representation learning. Despite their empirical success, however, little is known about the influence of key design choices on the quality of music-text representations learnt through this framework. In this work, we expose these design choices within the constraints of limited data and computation budgets, and establish a more solid understanding of their impact, grounded in empirical observations along three axes: the choice of base encoders, the level of curation in training data, and the use of text augmentation. We find that data curation is the single most important factor for music-text contrastive training in resource-constrained scenarios. Motivated by this insight, we introduce two novel techniques, Augmented View Dropout and TextSwap, which increase the diversity and descriptiveness of the text inputs seen during training. Through our experiments, we demonstrate that both techniques boost performance across different pre-training regimes, model architectures, and downstream data distributions, without incurring higher computational costs or requiring additional training data.
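For readers unfamiliar with the framework the abstract refers to, audio-text contrastive training typically uses a symmetric, CLIP-style InfoNCE objective over paired audio and caption embeddings. The sketch below illustrates that standard objective only; the embedding dimension, batch size, and temperature are placeholder values, not the paper's configuration.

```python
# Minimal sketch of the symmetric InfoNCE objective commonly used for
# audio-text contrastive training. Dimensions and temperature are
# illustrative placeholders.
import torch
import torch.nn.functional as F

def contrastive_loss(audio_emb: torch.Tensor,
                     text_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of paired (audio, text) embeddings."""
    # L2-normalise so the dot product is a cosine similarity.
    audio_emb = F.normalize(audio_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarity matrix: logits[i, j] = sim(audio_i, text_j).
    logits = audio_emb @ text_emb.t() / temperature

    # Matched (audio, text) pairs sit on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Average the audio-to-text and text-to-audio cross-entropy terms.
    loss_a2t = F.cross_entropy(logits, targets)
    loss_t2a = F.cross_entropy(logits.t(), targets)
    return (loss_a2t + loss_t2a) / 2

# Usage with random stand-in embeddings (batch of 8, dim 512):
loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```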
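The abstract names Augmented View Dropout and TextSwap but does not specify their mechanics, so the following is a speculative sketch inferred only from the technique names and the stated goal of diversifying training captions. Every function name, the multi-view caption layout, and the descriptor vocabulary are hypothetical and should not be read as the authors' actual implementation.

```python
# Speculative sketch: plausible forms of "Augmented View Dropout" and
# "TextSwap" guessed from their names; not the paper's method.
import random

def augmented_view_dropout(views: list[str], drop_prob: float = 0.5) -> str:
    """Randomly drop some LLM-generated caption variants ("views") of a
    track, then sample one survivor, so no single caption dominates."""
    kept = [v for v in views if random.random() > drop_prob]
    return random.choice(kept or views)  # fall back if all views dropped

def text_swap(caption: str, vocab: dict[str, list[str]]) -> str:
    """Swap known descriptors for same-category alternatives (e.g. one
    mood word for another) to diversify the text input."""
    tokens = caption.split()
    for i, tok in enumerate(tokens):
        for category, words in vocab.items():
            if tok in words:
                tokens[i] = random.choice(words)
    return " ".join(tokens)

# Hypothetical usage on toy captions:
vocab = {"genre": ["jazzy", "bluesy", "funky"],
         "mood": ["mellow", "upbeat", "dreamy"]}
views = ["a mellow jazzy piano piece", "slow piano, jazzy and calm"]
caption = text_swap(augmented_view_dropout(views), vocab)
```

Both operations act purely on the text side of each (audio, caption) pair, which is consistent with the abstract's claim that the techniques add no computational cost or extra training data.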