{"title":"Evaluation of pretrained language models on music understanding","authors":"Yannis Vasilakis, Rachel Bittner, Johan Pauwels","doi":"arxiv-2409.11449","DOIUrl":null,"url":null,"abstract":"Music-text multimodal systems have enabled new approaches to Music\nInformation Research (MIR) applications such as audio-to-text and text-to-audio\nretrieval, text-based song generation, and music captioning. Despite the\nreported success, little effort has been put into evaluating the musical\nknowledge of Large Language Models (LLM). In this paper, we demonstrate that\nLLMs suffer from 1) prompt sensitivity, 2) inability to model negation (e.g.\n'rock song without guitar'), and 3) sensitivity towards the presence of\nspecific words. We quantified these properties as a triplet-based accuracy,\nevaluating the ability to model the relative similarity of labels in a\nhierarchical ontology. We leveraged the Audioset ontology to generate triplets\nconsisting of an anchor, a positive (relevant) label, and a negative (less\nrelevant) label for the genre and instruments sub-tree. We evaluated the\ntriplet-based musical knowledge for six general-purpose Transformer-based\nmodels. The triplets obtained through this methodology required filtering, as\nsome were difficult to judge and therefore relatively uninformative for\nevaluation purposes. Despite the relatively high accuracy reported,\ninconsistencies are evident in all six models, suggesting that off-the-shelf\nLLMs need adaptation to music before use.","PeriodicalId":501284,"journal":{"name":"arXiv - EE - Audio and Speech Processing","volume":"32 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Audio and Speech Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11449","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Music-text multimodal systems have enabled new approaches to Music Information Research (MIR) applications such as audio-to-text and text-to-audio retrieval, text-based song generation, and music captioning. Despite the reported success, little effort has been put into evaluating the musical knowledge of Large Language Models (LLMs). In this paper, we demonstrate that LLMs suffer from 1) prompt sensitivity, 2) an inability to model negation (e.g. 'rock song without guitar'), and 3) sensitivity to the presence of specific words. We quantified these properties as a triplet-based accuracy that evaluates the ability to model the relative similarity of labels in a hierarchical ontology. We leveraged the AudioSet ontology to generate triplets consisting of an anchor, a positive (relevant) label, and a negative (less relevant) label from the genre and instrument sub-trees. We evaluated the triplet-based musical knowledge of six general-purpose Transformer-based models. The triplets obtained through this methodology required filtering, as some were difficult to judge and therefore relatively uninformative for evaluation purposes. Despite the relatively high accuracy reported, inconsistencies are evident in all six models, suggesting that off-the-shelf LLMs need adaptation to music before use.
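To make the triplet-based accuracy concrete, the following is a minimal sketch under stated assumptions: it uses an off-the-shelf sentence-embedding model and cosine similarity, and the model name (all-MiniLM-L6-v2) and example triplets are illustrative placeholders, not the authors' exact models, prompts, or AudioSet triplets.

# Sketch of triplet-based accuracy over (anchor, positive, negative) label triplets.
# Assumptions: sentence-transformers embeddings and cosine similarity; the model
# and the triplets below are illustrative, not the paper's exact setup.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

# Example triplets in the spirit of the genre and instrument sub-trees:
# the anchor should be judged closer to the positive label than to the negative one.
triplets = [
    ("heavy metal", "rock music", "classical music"),
    ("violin", "cello", "drum kit"),
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder general-purpose model

def triplet_accuracy(triplets):
    """Fraction of triplets where the anchor embeds closer to the positive label."""
    correct = 0
    for anchor, positive, negative in triplets:
        emb = model.encode([anchor, positive, negative])
        sim_pos = cosine_similarity([emb[0]], [emb[1]])[0, 0]
        sim_neg = cosine_similarity([emb[0]], [emb[2]])[0, 0]
        correct += sim_pos > sim_neg
    return correct / len(triplets)

print(f"Triplet accuracy: {triplet_accuracy(triplets):.2f}")

A score of 1.0 would mean the embeddings always rank the more relevant label closer to the anchor; the paper's point is that even when this accuracy is relatively high, rephrasing the labels or adding negation can flip individual judgments.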