Yifei Xin, Zhihong Zhu, Xuxin Cheng, Xusheng Yang, Yuexian Zou
{"title":"利用基于变换器的分层对齐和分离式跨模态表示进行音频文本检索","authors":"Yifei Xin, Zhihong Zhu, Xuxin Cheng, Xusheng Yang, Yuexian Zou","doi":"arxiv-2409.09256","DOIUrl":null,"url":null,"abstract":"Most existing audio-text retrieval (ATR) approaches typically rely on a\nsingle-level interaction to associate audio and text, limiting their ability to\nalign different modalities and leading to suboptimal matches. In this work, we\npresent a novel ATR framework that leverages two-stream Transformers in\nconjunction with a Hierarchical Alignment (THA) module to identify multi-level\ncorrespondences of different Transformer blocks between audio and text.\nMoreover, current ATR methods mainly focus on learning a global-level\nrepresentation, missing out on intricate details to capture audio occurrences\nthat correspond to textual semantics. To bridge this gap, we introduce a\nDisentangled Cross-modal Representation (DCR) approach that disentangles\nhigh-dimensional features into compact latent factors to grasp fine-grained\naudio-text semantic correlations. Additionally, we develop a confidence-aware\n(CA) module to estimate the confidence of each latent factor pair and\nadaptively aggregate cross-modal latent factors to achieve local semantic\nalignment. Experiments show that our THA effectively boosts ATR performance,\nwith the DCR approach further contributing to consistent performance gains.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":"20 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Audio-text Retrieval with Transformer-based Hierarchical Alignment and Disentangled Cross-modal Representation\",\"authors\":\"Yifei Xin, Zhihong Zhu, Xuxin Cheng, Xusheng Yang, Yuexian Zou\",\"doi\":\"arxiv-2409.09256\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Most existing audio-text retrieval (ATR) approaches typically rely on a\\nsingle-level interaction to associate audio and text, limiting their ability to\\nalign different modalities and leading to suboptimal matches. In this work, we\\npresent a novel ATR framework that leverages two-stream Transformers in\\nconjunction with a Hierarchical Alignment (THA) module to identify multi-level\\ncorrespondences of different Transformer blocks between audio and text.\\nMoreover, current ATR methods mainly focus on learning a global-level\\nrepresentation, missing out on intricate details to capture audio occurrences\\nthat correspond to textual semantics. To bridge this gap, we introduce a\\nDisentangled Cross-modal Representation (DCR) approach that disentangles\\nhigh-dimensional features into compact latent factors to grasp fine-grained\\naudio-text semantic correlations. Additionally, we develop a confidence-aware\\n(CA) module to estimate the confidence of each latent factor pair and\\nadaptively aggregate cross-modal latent factors to achieve local semantic\\nalignment. 
Experiments show that our THA effectively boosts ATR performance,\\nwith the DCR approach further contributing to consistent performance gains.\",\"PeriodicalId\":501178,\"journal\":{\"name\":\"arXiv - CS - Sound\",\"volume\":\"20 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Sound\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.09256\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Sound","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.09256","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Audio-text Retrieval with Transformer-based Hierarchical Alignment and Disentangled Cross-modal Representation
Most existing audio-text retrieval (ATR) approaches rely on a single-level interaction to associate audio and text, which limits their ability to align the two modalities and leads to suboptimal matches. In this work, we present a novel ATR framework that leverages two-stream Transformers in conjunction with a Hierarchical Alignment (THA) module to identify multi-level correspondences between audio and text across different Transformer blocks.
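To make the hierarchical alignment idea concrete, below is a minimal PyTorch sketch of aligning audio and text at the outputs of several Transformer blocks. It is an illustration under our own assumptions, not the authors' implementation: the class name `HierarchicalAlignment`, the per-level projections, and the learnable level weights are all hypothetical choices.

```python
# Minimal sketch (assumed design, not the paper's code): similarities are
# computed at several Transformer block outputs and combined with learnable
# per-level weights to form a multi-level alignment score.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalAlignment(nn.Module):
    def __init__(self, block_dims, embed_dim=512):
        super().__init__()
        # One projection per selected block so features from different depths
        # are compared in a shared embedding space (hypothetical choice).
        self.audio_proj = nn.ModuleList(nn.Linear(d, embed_dim) for d in block_dims)
        self.text_proj = nn.ModuleList(nn.Linear(d, embed_dim) for d in block_dims)
        self.level_weights = nn.Parameter(torch.ones(len(block_dims)))

    def forward(self, audio_feats, text_feats):
        # audio_feats / text_feats: lists of pooled block outputs,
        # one tensor of shape (batch, block_dim) per selected block.
        sims = []
        for a, t, pa, pt in zip(audio_feats, text_feats, self.audio_proj, self.text_proj):
            a_emb = F.normalize(pa(a), dim=-1)
            t_emb = F.normalize(pt(t), dim=-1)
            sims.append(a_emb @ t_emb.t())  # (batch_audio, batch_text) cosine similarities
        weights = torch.softmax(self.level_weights, dim=0)
        # Weighted sum of per-level similarity matrices.
        return sum(w * s for w, s in zip(weights, sims))
```

For example, with `block_dims=[768, 768, 768]` the module would compare pooled features taken from three blocks of each stream and fuse the resulting similarity matrices.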
Moreover, current ATR methods mainly focus on learning a global-level representation and overlook the fine-grained details needed to capture audio events that correspond to textual semantics. To bridge this gap, we introduce a Disentangled Cross-modal Representation (DCR) approach that disentangles high-dimensional features into compact latent factors to grasp fine-grained audio-text semantic correlations.
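One possible reading of the disentangling step is sketched below under our own assumptions; the head count, factor dimension, and the name `LatentFactorDisentangler` are illustrative, not taken from the paper. Independent projection heads map a high-dimensional global embedding into a small set of compact, normalized latent factors.

```python
# Minimal sketch (assumption): project a high-dimensional embedding into
# several compact latent factors via independent linear heads.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentFactorDisentangler(nn.Module):
    def __init__(self, in_dim=512, num_factors=8, factor_dim=64):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(in_dim, factor_dim) for _ in range(num_factors))

    def forward(self, x):
        # x: (batch, in_dim) global audio or text embedding.
        factors = [F.normalize(head(x), dim=-1) for head in self.heads]
        return torch.stack(factors, dim=1)  # (batch, num_factors, factor_dim)
```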
Additionally, we develop a confidence-aware (CA) module to estimate the confidence of each latent factor pair and adaptively aggregate cross-modal latent factors to achieve local semantic alignment.
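One way the confidence-aware aggregation could look is sketched below; the small scoring MLP and the name `ConfidenceAwareAggregation` are assumptions for illustration, not the paper's definition. Each audio-text factor pair is scored, and the per-factor similarities are combined with those scores into a local alignment score.

```python
# Minimal sketch (assumed design): score each latent factor pair with a small
# MLP and use the scores to weight per-factor cosine similarities.
import torch
import torch.nn as nn

class ConfidenceAwareAggregation(nn.Module):
    def __init__(self, factor_dim=64):
        super().__init__()
        # Small MLP that scores how reliable each audio-text factor pair is.
        self.scorer = nn.Sequential(
            nn.Linear(2 * factor_dim, factor_dim),
            nn.ReLU(),
            nn.Linear(factor_dim, 1),
        )

    def forward(self, audio_factors, text_factors):
        # Both inputs: (batch, num_factors, factor_dim), paired factor-by-factor.
        pairs = torch.cat([audio_factors, text_factors], dim=-1)
        conf = torch.softmax(self.scorer(pairs).squeeze(-1), dim=-1)      # (batch, num_factors)
        sims = torch.cosine_similarity(audio_factors, text_factors, dim=-1)  # (batch, num_factors)
        # Confidence-weighted sum of per-factor similarities -> local alignment score.
        return (conf * sims).sum(dim=-1)
```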
Experiments show that our THA effectively boosts ATR performance, with the DCR approach further contributing consistent performance gains.