{"title":"探索离散声学单元标记化的益处","authors":"Avihu Dekel, Raul Fernandez","doi":"arxiv-2406.05547","DOIUrl":null,"url":null,"abstract":"Tokenization algorithms that merge the units of a base vocabulary into\nlarger, variable-rate units have become standard in natural language processing\ntasks. This idea, however, has been mostly overlooked when the vocabulary\nconsists of phonemes or Discrete Acoustic Units (DAUs), an audio-based\nrepresentation that is playing an increasingly important role due to the\nsuccess of discrete language-modeling techniques. In this paper, we showcase\nthe advantages of tokenization of phonetic units and of DAUs on three\nprediction tasks: grapheme-to-phoneme, grapheme-to-DAUs, and unsupervised\nspeech generation using DAU language modeling. We demonstrate that tokenization\nyields significant improvements in terms of performance, as well as training\nand inference speed, across all three tasks. We also offer theoretical insights\nto provide some explanation for the superior performance observed.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Exploring the Benefits of Tokenization of Discrete Acoustic Units\",\"authors\":\"Avihu Dekel, Raul Fernandez\",\"doi\":\"arxiv-2406.05547\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Tokenization algorithms that merge the units of a base vocabulary into\\nlarger, variable-rate units have become standard in natural language processing\\ntasks. This idea, however, has been mostly overlooked when the vocabulary\\nconsists of phonemes or Discrete Acoustic Units (DAUs), an audio-based\\nrepresentation that is playing an increasingly important role due to the\\nsuccess of discrete language-modeling techniques. 
In this paper, we showcase\\nthe advantages of tokenization of phonetic units and of DAUs on three\\nprediction tasks: grapheme-to-phoneme, grapheme-to-DAUs, and unsupervised\\nspeech generation using DAU language modeling. We demonstrate that tokenization\\nyields significant improvements in terms of performance, as well as training\\nand inference speed, across all three tasks. We also offer theoretical insights\\nto provide some explanation for the superior performance observed.\",\"PeriodicalId\":501178,\"journal\":{\"name\":\"arXiv - CS - Sound\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-06-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Sound\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2406.05547\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Sound","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2406.05547","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
摘要
在自然语言处理任务中,将基础词汇单位合并为更大的、速率可变的单位的标记化算法已成为标准。然而,当词汇包含音素或离散声学单位(DAUs)时,这一想法大多被忽视了,由于离散语言建模技术的成功,基于音频的表述正发挥着越来越重要的作用。在本文中,我们展示了语音单位标记化和 DAUs 在三项预测任务中的优势:词素到词素、词素到 DAUs 以及使用 DAU 语言建模的无监督语音生成。我们证明,在所有三个任务中,标记化在性能、训练和推理速度方面都有显著提高。我们还提出了一些理论见解,为所观察到的卓越性能提供了一些解释。
Exploring the Benefits of Tokenization of Discrete Acoustic Units
Tokenization algorithms that merge the units of a base vocabulary into
larger, variable-rate units have become standard in natural language processing
tasks. This idea, however, has been mostly overlooked when the vocabulary
consists of phonemes or Discrete Acoustic Units (DAUs), an audio-based
representation that is playing an increasingly important role due to the
success of discrete language-modeling techniques. In this paper, we showcase
the advantages of tokenization of phonetic units and of DAUs on three
prediction tasks: grapheme-to-phoneme, grapheme-to-DAUs, and unsupervised
speech generation using DAU language modeling. We demonstrate that tokenization
yields significant improvements in terms of performance, as well as training
and inference speed, across all three tasks. We also offer theoretical insights
to provide some explanation for the superior performance observed.
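The merging the abstract describes, collapsing frequent adjacent base units into larger variable-rate tokens, is the idea behind byte-pair encoding (BPE). The paper does not commit to a specific merge algorithm here, so the following is a minimal, illustrative BPE-style sketch over sequences of DAU IDs; all names and the toy corpus are assumptions, not the authors' implementation.

```python
# Illustrative BPE-style merging over discrete acoustic unit (DAU) ID
# sequences. This is a sketch of the general technique, not the paper's
# exact method.
from collections import Counter

def most_frequent_pair(seqs):
    """Count adjacent ID pairs across all sequences; return the most common."""
    counts = Counter()
    for seq in seqs:
        counts.update(zip(seq, seq[1:]))
    return counts.most_common(1)[0][0] if counts else None

def merge_pair(seq, pair, new_id):
    """Replace every non-overlapping occurrence of `pair` with `new_id`."""
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out

def learn_merges(seqs, base_vocab_size, num_merges):
    """Greedily learn merge rules; each merged pair gets a fresh token ID."""
    merges, next_id = {}, base_vocab_size
    for _ in range(num_merges):
        pair = most_frequent_pair(seqs)
        if pair is None:
            break
        merges[pair] = next_id
        seqs = [merge_pair(s, pair, next_id) for s in seqs]
        next_id += 1
    return merges, seqs

# Toy corpus of DAU IDs (e.g. cluster indices from a self-supervised
# speech encoder), with a hypothetical base vocabulary of 100 units.
corpus = [[3, 7, 7, 2, 3, 7], [3, 7, 2, 2, 3, 7, 7, 2]]
merges, tokenized = learn_merges(corpus, base_vocab_size=100, num_merges=2)
```

Each merge shortens the sequences the model must process, which is one plausible mechanism behind the training- and inference-speed gains the abstract reports.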