LeBenchmark 2.0: A standardized, replicable and enhanced framework for self-supervised representations of French speech

IF 3.1 · CAS Tier 3 (Computer Science) · Q2 Computer Science, Artificial Intelligence · Computer Speech and Language · Pub Date: 2024-02-03 · DOI: 10.1016/j.csl.2024.101622
Titouan Parcollet, Ha Nguyen, Solène Evain, Marcely Zanon Boito, Adrien Pupier, Salima Mdhaffar, Hang Le, Sina Alisamir, Natalia Tomashenko, Marco Dinarelli, Shucong Zhang, Alexandre Allauzen, Maximin Coavoux, Yannick Estève, Mickael Rouvier, Jerôme Goulian, Benjamin Lecouteux, François Portet, Solange Rossato, Fabien Ringeval, Laurent Besacier
Citations: 0

Abstract

Self-supervised learning (SSL) has driven unprecedented improvements in many domains, including computer vision and natural language processing. Speech processing has benefited greatly from SSL, as most current tasks in the field are now approached with pre-trained models. This work introduces LeBenchmark 2.0, an open-source framework for assessing and building SSL-equipped French speech technologies. It includes documented, large-scale and heterogeneous corpora totalling up to 14,000 h of speech, ten pre-trained SSL wav2vec 2.0 models, shared with the community, containing from 26 million to one billion learnable parameters, and an evaluation protocol composed of six downstream tasks to complement existing benchmarks. LeBenchmark 2.0 also offers unique perspectives on pre-trained SSL models for speech, investigating frozen versus fine-tuned downstream models and task-agnostic versus task-specific pre-trained models, as well as discussing the carbon footprint of large-scale model training. Overall, the newly introduced models trained on 14,000 h of French speech outperform multilingual and previous LeBenchmark SSL models across the benchmark, but also required up to four times more energy for pre-training.
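The frozen-versus-fine-tuned distinction discussed in the abstract can be illustrated with a short sketch: in the frozen setting, the SSL encoder's weights stay fixed and its hidden states are simply read out as features for a downstream model. The sketch below uses the Hugging Face `transformers` wav2vec 2.0 API; the checkpoint name is an assumption based on the LeBenchmark naming scheme, not a detail confirmed by this page.

```python
# Minimal sketch (not the paper's code) of "frozen" SSL feature extraction.
# The model id below is an assumed LeBenchmark checkpoint name.

def wav2vec2_num_frames(num_samples: int) -> int:
    """Number of output frames the standard wav2vec 2.0 convolutional
    front-end produces for a 16 kHz waveform (~one frame per 20 ms)."""
    # (kernel, stride) of the seven conv layers in the base architecture
    for kernel, stride in [(10, 5), (3, 2), (3, 2), (3, 2), (3, 2), (2, 2), (2, 2)]:
        num_samples = (num_samples - kernel) // stride + 1
    return num_samples

if __name__ == "__main__":
    # Heavy dependencies are only needed for the actual extraction.
    import torch
    from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

    model_id = "LeBenchmark/wav2vec2-FR-7K-large"  # assumed checkpoint name
    extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
    model = Wav2Vec2Model.from_pretrained(model_id).eval()

    wave = torch.zeros(16000)  # one second of 16 kHz audio (placeholder)
    inputs = extractor(wave.numpy(), sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():  # frozen: no gradients, encoder weights untouched
        features = model(**inputs).last_hidden_state  # (1, frames, hidden_dim)
```

In the fine-tuned setting, by contrast, the `torch.no_grad()` guard is dropped and the encoder's parameters are updated jointly with the downstream head.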

Source journal: Computer Speech and Language (Engineering/Technology — Computer Science, Artificial Intelligence)
CiteScore: 11.30
Self-citation rate: 4.70%
Articles per year: 80
Review time: 22.9 weeks
Journal description: Computer Speech & Language publishes reports of original research related to the recognition, understanding, production, coding and mining of speech and language. The speech and language sciences have a long history, but it is only relatively recently that large-scale implementation of and experimentation with complex models of speech and language processing has become feasible. Such research is often carried out somewhat separately by practitioners of artificial intelligence, computer science, electronic engineering, information retrieval, linguistics, phonetics, or psychology.
Latest articles in this journal:
- Editorial Board
- Enhancing analysis of diadochokinetic speech using deep neural networks
- Copiously Quote Classics: Improving Chinese Poetry Generation with historical allusion knowledge
- Significance of chirp MFCC as a feature in speech and audio applications
- Artificial disfluency detection, uh no, disfluency generation for the masses