A Deep-Learning Based Framework for Source Separation, Analysis, and Synthesis of Choral Ensembles

IF 1.3 · Q3 · Engineering, Electrical & Electronic · Frontiers in Signal Processing · Pub Date: 2022-04-05 · DOI: 10.3389/frsip.2022.808594
Pritish Chandna, Helena Cuesta, Darius Petermann, E. Gómez
Citations: 4

Abstract

Choral singing in the soprano, alto, tenor, and bass (SATB) format is a widely practiced and studied art form of significant cultural importance. Despite its popularity, the choral setting has received little attention in the field of Music Information Retrieval. However, the recent publication of high-quality choral singing datasets, together with recent advances in deep-learning-based methodologies for music and speech processing, has opened new avenues for research in this field. In this paper, we use some of the publicly available choral singing datasets to train and evaluate state-of-the-art source separation algorithms from the speech and music domains on choral singing. Furthermore, we evaluate existing monophonic F0 estimators on the separated unison stems and propose an approximation of the perceived F0 of a unison signal. Additionally, we present a set of applications that combine the proposed methodologies, including synthesizing a single singing voice from a unison, and transposing and remixing the separated stems into a synthetic multi-singer choral signal. Finally, we conduct a set of listening tests to perceptually evaluate the results obtained with the proposed methodologies.
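The abstract mentions approximating the perceived F0 of a unison, i.e. several singers on the same part with small pitch deviations. The sketch below is a hypothetical illustration of one plausible approach, not the paper's actual estimator: given frame-wise monophonic F0 tracks for the individual singers, it averages them in the log-frequency (cents) domain — a geometric mean — while ignoring unvoiced frames. The function name, the reference frequency `fmin`, and the choice of averaging are all assumptions made for this example.

```python
import numpy as np

def perceived_unison_f0(f0_tracks, fmin=55.0):
    """Approximate the perceived F0 of a unison as the frame-wise
    geometric mean (arithmetic mean in the cents domain) of the
    individual singers' F0 tracks.

    f0_tracks: array-like of shape (n_singers, n_frames), in Hz,
               with unvoiced frames marked as 0 (or negative).
    Returns an array of shape (n_frames,); frames where no singer
    is voiced are returned as 0.
    """
    f0 = np.asarray(f0_tracks, dtype=float)
    voiced = f0 > 0
    # Convert voiced frames to cents above fmin; unvoiced become NaN
    # (np.maximum guards the log against the 0-Hz unvoiced entries).
    cents = np.where(voiced,
                     1200.0 * np.log2(np.maximum(f0, fmin) / fmin),
                     np.nan)
    with np.errstate(invalid="ignore"):
        mean_cents = np.nanmean(cents, axis=0)  # average over singers
    out = fmin * 2.0 ** (mean_cents / 1200.0)   # back to Hz
    return np.where(np.isnan(mean_cents), 0.0, out)
```

Averaging in cents rather than Hz matches pitch perception, which is approximately logarithmic in frequency: two singers detuned symmetrically in cents around a target note average back to that note.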