LLM-based speaker diarization correction: A generalizable approach

Speech Communication · IF 3.0 · CAS Tier 3 (Computer Science) · JCR Q2 (Acoustics) · Volume 170, Article 103224 · Pub Date: 2025-05-01 (Epub 2025-03-13) · DOI: 10.1016/j.specom.2025.103224
Georgios Efstathiadis, Vijay Yadav, Anzar Abbas

Abstract

Speaker diarization is necessary for interpreting conversations transcribed using automated speech recognition (ASR) tools. Despite significant developments in diarization methods, diarization accuracy remains an issue. Here, we investigate the use of large language models (LLMs) for diarization correction as a post-processing step. LLMs were fine-tuned using the Fisher corpus, a large dataset of transcribed conversations. The ability of the models to improve diarization accuracy in a holdout dataset from the Fisher corpus as well as an independent dataset was measured. We report that fine-tuned LLMs can markedly improve diarization accuracy. However, model performance is constrained to transcripts produced using the same ASR tool as the transcripts used for fine-tuning, limiting generalizability. To address this constraint, an ensemble model was developed by combining weights from three separate models, each fine-tuned using transcripts from a different ASR tool. The ensemble model demonstrated better overall performance than each of the ASR-specific models, suggesting that a generalizable and ASR-agnostic approach may be achievable. We have made the weights of these models publicly available on HuggingFace at https://huggingface.co/bklynhlth.
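The ensemble described above is built by combining the weights of three ASR-specific fine-tuned models. The exact merge procedure is not specified on this page; the sketch below assumes simple (optionally weighted) parameter averaging, with checkpoints reduced to plain name-to-float-list maps for illustration. `merge_checkpoints` and the toy checkpoint names are hypothetical, not from the paper.

```python
def merge_checkpoints(state_dicts, weights=None):
    """Average corresponding parameters across several checkpoints.

    state_dicts: list of {param_name: [float, ...]} maps with identical
    keys and shapes. weights: optional per-checkpoint mixing weights;
    defaults to a uniform average.
    """
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = [
            sum(w * sd[name][i] for sd, w in zip(state_dicts, weights))
            for i in range(len(state_dicts[0][name]))
        ]
    return merged

# Three toy "checkpoints", one per ASR tool the models were tuned on
ckpt_a = {"layer.w": [1.0, 2.0]}
ckpt_b = {"layer.w": [3.0, 4.0]}
ckpt_c = {"layer.w": [5.0, 6.0]}

# Uniform average: "layer.w" comes out close to [3.0, 4.0]
ensemble = merge_checkpoints([ckpt_a, ckpt_b, ckpt_c])
```

In a real setting the same element-wise averaging would be applied to the tensors of the three fine-tuned LLM state dicts before loading the merged weights into a single model.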
Source journal: Speech Communication (Engineering & Technology – Computer Science: Interdisciplinary Applications)
CiteScore: 6.80 · Self-citation rate: 6.20% · Articles per year: 94 · Review time: 19.2 weeks
About the journal: Speech Communication is an interdisciplinary journal devoted to the rapid dissemination and thorough discussion of basic and applied research results. Its objectives are: • to present a forum for the advancement of human and human-machine speech communication science; • to stimulate cross-fertilization between different fields of this domain; • to contribute towards the rapid and wide diffusion of scientifically sound contributions in this domain.