Investigation of Different Language Models for Turkish Speech Recognition

Ali Orkan Bayer, Tolga Çiloğlu, Meltem Turhan Yöndem
Bilgisayar Mühendisliği Bölümü; Elektrik ve Elektronik Mühendisliği Bölümü
{"title":"Investigation of Different Language Models for Turkish Speech Recognition","authors":"Ali Orkan Bayer, Tolga Q7iloglut, Meltem Turhan, Yondem Bilgisayar, Miihendisligi B6liimii Telektrik Ve Elektronik, Miihendisligi B6liimii","doi":"10.1109/SIU.2006.1659779","DOIUrl":null,"url":null,"abstract":"Large vocabulary continuous speech recognition can be performed with high accuracy for languages like English that do not have a rich morphological structure. However, the performance of these systems for agglutinative languages is very low. The major reason for that is, the language models that are built on the words do not perform well for agglutinative languages. In this study, three different language models that consider the structure of the agglutinative languages are investigated. Two of the models consider the subword units as the units of language modeling. The first one uses only the stem of the words as units, and the other one uses stems and endings of the words separately as the units. The third model, firstly, places the words into certain classes by using the co-occurrences of the words, and then uses these classes as the units of the language model. The performance of the models are tested by using two stage decoding; in the first stage, lattices are formed by using bi-gram models and then tri-gram models are used for recognition over these lattices. In this study, it is shown that the vocabulary coverage of the system seriously affects the recognition performance. For this reason, models that use stems and endings as the modeling unit perform better since their coverage of the vocabulary is higher. In addition to that, a single-pass decoder that can perform single pass decoding over these models is believed to increase the recognition performance","PeriodicalId":415037,"journal":{"name":"2006 IEEE 14th Signal Processing and Communications Applications","volume":"157 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2006 IEEE 14th Signal Processing and Communications Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SIU.2006.1659779","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Large-vocabulary continuous speech recognition can be performed with high accuracy for languages like English that do not have a rich morphological structure. However, the performance of these systems on agglutinative languages is very low, mainly because language models built on whole words do not perform well for such languages. In this study, three different language models that take the structure of agglutinative languages into account are investigated. Two of the models use subword units as the units of language modeling: the first uses only the stems of words, and the second uses the stems and endings of words as separate units. The third model first places words into classes based on their co-occurrence statistics and then uses these classes as the units of the language model. The performance of the models is tested using two-stage decoding: in the first stage, lattices are generated with bigram models, and then trigram models are used for recognition over these lattices. The study shows that the vocabulary coverage of the system strongly affects recognition performance; models that use stems and endings as the modeling unit therefore perform better, since their coverage of the vocabulary is higher. In addition, a single-pass decoder that can decode directly with these models is expected to further improve recognition performance.
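
To illustrate the "stems and endings as units" idea described above, the following minimal Python sketch splits words into a stem plus an ending marker and builds an unsmoothed bigram model over the resulting unit stream. This is an illustration only, not the authors' implementation: the TOY_STEMS lexicon and the split_stem_ending() helper are hypothetical stand-ins for a real Turkish morphological analyser, and the paper's actual decoder operates over lattices rather than raw text.

from collections import Counter

TOY_STEMS = {"ev", "okul", "gel", "git"}  # hypothetical toy stem lexicon

def split_stem_ending(word):
    """Longest-prefix match against the toy lexicon: [stem, '+ending'] or [word]."""
    for i in range(len(word), 0, -1):
        if word[:i] in TOY_STEMS:
            stem, ending = word[:i], word[i:]
            return [stem, "+" + ending] if ending else [stem]
    return [word]  # unknown stem: fall back to the whole word form

def to_units(sentence):
    """Map a sentence to the stem/ending unit stream with boundary symbols."""
    units = ["<s>"]
    for w in sentence.split():
        units.extend(split_stem_ending(w))
    units.append("</s>")
    return units

def bigram_mle(corpus):
    """Unsmoothed bigram model P(u | h) over stem/ending units."""
    uni, bi = Counter(), Counter()
    for sent in corpus:
        units = to_units(sent)
        uni.update(units[:-1])           # history counts (exclude final </s>)
        bi.update(zip(units, units[1:]))
    return lambda h, u: bi[(h, u)] / uni[h] if uni[h] else 0.0

p = bigram_mle(["evlerden geldi", "okuldan geldi", "eve gitti"])
print(p("ev", "+lerden"))  # P(+lerden | ev) = 0.5 on this toy corpus

Because each word form decomposes into a shared stem plus a small inventory of endings, the unit vocabulary grows far more slowly than the set of full Turkish word forms, which is the coverage effect the abstract reports as the reason the stem+ending models perform better.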