VoiceTalk: Multimedia-IoT Applications for Mixing Mandarin, Taiwanese, and English
Yi-Bing Lin, Yuan-Fu Liao, Sin-Horng Chen, Shaw-Hwa Hwang, Yih-Ru Wang
ACM Transactions on Internet Technology, published 2023-05-18
DOI: https://dl.acm.org/doi/10.1145/3543854
Citations: 0
Abstract
The voice-based Internet of Multimedia Things (IoMT) combines IoT interfaces and protocols with associated voice-related information, enabling advanced applications based on human-to-device interactions. An example is Automatic Speech Recognition (ASR) for live captioning and voice translation. Three major issues of ASR for IoMT are IoT development cost, speech recognition accuracy, and execution time complexity. For the first issue, most non-voice IoT applications are upgraded with the ASR feature through hard coding, which is error-prone. For the second issue, recognition accuracy must be improved for ASR. For the third issue, many multimedia IoT services are real-time applications and, therefore, the ASR delay must be short.
This article elaborates on the above issues based on an IoT platform called VoiceTalk. We built the largest Taiwanese spoken corpus to train VoiceTalk ASR (VT-ASR) and show how the VT-ASR mechanism can be transparently integrated with existing IoT applications. We consider two performance measures for VoiceTalk: speech recognition accuracy and VT-ASR delay. In the acoustic tests of PAL-Labs, VT-ASR's accuracy is 96.47%, while Google's accuracy is 94.28%. We are the first to develop an analytic model to investigate the probability that VT-ASR processing of the first speaker's utterance completes before the second speaker starts talking. From the measurements and analytic modeling, we show that the VT-ASR delay is short enough to result in a very good user experience. Our solution has won several important government and commercial TV contracts in Taiwan. VT-ASR demonstrated better Taiwanese Mandarin speech recognition accuracy than well-known commercial products (including Google and Iflytek) in the Formosa Speech Recognition Challenge 2018 (FSR-2018) and achieved the best Taiwanese recognition accuracy among all participating ASR systems in FSR-2020.
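To make the delay measure concrete, here is a minimal illustrative calculation; it is not the paper's analytic model, which the abstract does not spell out. Assuming the VT-ASR delay D for the first speaker and the gap G before the second speaker starts talking are independent and exponentially distributed with rates μ and λ, the probability that recognition finishes in time would be

P(D < G) = ∫_0^∞ μ e^{-μt} e^{-λt} dt = μ / (μ + λ).

For example, a mean delay of 1 s (μ = 1) against a mean inter-speaker gap of 4 s (λ = 0.25) gives P(D < G) = 0.8, and halving the mean delay to 0.5 s raises it to about 0.89, which is the qualitative behavior the paper's delay analysis is concerned with.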
Journal Introduction
ACM Transactions on Internet Technology (TOIT) brings together many computing disciplines, including computer software engineering, computer programming languages, middleware, database management, security, knowledge discovery and data mining, networking and distributed systems, communications, performance, and scalability. TOIT covers the results and roles of the individual disciplines and the relationships among them.