New speech enhancement techniques for low bit rate speech coding
R. Martin, R. Cox
Pub Date: 1999-06-20 | DOI: 10.1109/SCFT.1999.781519
In this paper we present novel solutions for pre-processing noisy speech prior to low bit rate speech coding. We strive especially to improve the estimation of spectral parameters and to reduce the additional algorithmic delay caused by the enhancement pre-processor. While the former is achieved using a new adaptive limiting algorithm for the a priori signal-to-noise ratio (SNR) estimate, the latter makes use of a novel overlap/add scheme. Our enhancement techniques were evaluated in conjunction with the 2400 bps mixed excitation linear prediction (MELP) coder by means of formal and informal listening tests.
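The a priori SNR limiting mentioned in the abstract can be illustrated with the standard decision-directed estimator. The sketch below is an assumption-laden toy, not the paper's algorithm: it uses a fixed floor `xi_min` (the paper's contribution is making this limit adaptive), and the function and parameter names are mine.

```python
import numpy as np

def decision_directed_snr(noisy_mag, noise_psd, alpha=0.98, xi_min=0.1):
    """A priori SNR estimate per frame with a lower limit.

    noisy_mag: (frames, bins) magnitude spectra of the noisy speech.
    noise_psd: (bins,) noise power estimate.
    xi_min is a FIXED floor here; the paper adapts this limit to the signal.
    """
    n_frames, n_bins = noisy_mag.shape
    xi = np.empty_like(noisy_mag)
    prev_clean_sq = np.zeros(n_bins)          # |clean estimate|^2 of previous frame
    for t in range(n_frames):
        gamma = noisy_mag[t] ** 2 / noise_psd                 # a posteriori SNR
        xi_t = (alpha * prev_clean_sq / noise_psd
                + (1.0 - alpha) * np.maximum(gamma - 1.0, 0.0))
        xi[t] = np.maximum(xi_t, xi_min)                      # the limiting step
        gain = xi[t] / (1.0 + xi[t])                          # Wiener gain
        prev_clean_sq = (gain * noisy_mag[t]) ** 2
    return xi
```

The floor keeps the Wiener gain from collapsing in speech pauses, which trades off residual noise against musical-tone artifacts; an adaptive limit, as in the paper, tunes that trade-off per signal condition.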
Trellis code excited linear prediction (TCELP) speech coding
Cheng-Chieh Lee, Y. Shoham
Pub Date: 1999-06-20 | DOI: 10.1109/SCFT.1999.781500
This paper applies the trellis-based scalar-vector quantizer for sources with memory to the excitation codebook search problem of code excited linear prediction (CELP) speech coders. This approach leads to a 24 kbit/s telephony-bandwidth low-delay (3 ms) trellis CELP coder, which outperforms both ITU-T 16 kbit/s G.728 LD-CELP and 32 kbit/s G.726 ADPCM. Since the codebook is derived from a scalar alphabet, the proposed coder can effectively handle excitation vectors in the 24-dimensional space (to realize considerable vector quantization gains) and has a computational complexity of approximately 75% of that of ITU-T G.728 LD-CELP.
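The core of a trellis scalar-vector quantizer is a Viterbi search over scalar codewords whose availability depends on a trellis state. The toy below is a minimal sketch under my own assumptions (two states, hand-picked codebooks and transitions); the paper's trellis, codebook sizes, and CELP embedding are not reproduced.

```python
def trellis_quantize(x, codebooks, next_state):
    """Minimum-MSE Viterbi search through a scalar-codeword trellis.

    codebooks[s]    : scalar codewords selectable in state s.
    next_state[s][j]: state reached after emitting codebooks[s][j].
    Returns (codeword sequence, accumulated squared error); starts in state 0.
    """
    n_states = len(codebooks)
    INF = float("inf")
    cost = [0.0] + [INF] * (n_states - 1)
    paths = [[] for _ in range(n_states)]
    for sample in x:
        new_cost = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if cost[s] == INF:
                continue                      # state unreachable so far
            for j, c in enumerate(codebooks[s]):
                ns = next_state[s][j]
                d = cost[s] + (sample - c) ** 2
                if d < new_cost[ns]:          # keep the best survivor per state
                    new_cost[ns] = d
                    new_paths[ns] = paths[s] + [c]
        cost, paths = new_cost, new_paths
    best = min(range(n_states), key=lambda s: cost[s])
    return paths[best], cost[best]
```

Because each step only extends one survivor per state, the search cost grows linearly in vector dimension, which is what makes 24-dimensional excitation vectors tractable.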
Perceptual zerotrees for scalable wavelet coding of wideband audio
A. Aggarwal, V. Cuperman, K. Rose, A. Gersho
Pub Date: 1999-06-20 | DOI: 10.1109/SCFT.1999.781469
This paper introduces a new algorithm for scalable coding of wideband audio signals. The technique is based on quantization of bi-orthogonal wavelet transform coefficients using a perceptual zerotree method. An initial zerotree estimate of the wavelet coefficients is computed, followed by scalar quantization of the coefficients according to perceptual thresholds. The choice of wavelet decomposition and encoding parameters for each frame is adapted to the source characteristics using a rate-distortion criterion. The scalability of the coder is due to the tree structure, which enables graceful degradation as the bit rate decreases. Preliminary subjective tests indicate near-transparent quality for average bit rates in the range of 1.5 to 2.5 bits per sample.
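The "scalar quantization according to perceptual thresholds" step can be sketched as below. This is an illustrative assumption, not the paper's coder: band names and the choice of step size equal to the threshold are mine, and the zerotree entropy coding of the zeroed coefficients is omitted entirely.

```python
import numpy as np

def perceptual_quantize(bands, thresholds):
    """Quantize each wavelet band with a step tied to its masking threshold.

    bands      : dict of band name -> coefficient array.
    thresholds : dict of band name -> perceptual (masking) threshold.
    Coefficients below threshold are zeroed, becoming candidates for
    zerotree coding; the rest are uniformly quantized with step = threshold.
    """
    out = {}
    for name, coeffs in bands.items():
        t = thresholds[name]
        out[name] = np.where(np.abs(coeffs) < t,
                             0.0,
                             np.round(coeffs / t) * t)
    return out
```

Tying the step size to the masking threshold keeps quantization noise just below audibility in each band, which is what yields near-transparency at modest rates.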
A wideband speech and audio codec at 16/24/32 kbit/s using hybrid ACELP/TCX techniques
B. Bessette, R. Salami, C. Laflamme, R. Lefebvre
Pub Date: 1999-06-20 | DOI: 10.1109/SCFT.1999.781466
A hybrid ACELP/TCX algorithm for coding speech and music signals at 16, 24, and 32 kbit/s is presented. The algorithm switches between algebraic code excited linear prediction (ACELP) and transform coded excitation (TCX) modes on a 20-ms frame basis. Applying TCX on 20-ms frames improved the quality for music signals. Special care was taken to alleviate the switching artifacts between the two modes, resulting in a transparent switching process. Subjective test results showed that for speech signals, the performance at 16, 24, and 32 kbit/s is equivalent to G.722 at 48, 56, and 64 kbit/s, respectively. For music signals, the quality at 24 kbit/s was found equivalent to G.722 at 56 kbit/s. However, at 16 kbit/s, the quality for music was slightly lower than G.722 at 48 kbit/s.
Recovery of speech spectral parameters using convex set projection
U. Visitkitjakarn, W. Chan, Yongyi Yang
Pub Date: 1999-06-20 | DOI: 10.1109/SCFT.1999.781475
Previous work has demonstrated that preserving speech spectral "dynamics" during spectral parameter quantization and/or decoding can improve the quality of coded speech. We explore the use of projections onto convex sets (POCS) to recover speech spectral parameters from their quantized versions. Unlike prior work, the POCS approach enables us to obtain solutions that satisfy precise constraints. Two constraint sets are used in our POCS recovery algorithm: one constrains the "roughness" of the parameter trajectories, and the other confines the parameters to the proper quantizer partition cells. Simulations of our algorithm have consistently produced improvements in both subjective quality and objective distortion measures.
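The POCS mechanics can be sketched with stand-in convex sets: a band-limited subspace as a proxy for the paper's roughness constraint, and a box for the quantizer partition cell. Both projections below are exact, so alternating them converges to the intersection; the actual constraint sets and parameters in the paper differ.

```python
import numpy as np

def pocs_recover(y, half_cell, keep, iters=50):
    """Alternating projections to smooth a quantized parameter trajectory.

    y         : quantized trajectory of one spectral parameter over frames.
    half_cell : half-width of each quantizer cell, so cell = [y-h, y+h].
    C1 (smoothness proxy): signals whose DFT energy lies in the lowest
        `keep` bins -- a linear subspace, so truncating the rFFT is an
        exact projection.
    C2 (quantizer consistency): signals inside their cells -- a box, so
        clipping is an exact projection.
    """
    x = y.astype(float).copy()
    lo, hi = y - half_cell, y + half_cell
    for _ in range(iters):
        X = np.fft.rfft(x)
        X[keep:] = 0.0                       # project onto C1
        x = np.fft.irfft(X, n=len(x))
        x = np.clip(x, lo, hi)               # project onto C2
    return x
```

The final clip guarantees the recovered trajectory still decodes to the transmitted indices, which is the "precise constraints" property the abstract highlights.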
Integration of speech enhancement and coding techniques
M. Kuropatwinski, D. Leckschat, K. Kroschel, A. Czyżewski
Pub Date: 1999-06-20 | DOI: 10.1109/SCFT.1999.781520
Speech coding techniques commonly used in low bit rate analysis-by-synthesis linear predictive coders (LPAS coders) can serve as a speech signal model that emphasizes its important features. This paper shows how this coding method can be used for speech enhancement. Specifically, the speech signal is modeled as the output of a cascade of an adaptive formant filter and a pitch filter, driven by a white Gaussian process with time-varying variance. A signal estimation method based on the Kalman filter, which implements this speech signal model, is investigated. The proposed approach yields significantly better performance, in both SNR and subjective impression, than Kalman filter methods that use only short-time speech parameters.
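A minimal sketch of the Kalman-filter part follows, assuming only the AR ("formant") filter in white observation noise; the paper's cascade additionally includes a pitch filter and a time-varying driving variance, both omitted here, and all names are mine.

```python
import numpy as np

def kalman_enhance(y, a, q, r):
    """Kalman filter for an AR(p) signal observed in white noise.

    y : noisy samples.
    a : AR coefficients, s[t] = a[0]*s[t-1] + ... + a[p-1]*s[t-p] + w[t].
    q : variance of the white Gaussian driving process w (constant here;
        time-varying in the paper).
    r : observation-noise variance.
    """
    p = len(a)
    F = np.zeros((p, p))                  # companion-form state transition
    F[0, :] = a
    F[1:, :-1] = np.eye(p - 1)
    H = np.zeros((1, p)); H[0, 0] = 1.0   # observe the newest sample
    Q = np.zeros((p, p)); Q[0, 0] = q
    x = np.zeros((p, 1))
    P = np.eye(p)
    est = np.empty(len(y))
    for t, obs in enumerate(y):
        x = F @ x                          # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                # innovation variance
        K = P @ H.T / S                    # Kalman gain
        x = x + K * (obs - (H @ x)[0, 0])  # update
        P = (np.eye(p) - K @ H) @ P
        est[t] = x[0, 0]
    return est
```

With the model parameters matched to the signal, the filter output has lower error than the raw noisy observation, which is the baseline the paper improves on by adding long-term (pitch) structure.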
Multi-rate wideband speech/channel codec based on MPEG-4/CELP for ETSI/GSM full-rate channel
A. Murashima, M. Serizawa, K. Ozawa
Pub Date: 1999-06-20 | DOI: 10.1109/SCFT.1999.781470
This paper proposes a wideband multi-rate speech and channel codec based on MPEG-4/CELP for the ETSI/GSM full-rate channel. To improve coding performance in mobile environments with channel errors and background noise, the proposed codec operates at three bit allocations between speech and channel coding, with a constant gross bit rate of 22.8 kbit/s. The speech coding bit rates are 10.9, 12.1 and 15.9 kbit/s. The codec maintains high speech quality across channel conditions by switching bit allocations, and for noisy speech by using the highest speech bit rate. Preliminary subjective evaluation tests show that speech quality is improved by switching the bit allocation under error conditions. The codec is also comparable or superior to ITU-T Recommendation G.722 at 48 kbit/s for carrier-to-interference ratios (C/I) higher than 10 dB. At 15.9 kbit/s, it also gives speech quality comparable to G.722 at 48 kbit/s under background noise conditions.
BEC++: a software tool for increased flexibility in algorithm development
M. Harton, K. Kapuscinski
Pub Date: 1999-06-20 | DOI: 10.1109/SCFT.1999.781486
Algorithm developers sometimes have little interest in creating a fixed-point simulation from a floating-point algorithm. However, it is often vital that high levels of speech quality be maintained in a fixed-point application. Converting floating-point simulations to fixed-point is time consuming and expensive, and if not done well, a state-of-the-art algorithm may never reach product implementation. There is a critical need for software tools that reduce the time and effort algorithm developers spend on floating-point to fixed-point software conversion. Bit-Exact C++ (BEC++) is just such a tool. This paper discusses BEC++, a fixed-point software implementation tool whose syntax is similar in look and feel to that of floating-point C. Based on the ETSI Bit-Exact C (BEC) software now commonly used in industry, BEC++ extends the capabilities of BEC through C++ language features and object-oriented techniques. The paper also details how to use the software, providing comparisons between BEC++ and BEC implementations.
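The saturating 16-bit arithmetic that BEC (and BEC++, via operator overloading) models can be sketched as below. BEC++ itself is C++; this Python rendering of two ETSI-style basic operations is only an illustration of the semantics a fixed-point simulation must reproduce bit-exactly.

```python
MAX16, MIN16 = 32767, -32768

def saturate(v):
    """Clamp an integer result into the signed 16-bit range."""
    return max(MIN16, min(MAX16, v))

def add(a, b):
    """ETSI-style 16-bit saturating add: overflow pins at +/- full scale
    instead of wrapping as plain integer arithmetic would."""
    return saturate(a + b)

def mult(a, b):
    """ETSI-style Q15 fractional multiply: the 32-bit product is shifted
    right by 15, and the single overflow case (-1.0 * -1.0) saturates."""
    return saturate((a * b) >> 15)
```

Porting a floating-point algorithm means every `+` and `*` on the signal path acquires these saturation and rounding semantics; BEC++'s overloaded fixed-point types let the source keep its floating-point look while enforcing them.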
Multiple-description coding (MDC) of speech with an invertible auditory model
G. Kubin, W. Kleijn
Pub Date: 1999-06-20 | DOI: 10.1109/SCFT.1999.781491
Network signal processing aspects dominate in speech and audio coding applications such as Internet telephony or packet radio networks. We demonstrate that our approach to speech coding in a perceptual domain provides an implicit forward error concealment mechanism to handle random erasures of the channel. To this end, the individual acoustic subchannels of our auditory model are grouped into different transport subchannels or packets. Due to the strongly overlapping, redundant filterbank structure of the model, reconstruction of speech without audible degradation becomes possible even if a significant percentage of channels is erased (e.g., up to 40% in a 50-channel auditory model for narrowband speech). We discuss this result both from a hearing-physiology and a frame-theoretic perspective.
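The frame-theoretic point can be sketched numerically: an overcomplete analysis operator tolerates erasures as long as the surviving rows still span the signal space. A random matrix below stands in for the paper's auditory filterbank (an assumption; the real filterbank is structured and strongly overlapping), and least-squares inversion stands in for its synthesis.

```python
import numpy as np

def make_frame(n, m, seed=0):
    """Random overcomplete analysis frame: m > n rows mapping an
    n-sample signal to m redundant channel coefficients."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((m, n))

def reconstruct(coeffs, F, kept):
    """Least-squares reconstruction using only the surviving channels."""
    Fk, ck = F[kept], coeffs[kept]
    x_hat, *_ = np.linalg.lstsq(Fk, ck, rcond=None)
    return x_hat
```

With a 50-row frame over a 20-dimensional signal, erasing 40% of the channels still leaves 30 rows, generically full rank, so reconstruction in this noiseless sketch is exact; the paper's auditory model adds the perceptual grouping of channels into packets.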
A low bit rate codec for AMR standard
M. Foodeei, H. Zarrinkoub, R. Matmti, R. Rabipour, F. Gabin, S. Gosne
Pub Date: 1999-06-20 | DOI: 10.1109/SCFT.1999.781505
We describe a low bit rate speech codec based on the RCELP paradigm and designed as a candidate for GSM-AMR. The relaxation of the waveform-matching constraint in the RCELP model allows the bit rate to be reduced without affecting speech quality. New efficient quantization methods for the LSF and gain parameters, coupled with algorithmic improvements, result in a high-quality speech codec at bit rates as low as 4.55 kbit/s. Subjective tests show encouraging results in terms of quality and robustness under various operating conditions.