Compatible scrambling of compressed audio
J. Herre, E. Allamanche
Pub Date: 1999-10-17 | DOI: 10.1109/ASPAA.1999.810841
Proceedings of the 1999 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, WASPAA'99 (Cat. No.99TH8452)
Stimulated by the technological revolution in both networking technology (the Internet) and highly efficient perceptual audio coding algorithms (e.g. MPEG audio), a tremendous amount of music piracy has emerged recently. In contrast to this, a controlled distribution of music or multimedia content commonly employs so-called secure envelope techniques which "package" the audio bitstream into a secure container by means of ciphering all or part of the payload bitstream. In this way, access to the payload (i.e. decoding of the bitstream) is possible only for authorized persons who are in the possession of the proper key for decryption. While decoding of such a secure envelope bitstream requires a two-stage process (deciphering and source decoding), this paper presents a novel technique integrating both deciphering and source decoding into one combined process. This is achieved by "scrambling" the bitstream of the coded signal in a syntax-compatible way such that playback of the scrambled bitstream without access to the proper key will result in a stable playback at a degraded quality level ("soft-envelope" technique). The approach allows the content authors to select the amount of degradation, does not impose a bitrate or quality burden and can be applied to a wide range of coders. Examples of the scrambling technique are given for an MPEG-2 advanced audio coding (AAC) system.
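As an illustration of the soft-envelope idea, here is a minimal sketch of keyed, range-preserving scrambling of quantized coefficients (a toy sketch: the function names, the 4-bit XOR construction, and the use of Python's `random` stream are assumptions, not the authors' AAC scheme):

```python
import random

def scramble(coeffs, key, bits=4):
    # XOR each quantized coefficient with a key-seeded pseudo-random
    # stream. Values stay inside their original 4-bit range, so the
    # bitstream remains syntax-compatible: a decoder without the key
    # still produces stable playback, but at a degraded quality level.
    rng = random.Random(key)
    return [c ^ rng.getrandbits(bits) for c in coeffs]

def descramble(coeffs, key, bits=4):
    # XOR with the identical keyed stream inverts the scrambling.
    rng = random.Random(key)
    return [c ^ rng.getrandbits(bits) for c in coeffs]
```

Because the perturbation is confined to value ranges the syntax already allows, no bitrate overhead is introduced, and the key holder recovers the original coefficients exactly.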
Wave field synthesis and analysis using array technology
D. de Vries, M. M. Boone
Pub Date: 1999-10-17 | DOI: 10.1109/ASPAA.1999.810838
The concept of wave field synthesis (WFS) was introduced by Berkhout in 1988. It enables the generation of sound fields with natural temporal and spatial properties within a volume or area bounded by arrays of loudspeakers. Applications are found in real-time performances as well as in the reproduction of multitrack recordings. A logical next step was the formulation of a new wave field analysis (WFA) concept by Berkhout in 1997, in which sound fields in enclosures are recorded with arrays of microphones and analyzed with postprocessing techniques commonly used in acoustical imaging. In this way, both the temporal and spatial properties of the sound field can be investigated and understood. WFS and WFA meet in auralization applications: sound fields measured (or modeled) along arrays of microphone positions can be generated by arrays of loudspeakers for perceptual evaluation.
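The loudspeaker-array reproduction step can be caricatured with plain delay-and-sum driving signals (a simplification: true WFS driving functions include spectral and geometric correction factors that this sketch omits; all names and geometry are assumptions):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def driving_params(virtual_source, speakers):
    # Per-loudspeaker delay and 1/r gain so that the superposed
    # wavefronts approximate those of a virtual point source behind
    # the array (delay-and-sum caricature of WFS).
    params = []
    for sx, sy in speakers:
        r = math.hypot(sx - virtual_source[0], sy - virtual_source[1])
        params.append((r / SPEED_OF_SOUND, 1.0 / max(r, 1e-9)))
    return params
```

The loudspeaker nearest the virtual source fires first and loudest, so the emitted wavefront curves as if radiated from the virtual position.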
The effect of a Poisson "internal noise" process on theoretical acoustic signal detectability
L. Gresham, L. Collins
Pub Date: 1999-10-17 | DOI: 10.1109/ASPAA.1999.810889
Historically, theoretical predictions of human auditory perception have not agreed with experimental measurements. We have previously demonstrated that using signal detection theory to analyze the outputs of deterministic computational auditory models yields more accurate predictions of experimental performance than traditional approaches (Gresham and Collins 1998). However, discrepancies remained between predicted and actual performance. In this paper, the effects of stimulus uncertainty and neural variability on the detectability of a tone in noise are studied. The results suggest that remarkably accurate predictions of detection performance can be generated when such uncertainty is incorporated into the problem.
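The role of Poisson "internal noise" in a detection task can be sketched with a seeded Monte-Carlo 2AFC observer (illustrative only; the spike rates and decision rule below are assumptions, not the paper's auditory model):

```python
import numpy as np

def percent_correct_2afc(rate_noise, rate_signal, trials=2000, seed=0):
    # Each interval produces a Poisson spike count; the observer picks
    # the interval with the larger count (ties credited half correct).
    # Because Poisson variance grows with the rate, detectability is
    # capped even when the stimulus itself is deterministic.
    rng = np.random.default_rng(seed)
    noise = rng.poisson(rate_noise, trials)
    signal = rng.poisson(rate_signal, trials)
    wins = (signal > noise).sum() + 0.5 * (signal == noise).sum()
    return wins / trials
```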
Shunting networks for multi-band AM-FM decomposition
R. Baxter, T. Quatieri
Pub Date: 1999-10-17 | DOI: 10.1109/ASPAA.1999.810891
We describe a transduction-based, neurodynamic approach to estimating the amplitude-modulated (AM) and frequency-modulated (FM) components of a signal. We show that the transduction approach can be realized as a bank of constant-Q bandpass filters followed by envelope detectors and shunting neural networks, and the resulting dynamical system is capable of robust AM-FM estimation. Our model is consistent with previous psychophysical experiments that indicate AM and FM components of acoustic signals may be transformed into a common neural code in the brain stem via FM-to-AM transduction (Saberi and Hafter 1995). The shunting network for AM-FM decomposition is followed by a contrast enhancement shunting network that provides a mechanism for robustly selecting auditory filter channels as the FM of an input stimulus sweeps across the multiple filters. The AM-FM output of the shunting networks may provide a robust feature representation and is being considered for applications in signal recognition and multi-component decomposition problems.
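The FM-to-AM transduction step, where a bandpass filter converts a frequency sweep into an envelope that peaks as the sweep crosses the filter's centre, can be sketched as follows (a two-pole resonator and a rectify-and-hold detector stand in for the paper's constant-Q filter bank and shunting networks; all parameter values are assumptions):

```python
import math

def resonator(x, f0, fs, r=0.98):
    # Two-pole resonator (simple bandpass) centred at f0 Hz.
    w0 = 2.0 * math.pi * f0 / fs
    b1, b2 = 2.0 * r * math.cos(w0), -r * r
    y, y1, y2 = [], 0.0, 0.0
    for s in x:
        v = s + b1 * y1 + b2 * y2
        y.append(v)
        y1, y2 = v, y1
    return y

def envelope(x, decay=0.995):
    # Rectify-and-hold envelope detector with exponential release.
    e, out = 0.0, []
    for s in x:
        e = max(abs(s), decay * e)
        out.append(e)
    return out
```

Feeding a 200-to-800 Hz chirp through a 500 Hz resonator yields an envelope that peaks roughly where the instantaneous frequency crosses 500 Hz: the FM has become AM.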
A head-and-torso model for low-frequency binaural elevation effects
C. Avendaño, V. Algazi, R. Duda
Pub Date: 1999-10-17 | DOI: 10.1109/ASPAA.1999.810879
Low-frequency elevation-dependent features appear in head-related transfer function (HRTF) measurements because of torso and shoulder reflections and head diffraction effects. A simple structural model that accounts for these features is presented. Listening tests show that the model produces significant elevation cues for virtual sound sources whose spectra are limited to frequencies below 3 kHz. The low-frequency binaural elevation cues are perceptually significant away from the median plane, and complement high-frequency monaural pinna cues.
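A one-echo caricature shows how a torso or shoulder reflection creates low-frequency spectral features: the reflection turns the response into a comb whose first notch sits at fs/(2*delay), and since the reflection delay varies with source elevation, so does the notch. (The delay and reflection coefficient below are assumptions, not the paper's fitted structural model.)

```python
import cmath

def comb_magnitude(freq, fs, delay_samples, refl=0.5):
    # Direct sound plus one delayed, attenuated shoulder echo:
    # H(f) = 1 + refl * exp(-j * 2*pi*f * delay / fs).
    # The first magnitude notch appears at fs / (2 * delay_samples).
    w = 2.0 * cmath.pi * freq / fs
    return abs(1.0 + refl * cmath.exp(-1j * w * delay_samples))
```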
Filter bank design using nilpotent matrices
G. Schuller, W. Sweldens
Pub Date: 1999-10-17 | DOI: 10.1109/ASPAA.1999.810847
We present a design method for filter banks whose analysis and synthesis impulse responses have unequal lengths. This is useful, e.g., for audio coding applications. A further advantage of the design method is that the overall system delay of the filter bank can be controlled explicitly when causal filters are desired. The design method is based on a factorization of the polyphase matrices into factors containing nilpotent matrices. These factors guarantee mathematically perfect reconstruction of the filter bank and lead to FIR filters for both analysis and synthesis. Using matrices with nilpotency of order higher than 2 leads to FIR filter banks with unequal filter lengths for analysis and synthesis.
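The algebraic core is that a polyphase factor I + N with nilpotent N has an exact finite, hence FIR, inverse: (I + N)^(-1) = I - N + N^2 - ... The constant-matrix sketch below illustrates this principle (real polyphase entries are polynomials in z; the example matrix is an assumption):

```python
import numpy as np

def fir_inverse(N):
    # For nilpotent N (N^dim = 0), the Neumann series terminates, so
    # (I + N)^(-1) = I - N + N^2 - ... is a finite sum: the synthesis
    # factor stays FIR whenever the analysis factor is.
    dim = N.shape[0]
    inv = np.eye(dim)
    term = np.eye(dim)
    for _ in range(dim):          # N^dim = 0, so dim terms suffice
        term = term @ (-N)
        inv = inv + term
    return inv
```

A factor whose nilpotency order exceeds 2 contributes different lengths to the analysis and synthesis sides, which is how the unequal filter lengths arise.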
Maximization of the subjective loudness of speech with constrained amplitude
J. Seppanen, S. Kananoja, Jari-Yli-Hietanen, K. Koppinen, J. Sjoberg
Pub Date: 1999-10-17 | DOI: 10.1109/ASPAA.1999.810869
We introduce an adaptive algorithm for constraining the amplitude of speech signals while trying to maintain the subjective loudness and to avoid producing disturbing artifacts. The algorithm can be applied to compensate for the clipping distortion of amplifiers in speech reproduction devices. It analyzes the speech signal on multiple frequency bands and applies an internal audibility law in order to make inaudible changes to the signal. An example of the audibility law, presented in the form of a matrix and associated with a specific speech reproduction device, is described. Multiple band-pass signals are processed with a waveshaper to accomplish soft clipping and to constrain the amplitude of the processed signal. When processed with the proposed algorithm, the computational loudness value of speech signals was found to diminish only slightly (approximately 6 sones), while at the same time the signal amplitude could be reduced by as much as 15 dB.
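The waveshaper stage can be sketched with a tanh soft-clipper, which bounds the output amplitude while leaving low-level samples nearly untouched (the tanh choice is an assumption; the paper's actual waveshaper and multi-band audibility matrix are not reproduced here):

```python
import math

def soft_clip(x, limit=1.0):
    # tanh waveshaper: |output| < limit for any input, while small
    # samples pass almost unchanged since tanh(u) ~ u near zero.
    return [limit * math.tanh(s / limit) for s in x]
```

Applied per band, this constrains peak amplitude smoothly instead of hard-clipping, which is why loudness drops only slightly while peaks fall substantially.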
Feedback cancellation in hearing aids using constrained adaptation
J. Kates
Pub Date: 1999-10-17 | DOI: 10.1109/ASPAA.1999.810892
In feedback cancellation in hearing aids, the output of an adaptive filter is subtracted from the microphone signal to cancel the acoustic and mechanical feedback signals picked up by the microphone. The feedback cancellation filter typically adapts using the hearing-aid input signal, and signal cancellation and coloration artifacts can occur for a narrowband input. In this paper, two procedures for LMS adaptation with a constraint on the magnitude of the adaptive weight vector are derived. The constraints greatly reduce the probability that the adaptive filter will cancel a narrowband input. Simulation results demonstrate the efficacy of the constrained adaptation.
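One way to realize a magnitude constraint on the adaptive weights is to project the weight vector back onto a norm ball after each normalized LMS update. The sketch below illustrates that idea (the projection rule and all parameters are assumptions, not necessarily either of the paper's two derived procedures):

```python
def constrained_lms(x, d, order=8, mu=0.5, max_norm=1.0):
    # Normalized-LMS identification of the feedback path, with the
    # weight vector rescaled onto the ball of radius max_norm whenever
    # an update leaves it. Capping the weight magnitude keeps the
    # canceller from growing enough to cancel a narrowband input.
    w = [0.0] * order
    buf = [0.0] * order
    for xn, dn in zip(x, d):
        buf = [xn] + buf[:-1]                 # newest sample first
        y = sum(wi * bi for wi, bi in zip(w, buf))
        e = dn - y                            # cancellation error
        power = sum(b * b for b in buf) + 1e-8
        w = [wi + mu * e * bi / power for wi, bi in zip(w, buf)]
        norm = sum(wi * wi for wi in w) ** 0.5
        if norm > max_norm:                   # project back onto ball
            w = [wi * max_norm / norm for wi in w]
    return w
```

With a loose constraint the filter identifies the feedback path; with a tight one the weights stay bounded, trading some cancellation depth for robustness against tonal inputs.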
Linear transforms and filterbanks based on vector ARMA models
U. Laine
Pub Date: 1999-10-17 | DOI: 10.1109/ASPAA.1999.810851
Linear transformations such as wavelet transforms, as well as IIR-type filterbanks with arbitrary time-frequency plane tilings, can be efficiently realized by vector ARMA (VARMA) models. The quality of the realization depends on how well the basis functions or impulse responses of the filterbank can be approximated by the VARMA-based pole-zero model. The vector AR part gives an MSE-optimal block-recursive model for the target basis functions. The vector MA part is formed from the vector AR residual and further optimized by an iterative algorithm.
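The vector AR part can be sketched as a least-squares fit of a first-order matrix recursion to target state vectors (a minimal illustration; the paper's full VARMA structure, MA residual modelling, and iterative optimization are not reproduced, and the names are assumptions):

```python
import numpy as np

def fit_var1(H):
    # Fit h[n+1] ~= A @ h[n] in the least-squares sense, where the
    # columns of H (shape: dim x length) are successive state vectors
    # of the target impulse responses: a block-recursive AR(1) model.
    H0, H1 = H[:, :-1], H[:, 1:]
    # Solve H0.T @ A.T ~= H1.T, i.e. A @ H0 ~= H1.
    A_t, *_ = np.linalg.lstsq(H0.T, H1.T, rcond=None)
    return A_t.T
```

If the targets truly follow a first-order recursion the fit is exact; otherwise the fitting error is the vector AR residual that the MA part then absorbs.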
Narrow-band interference cancellation for enhanced speaker identification
S.J. Wenndt, A. Noga
Pub Date: 1999-10-17 | DOI: 10.1109/ASPAA.1999.810865
While the cepstrum feature has been widely used for speaker identification (SID), studies have shown that it can be sensitive to changes in environmental conditions. Many experiments have examined the effects of additive white Gaussian noise on the cepstral feature, but few, if any, have been conducted using additive narrow-band interference. Since such interference appears unpredictably, due to adverse signal environments or equipment anomalies in communication systems, it is important to understand its impact, along with the effect of interference removal algorithms, on SID performance. This paper examines two interference removal algorithms for enhancing SID performance. One is a simple notch filter suitable for tone removal. The other is a newly introduced method suitable for mitigating more general forms of interference, including interfering signals that can be modeled as angle-modulated.
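The tone-removal branch can be sketched with a standard second-order IIR notch: zeros on the unit circle at the interfering frequency, poles just inside at the same angle (the pole radius below is an assumption, and the paper's second, angle-modulation-capable method is not reproduced here):

```python
import math

def notch_coeffs(f0, fs, r=0.95):
    # Zeros at e^(+-j*w0) null the tone exactly; poles at r*e^(+-j*w0)
    # keep the response near unity away from the notch.
    w0 = 2.0 * math.pi * f0 / fs
    b = [1.0, -2.0 * math.cos(w0), 1.0]
    a = [1.0, -2.0 * r * math.cos(w0), r * r]
    return b, a

def filt(b, a, x):
    # Direct-form I realization of the biquad.
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(3) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, 3) if n - k >= 0)
        y.append(acc)
    return y
```

A steady tone at the notch frequency is driven to zero after the transient, while components elsewhere in the speech band pass with near-unity gain.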