Pub Date: 2023-01-09 | DOI: 10.1109/SLT54892.2023.10023174
Yukai Ju, Shimin Zhang, Wei Rao, Yannan Wang, Tao Yu, Lei Xie, Shidong Shang
Personalized speech enhancement (PSE) utilizes additional cues, such as speaker embeddings, to remove background noise and interfering speech and extract the speech of the target speaker. Our previous work, the Tencent-Ethereal-Audio-Lab personalized speech enhancement (TEA-PSE) system, ranked first in the ICASSP 2022 deep noise suppression (DNS2022) challenge. In this paper, we extend TEA-PSE to its sub-band version, TEA-PSE 2.0, to reduce computational complexity and further improve performance. Specifically, we adopt finite impulse response filter banks and spectrum splitting to reduce computational complexity. We introduce a time-frequency convolution module (TFCM) into the system to increase the receptive field while using small convolution kernels. In addition, we explore several training strategies for optimizing the two-stage network and investigate various loss functions for the PSE task. TEA-PSE 2.0 significantly outperforms TEA-PSE in both speech enhancement performance and computational complexity. Experimental results on the DNS2022 blind test set show that TEA-PSE 2.0 brings a 0.102 OVRL personalized DNSMOS improvement while requiring only 21.9% of the multiply-accumulate operations of the previous TEA-PSE.
{"title":"TEA-PSE 2.0: Sub-Band Network for Real-Time Personalized Speech Enhancement","authors":"Yukai Ju, Shimin Zhang, Wei Rao, Yannan Wang, Tao Yu, Lei Xie, Shidong Shang","doi":"10.1109/SLT54892.2023.10023174","DOIUrl":"https://doi.org/10.1109/SLT54892.2023.10023174","url":null,"abstract":"Personalized speech enhancement (PSE) utilizes additional cues like speaker embeddings to remove background noise and interfering speech and extract the speech from target speaker. Previous work, the Tencent-Ethereal-Audio-Lab personalized speech enhancement (TEA-PSE) system, ranked 1st in the ICASSP 2022 deep noise suppression (DNS2022) challenge. In this paper, we expand TEA-PSE to its sub-band version - TEA-PSE 2.0, to reduce computational complexity as well as further improve performance. Specifically, we adopt finite impulse response filter banks and spectrum splitting to reduce computational complexity. We introduce a time frequency convolution module (TFCM) to the system for increasing the receptive field with small convolution kernels. Besides, we explore several training strategies to optimize the two-stage network and investigate various loss functions in the PSE task. TEA-PSE 2.0 significantly outperforms TEA-PSE in both speech enhancement performance and computation complexity. Experimental results on the DNS2022 blind test set show that TEA-PSE 2.0 brings 0.102 OVRL personalized DNSMOS improvement with only 21.9% multiply-accumulate operations compared with the previous TEA-PSE.","PeriodicalId":352002,"journal":{"name":"2022 IEEE Spoken Language Technology Workshop (SLT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131234174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-09 | DOI: 10.1109/SLT54892.2023.10023411
David Qiu, Tsendsuren Munkhdalai, Yanzhang He, K. Sim
Confidence estimation for automatic speech recognition (ASR) is important for many downstream tasks. Recently, neural confidence estimation models (CEMs) have been shown to produce accurate confidence scores for predicting word-level errors. These models are built on top of an end-to-end (E2E) ASR system, with the acoustic embeddings forming part of the input features. However, practical E2E ASR systems often incorporate contextual information in the decoder to improve rare word recognition. The CEM is not aware of this and underestimates the confidence of rare words that have been corrected by the context. In this paper, we propose a context-aware CEM that incorporates context into the encoder using a neural associative memory (NAM) model. It uses attention to detect the presence of biasing phrases and to modify the encoder features accordingly. Experiments show that the proposed context-aware CEM with NAM-augmented training improves the AUC-ROC for word error prediction from 0.837 to 0.892.
{"title":"Context-Aware Neural Confidence Estimation for Rare Word Speech Recognition","authors":"David Qiu, Tsendsuren Munkhdalai, Yanzhang He, K. Sim","doi":"10.1109/SLT54892.2023.10023411","DOIUrl":"https://doi.org/10.1109/SLT54892.2023.10023411","url":null,"abstract":"Confidence estimation for automatic speech recognition (ASR) is important for many downstream tasks. Recently, neural confidence estimation models (CEMs) have been shown to produce accurate confidence scores for predicting word-level errors. These models are built on top of an end-to-end (E2E) ASR and the acoustic embeddings are part of the input features. However, practical E2E ASR systems often incorporate contextual information in the decoder to improve rare word recognition. The CEM is not aware of this and underestimates the confidence of the rare words that have been corrected by the context. In this paper, we propose a context-aware CEM by incorporating context into the encoder using a neural associative memory (NAM) model. It uses attention to detect for presence of the biasing phrases and modify the encoder features. Experiments show that the proposed context-aware CEM using NAM augmented training can improve the AUC-ROC for word error prediction from 0.837 to 0.892.","PeriodicalId":352002,"journal":{"name":"2022 IEEE Spoken Language Technology Workshop (SLT)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116154827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-09 | DOI: 10.1109/SLT54892.2023.10023407
Cal Peyser, W. R. Huang, Tara N. Sainath, Rohit Prabhavalkar, M. Picheny, K. Cho
Dual learning is a paradigm for semi-supervised machine learning that seeks to leverage unsupervised data by solving two opposite tasks at once. In this scheme, each model is used to generate pseudo-labels for unlabeled examples, which are then used to train the other model. Dual learning has seen some use in speech processing by pairing ASR and TTS as dual tasks. However, these results mostly address only the case of using unpaired examples to compensate for very small supervised datasets, and mostly with large, non-streaming models. Dual learning has not yet been proven effective for using unsupervised data to improve realistic on-device streaming models that are already trained on large supervised corpora. We provide this missing piece through an analysis of an on-device-sized streaming conformer trained on the entirety of LibriSpeech, showing relative WER improvements of 10.7%/5.2% without an LM and 11.7%/16.4% with an LM.
{"title":"Dual Learning for Large Vocabulary On-Device ASR","authors":"Cal Peyser, W. R. Huang, Tara N. Sainath, Rohit Prabhavalkar, M. Picheny, K. Cho","doi":"10.1109/SLT54892.2023.10023407","DOIUrl":"https://doi.org/10.1109/SLT54892.2023.10023407","url":null,"abstract":"Dual learning is a paradigm for semi-supervised machine learning that seeks to leverage unsupervised data by solving two opposite tasks at once. In this scheme, each model is used to generate pseudo-labels for unlabeled examples that are used to train the other model. Dual learning has seen some use in speech processing by pairing ASR and TTS as dual tasks. However, these results mostly address only the case of using unpaired examples to compensate for very small supervised datasets, and mostly on large, non-streaming models. Dual learning has not yet been proven effective for using unsupervised data to improve realistic on-device streaming models that are already trained on large supervised corpora. We provide this missing piece though an analysis of an on-device-sized streaming conformer trained on the entirety of Librispeech, showing relative WER improvements of 10.7%/5.2% without an LM and 11.7%/16.4% with an LM.","PeriodicalId":352002,"journal":{"name":"2022 IEEE Spoken Language Technology Workshop (SLT)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124042107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-09 | DOI: 10.1109/SLT54892.2023.10022989
Liang Wen, Lizhong Wang, Y. Zhang, K. Choi
Audio bandwidth extension can enhance subjective sound quality by increasing the bandwidth of the audio signal. This paper presents a novel multi-stage progressive method for time-domain causal bandwidth extension. Each stage of the progressive model contains a lightweight scale-up module that generates the high-frequency signal and a supervised attention module that guides feature propagation between stages. A time-frequency two-step training method with a weighted loss over the progressive outputs is adopted so that bandwidth extension performance improves from stage to stage. Test results show that the multi-stage model improves both objective results and perceptual quality progressively. The multi-stage progressive model makes bandwidth extension performance adjustable according to energy consumption, computing capacity, and user preferences.
{"title":"Multi-Stage Progressive Audio Bandwidth Extension","authors":"Liang Wen, Lizhong Wang, Y. Zhang, K. Choi","doi":"10.1109/SLT54892.2023.10022989","DOIUrl":"https://doi.org/10.1109/SLT54892.2023.10022989","url":null,"abstract":"Audio bandwidth extension can enhance subjective sound quality by increasing bandwidth of audio signal. This paper presents a novel multi-stage progressive method for time domain causal bandwidth extension. Each stage of the progressive model contains a light weight scale-up module to generate high frequency signal and a supervised attention module to guide features propagating between stages. Time-frequency two-step training method with weighted loss for progressive output is adopted to supervise bandwidth extension performance improves along stages. Test results show that multi-stage model can improve both objective results and perceptual quality progressively. The multi-stage progressive model makes bandwidth extension performance adjustable according to energy consumption, computing capacity and user preferences.","PeriodicalId":352002,"journal":{"name":"2022 IEEE Spoken Language Technology Workshop (SLT)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127361200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-09 | DOI: 10.1109/SLT54892.2023.10022592
Suhaila M. Shakiah, R. Swaminathan, H. Nguyen, Raviteja Chinta, Tariq Afzal, Nathan Susanj, A. Mouchtaris, Grant P. Strimel, A. Rastrow
Machine learning model weights and activations are represented in full precision during training. This leads to performance degradation at runtime when models are deployed on neural network accelerator (NNA) chips, which leverage highly parallelized fixed-point arithmetic to improve runtime memory and latency. In this work, we replicate the NNA operators during the training phase, accounting in back-propagation for the degradation caused by low-precision inference on the NNA. Our proposed method efficiently emulates NNA operations, thus foregoing the need to transfer quantization-error-prone data to the central processing unit (CPU) and ultimately reducing the user-perceived latency (UPL). We apply our approach to the Recurrent Neural Network-Transducer (RNN-T), an attractive architecture for on-device streaming speech recognition tasks. We train and evaluate models on 270K hours of English data and show a 5-7% improvement in engine latency while avoiding up to 10% relative degradation in WER.
{"title":"Accelerator-Aware Training for Transducer-Based Speech Recognition","authors":"Suhaila M. Shakiah, R. Swaminathan, H. Nguyen, Raviteja Chinta, Tariq Afzal, Nathan Susanj, A. Mouchtaris, Grant P. Strimel, A. Rastrow","doi":"10.1109/SLT54892.2023.10022592","DOIUrl":"https://doi.org/10.1109/SLT54892.2023.10022592","url":null,"abstract":"Machine learning model weights and activations are represented in full-precision during training. This leads to performance degradation in runtime when deployed on neural network accelerator (NNA) chips, which leverage highly parallelized fixed-point arithmetic to improve runtime memory and latency. In this work, we replicate the NNA operators during the training phase, accounting for the degradation due to low-precision inference on the NNA in back-propagation. Our proposed method efficiently emulates NNA operations, thus foregoing the need to transfer quantization error-prone data to the Central Processing Unit (CPU), ultimately reducing the user perceived latency (UPL). We apply our approach to Recurrent Neural Network-Transducer (RNN-T), an attractive architecture for on-device streaming speech recognition tasks. We train and evaluate models on 270K hours of English data and show a 5-7% improvement in engine latency while saving up to 10% relative degradation in WER.","PeriodicalId":352002,"journal":{"name":"2022 IEEE Spoken Language Technology Workshop (SLT)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116341119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-09 | DOI: 10.1109/SLT54892.2023.10022486
Binghuai Lin, Liyuan Wang
This paper proposes an end-to-end pronunciation assessment method that exploits readily available native data and reduces the need for non-native data, which is costly to label. To obtain discriminative acoustic representations at the phoneme level, a pretrained wav2vec 2.0 model is re-trained with a connectionist temporal classification (CTC) loss for phoneme recognition using native data. These acoustic representations are fused with phoneme representations derived from a phoneme encoder to obtain the final pronunciation scores. An efficient fusion mechanism aligns each phoneme with acoustic frames based on attention, where all blank frames recognized by the CTC-based phoneme recognizer are masked. Finally, the whole network is optimized within a multi-task learning framework combining the CTC loss and a mean square error loss between predicted and human scores. Extensive experiments demonstrate that the method outperforms previous baselines in Pearson correlation coefficient even with much less labeled non-native data.
{"title":"Exploiting Information From Native Data for Non-Native Automatic Pronunciation Assessment","authors":"Binghuai Lin, Liyuan Wang","doi":"10.1109/SLT54892.2023.10022486","DOIUrl":"https://doi.org/10.1109/SLT54892.2023.10022486","url":null,"abstract":"This paper proposes an end-to-end pronunciation assessment method to exploit the adequate native data and reduce the need for non-native data costly to label. To obtain discriminative acoustic representations at the phoneme level, the pretrained wav2vec 2.0 is re-trained with connectionist temporal classification (CTC) loss for phoneme recognition using native data. These acoustic representations are fused with phoneme representations derived from a phoneme encoder to obtain final pronunciation scores. An efficient fusion mechanism aligns each phoneme with acoustic frames based on attention, where all blank frames recognized by the CTC-based phoneme recognition are masked. Finally, the whole network is optimized by a multi-task learning framework combining CTC loss and mean square error loss between predicted and human scores. Extensive experiments demonstrate that it outperforms previous baselines in the Pearson correlation coefficient even with much fewer labeled non-native data.","PeriodicalId":352002,"journal":{"name":"2022 IEEE Spoken Language Technology Workshop (SLT)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116928790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-09 | DOI: 10.1109/SLT54892.2023.10022986
Woohyun Kang, J. Alam, A. Fathan
In recent years, various deep learning-based embedding methods have been proposed. Although deep learning-based embedding extraction has shown good performance in numerous tasks, including speaker verification, language identification, and anti-spoofing, its performance is limited under mismatched conditions because of variability within the embeddings that is unrelated to the main task. To alleviate this problem, we propose a novel training strategy that regularizes the embedding network to carry minimal information about nuisance attributes. To achieve this, our proposed method directly incorporates the information bottleneck scheme into the training process, where the mutual information is estimated using an auxiliary normalizing flow network. The proposed method is evaluated on different speech processing tasks and is found to improve over the standard training strategy in all experiments.
{"title":"Flow-ER: A Flow-Based Embedding Regularization Strategy for Robust Speech Representation Learning","authors":"Woohyun Kang, J. Alam, A. Fathan","doi":"10.1109/SLT54892.2023.10022986","DOIUrl":"https://doi.org/10.1109/SLT54892.2023.10022986","url":null,"abstract":"Over the recent years, various deep learning-based embedding methods were proposed. Although the deep learning-based embedding extraction methods have shown good performance in numerous tasks including speaker verification, language identification and anti-spoofing, their performance is limited when it comes to mismatched conditions due to the variability within them unrelated to the main task. In order to alleviate this problem, we propose a novel training strategy that regularizes the embedding network to have minimum information about the nuisance attributes. To achieve this, our proposed method directly incorporates the information bottleneck scheme into the training process, where the mutual information is estimated using an auxiliary normalizing flow network. The performance of the proposed method is evaluated on different speech processing tasks and found to provide improvement over the standard training strategy in all experimentations.","PeriodicalId":352002,"journal":{"name":"2022 IEEE Spoken Language Technology Workshop (SLT)","volume":"6 50","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132815784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-09 | DOI: 10.1109/SLT54892.2023.10022435
A. Favaro, C. Motley, Tianyu Cao, Miguel Iglesias, A. Butala, E. Oh, R. Stevens, J. Villalba, N. Dehak, L. Moro-Velázquez
Speech-based automatic approaches for evaluating neurological disorders (NDs) depend on feature extraction before the classification pipeline. It is preferable for these features to be interpretable to facilitate their development as diagnostic tools. This study focuses on the analysis of interpretable features obtained from the spoken responses of 88 subjects with NDs and controls (CN). Subjects with NDs have Alzheimer's disease (AD), Parkinson's disease (PD), or Parkinson's disease mimics (PDM). We configured three complementary sets of features related to cognition, speech, and language, and conducted a statistical analysis to examine which features differed between NDs and CN. Results suggested that features capturing response informativeness, reaction times, vocabulary richness, and syntactic complexity provided separability between AD and CN. Similarly, fundamental frequency variability helped differentiate PD from CN, while the number of salient informational units helped differentiate PDM from CN.
{"title":"A Multi-Modal Array of Interpretable Features to Evaluate Language and Speech Patterns in Different Neurological Disorders","authors":"A. Favaro, C. Motley, Tianyu Cao, Miguel Iglesias, A. Butala, E. Oh, R. Stevens, J. Villalba, N. Dehak, L. Moro-Velázquez","doi":"10.1109/SLT54892.2023.10022435","DOIUrl":"https://doi.org/10.1109/SLT54892.2023.10022435","url":null,"abstract":"Speech-based automatic approaches for evaluating neurological disorders (NDs) depend on feature extraction before the classification pipeline. It is preferable for these features to be interpretable to facilitate their development as diagnostic tools. This study focuses on the analysis of interpretable features obtained from the spoken responses of 88 subjects with NDs and controls (CN). Subjects with NDs have Alzheimer's disease (AD), Parkinson's disease (PD), or Parkinson's disease mimics (PDM). We configured three complementary sets of features related to cognition, speech, and language, and conducted a statistical analysis to examine which features differed between NDs and CN. Results suggested that features capturing response informativeness, reaction times, vocabulary richness, and syntactic complexity provided separability between AD and CN. Similarly, fundamental frequency variability helped differentiate PD from CN, while the number of salient informational units PDM from CN.","PeriodicalId":352002,"journal":{"name":"2022 IEEE Spoken Language Technology Workshop (SLT)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114860381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-09 | DOI: 10.1109/SLT54892.2023.10022637
Tianyu Cao, L. Moro-Velázquez, Piotr Żelasko, J. Villalba, N. Dehak
Vowel space area (VSA) is a useful metric for studying speech production deficits and intelligibility. Previous work suggests that the VSA accounts for almost 50% of the intelligibility variance, making it an essential component of global intelligibility estimates. However, almost no study publishes a tool to estimate VSA automatically with publicly available code. In this paper, we propose an open-source tool called VSAmeter that automatically measures VSA and the vowel articulation index (VAI), and we validate it against the VSA and VAI obtained from a dataset in which the formants and phone segments have been annotated manually. The results show that the VSA and VAI values obtained by our proposed method correlate strongly with those derived from manually extracted F1 and F2 values and manual alignments. Such a method can be utilized in speech applications, e.g., the automatic measurement of VAI for the evaluation of speakers with dysarthria.
{"title":"Vsameter: Evaluation of a New Open-Source Tool to Measure Vowel Space Area and Related Metrics","authors":"Tianyu Cao, L. Moro-Velázquez, Piotr Żelasko, J. Villalba, N. Dehak","doi":"10.1109/SLT54892.2023.10022637","DOIUrl":"https://doi.org/10.1109/SLT54892.2023.10022637","url":null,"abstract":"Vowel space area (VSA) is an applicable metric for studying speech production deficits and intelligibility. Previous works suggest that the VSA accounts for almost 50% of the intelligibility variance, being an essential component of global intelligibility estimates. However, almost no study publishes a tool to estimate VSA automatically with publicly available codes. In this paper, we propose an open-source tool called VSAmeter to measure VSA and vowel articulation index (VAI) automatically and validate it with the VSA and VAI obtained from a dataset in which the formants and phone segments have been annotated manually. The results show that VSA and VAI values obtained by our proposed method strongly correlate with those generated by manually extracted F1 and F2 and alignments. Such a method can be utilized in speech applications, e.g., the automatic measurement of VAI for the evaluation of speakers with dysarthria.","PeriodicalId":352002,"journal":{"name":"2022 IEEE Spoken Language Technology Workshop (SLT)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116446209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}