Design of arbitrary delay filterbank having arbitrary order for audio applications
Pub Date: 2013-10-01 | DOI: 10.1109/WASPAA.2013.6701886
A. Vijayakumar, A. Makur
The literature shows that the design of filterbanks having pth-order analysis and qth-order synthesis filters (p ≠ q), together with the flexibility to control the system delay, has never been addressed concomitantly. In this paper, we propose a systematic design for a filterbank of order (p, q) with arbitrary delay. Such filterbanks play an important role especially in applications that require low-delay, high-quality signals, such as digital hearing aids.
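The delay trade-off the abstract refers to can be made concrete with a toy two-channel bank. The sketch below (plain NumPy; the filter coefficients are invented for illustration and are not the paper's design) builds an alias-cancelling QMF-style pair and reads off the system delay as the dominant tap of the overall transfer function T(z) = (H0(z)F0(z) + H1(z)F1(z))/2:

```python
import numpy as np

# Toy two-channel alias-cancelling bank; these coefficients are invented for
# illustration and are NOT the (p, q)-order filters designed in the paper.
h0 = np.array([0.1, 0.6, 0.6, 0.1])       # lowpass analysis filter
mod = np.array([1, -1, 1, -1])            # (-1)^n modulation
h1 = h0 * mod                             # highpass analysis: H1(z) = H0(-z)
f0 = h1 * mod                             # synthesis F0(z) = H1(-z) cancels aliasing
f1 = -h0 * mod                            # synthesis F1(z) = -H0(-z)

# Overall distortion transfer function T(z) = (H0(z)F0(z) + H1(z)F1(z)) / 2;
# for perfect reconstruction it must equal c * z^(-d) for some delay d.
t = 0.5 * (np.convolve(h0, f0) + np.convolve(h1, f1))
print("T(z) taps:", np.round(t, 3))
print("system delay d (dominant tap):", int(np.argmax(np.abs(t))))
```

For this crude bank, T(z) is only approximately a pure delay; a design such as the one proposed in the paper shapes both the residual distortion and the location d of that dominant tap.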
{"title":"Design of arbitrary delay filterbank having arbitrary order for audio applications","authors":"A. Vijayakumar, A. Makur","doi":"10.1109/WASPAA.2013.6701886","DOIUrl":"https://doi.org/10.1109/WASPAA.2013.6701886","url":null,"abstract":"Literature shows that the design criteria of pth order analysis having qth order synthesis filters (p ≠ q) with a flexibility to control the system delay has never been addressed concomitantly. In this paper, we propose a systematic design for a filterbank that can have arbitrary delay with a (p, q) order. Such filterbanks play an important role especially in applications where low delay-high quality signals are required, like a digital hearing aid.","PeriodicalId":341888,"journal":{"name":"2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130774493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the use of spectro-temporal features for the IEEE AASP challenge ‘detection and classification of acoustic scenes and events’
Pub Date: 2013-10-01 | DOI: 10.1109/WASPAA.2013.6701868
Jens Schröder, Niko Moritz, M. R. Schädler, Benjamin Cauchi, K. Adiloglu, J. Anemüller, S. Doclo, B. Kollmeier, Stefan Goetze
In this contribution, an acoustic event detection system based on spectro-temporal features with a two-layer hidden Markov model as back-end is proposed within the framework of the IEEE AASP challenge 'Detection and Classification of Acoustic Scenes and Events' (D-CASE). Noise reduction based on the log-spectral amplitude estimator of [1] and the noise power density estimator of [2] is used for signal enhancement. Performance is compared for three different kinds of features, i.e., amplitude modulation spectrograms, Gabor filterbank features, and conventional Mel-frequency cepstral coefficients (MFCCs), all of them known from automatic speech recognition (ASR). The evaluation is based on the office live recordings provided within the D-CASE challenge. The influence of the signal enhancement is investigated, and the increase in recognition rate achieved by the proposed features over MFCC features is shown. It is demonstrated that the proposed spectro-temporal features achieve better recognition accuracy than MFCCs.
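As a rough illustration of one of the three front-ends, the following sketch (NumPy only; the frame sizes and modulation window length are arbitrary choices, not the paper's settings) computes an amplitude-modulation-spectrogram-style feature by running a second spectral analysis over the temporal envelope of each STFT band:

```python
import numpy as np

def stft_mag(x, n_fft=512, hop=128):
    """Magnitude STFT with a Hann window (plain NumPy, illustration only)."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))    # (frames, freq bins)

def amplitude_modulation_spectrogram(x, mod_len=32):
    """Second spectral analysis over each band's temporal envelope."""
    S = stft_mag(x)
    n_seg = S.shape[0] // mod_len
    segs = S[: n_seg * mod_len].reshape(n_seg, mod_len, -1)
    # FFT along the time (modulation) axis, separately per acoustic band
    return np.abs(np.fft.rfft(segs, axis=1))                 # (segs, mod bins, bands)

x = np.random.randn(16000)   # stand-in for one second of office audio at 16 kHz
print(amplitude_modulation_spectrogram(x).shape)
```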
{"title":"On the use of spectro-temporal features for the IEEE AASP challenge ‘detection and classification of acoustic scenes and events’","authors":"Jens Schröder, Niko Moritz, M. R. Schädler, Benjamin Cauchi, K. Adiloglu, J. Anemüller, S. Doclo, B. Kollmeier, Stefan Goetze","doi":"10.1109/WASPAA.2013.6701868","DOIUrl":"https://doi.org/10.1109/WASPAA.2013.6701868","url":null,"abstract":"In this contribution, an acoustic event detection system based on spectro-temporal features and a two-layer hidden Markov model as back-end is proposed within the framework of the IEEE AASP challenge `Detection and Classification of Acoustic Scenes and Events' (D-CASE). Noise reduction based on the log-spectral amplitude estimator by [1] and noise power density estimation by [2] is used for signal enhancement. Performance based on three different kinds of features is compared, i.e. for amplitude modulation spectrogram, Gabor filterbank-features and conventional Mel-frequency cepstral coefficients (MFCCs), all of them known from automatic speech recognition (ASR). The evaluation is based on the office live recordings provided within the D-CASE challenge. The influence of the signal enhancement is investigated and the increase in recognition rate by the proposed features in comparison to MFCC-features is shown. It is demonstrated that the proposed spectro-temporal features achieve a better recognition accuracy than MFCCs.","PeriodicalId":341888,"journal":{"name":"2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","volume":"181 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116654944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ensemble learning for speech enhancement
Pub Date: 2013-10-01 | DOI: 10.1109/WASPAA.2013.6701888
Jonathan Le Roux, Shinji Watanabe, J. Hershey
Over the years, countless algorithms have been proposed to solve the problem of speech enhancement from a noisy mixture. Many have succeeded in improving at least parts of the signal, while often deteriorating others. Based on the assumption that different algorithms are likely to enjoy different qualities and suffer from different flaws, we investigate the possibility of combining the strengths of multiple speech enhancement algorithms, formulating the problem in an ensemble learning framework. As a first example of such a system, we consider the prediction of a time-frequency mask obtained from the clean speech, based on the outputs of various algorithms applied on the noisy mixture. We consider several approaches involving various notions of context and various machine learning algorithms for classification, in the case of binary masks, and regression, in the case of continuous masks. We show that combining several algorithms in this way can lead to an improvement in enhancement performance, while simple averaging or voting techniques fail to do so.
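A minimal sketch of the mask-prediction idea follows, with synthetic placeholder data and logistic regression standing in for the various classifiers and regressors the paper compares (scikit-learn assumed available):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic placeholders: K enhancement algorithms, T frames, F frequency bins.
# Real features would be the algorithms' outputs on the noisy mixture, and the
# labels the binary mask computed from the premixed clean speech.
K, T, F = 3, 200, 257
rng = np.random.default_rng(0)
outputs = rng.random((K, T, F))          # stacked per-algorithm T-F outputs
ibm = rng.random((T, F)) > 0.5           # oracle binary mask (placeholder)

X = outputs.reshape(K, -1).T             # one K-dim feature per T-F unit
y = ibm.ravel()

# logistic regression stands in for the paper's classifiers; the contextual
# variants would concatenate neighbouring T-F units into each feature vector
clf = LogisticRegression(max_iter=1000).fit(X, y)
mask = clf.predict(X).reshape(T, F)      # predicted mask for enhancement
print("training accuracy:", round(clf.score(X, y), 3))
```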
{"title":"Ensemble learning for speech enhancement","authors":"Jonathan Le Roux, Shinji Watanabe, J. Hershey","doi":"10.1109/WASPAA.2013.6701888","DOIUrl":"https://doi.org/10.1109/WASPAA.2013.6701888","url":null,"abstract":"Over the years, countless algorithms have been proposed to solve the problem of speech enhancement from a noisy mixture. Many have succeeded in improving at least parts of the signal, while often deteriorating others. Based on the assumption that different algorithms are likely to enjoy different qualities and suffer from different flaws, we investigate the possibility of combining the strengths of multiple speech enhancement algorithms, formulating the problem in an ensemble learning framework. As a first example of such a system, we consider the prediction of a time-frequency mask obtained from the clean speech, based on the outputs of various algorithms applied on the noisy mixture. We consider several approaches involving various notions of context and various machine learning algorithms for classification, in the case of binary masks, and regression, in the case of continuous masks. We show that combining several algorithms in this way can lead to an improvement in enhancement performance, while simple averaging or voting techniques fail to do so.","PeriodicalId":341888,"journal":{"name":"2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123296057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tracking pitch period using particle filters
Pub Date: 2013-10-01 | DOI: 10.1109/WASPAA.2013.6701846
Geliang Zhang, S. Godsill
Pitch tracking is used in many speech processing applications. Most current time-domain pitch estimation techniques rely on autocorrelation methods and the average magnitude difference function. This paper tracks the pitch period of speech using the particle filter approach. A simple model is proposed to capture the pitch-period variations of noisy speech during voiced segments. The performance of the proposed method is compared with standard pitch detection algorithms. Simulation results show that the proposed method can track the pitch period even in the presence of strong noise, suggesting that the particle filter approach is a viable alternative for the pitch tracking problem.
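A minimal bootstrap particle filter in this spirit might look as follows; the random-walk dynamics, the autocorrelation-based likelihood, and all constants are our own illustrative choices, not the model proposed in the paper:

```python
import numpy as np

def track_pitch_pf(frames, n_particles=500, sigma_q=2.0):
    """Bootstrap particle filter over the pitch period (in samples)."""
    rng = np.random.default_rng(0)
    particles = rng.uniform(80, 240, n_particles)   # plausible periods for this toy
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for frame in frames:
        # propagate: the period drifts slowly during voiced speech (random walk)
        particles = np.clip(particles + sigma_q * rng.standard_normal(n_particles), 80, 240)
        # weight: normalized autocorrelation of the frame at each particle's lag
        energy = frame @ frame + 1e-12
        r = np.array([frame[l:] @ frame[:-l] for l in particles.astype(int)]) / energy
        weights *= np.exp(5.0 * r)                  # 5.0: arbitrary sharpening gain
        weights /= weights.sum()
        estimates.append(particles @ weights)       # posterior-mean period estimate
        if 1.0 / (weights @ weights) < n_particles / 2:   # resample on low ESS
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)

# toy usage: noisy 100 Hz square wave at 16 kHz, 32 ms frames;
# the estimates should settle near fs / f0 = 160 samples
fs, f0 = 16000, 100
t = np.arange(fs)
x = np.sign(np.sin(2 * np.pi * f0 * t / fs)) + 0.5 * np.random.randn(fs)
print(track_pitch_pf(x[: 31 * 512].reshape(31, 512))[-5:].round(1))
```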
{"title":"Tracking pitch period using particle filters","authors":"Geliang Zhang, S. Godsill","doi":"10.1109/WASPAA.2013.6701846","DOIUrl":"https://doi.org/10.1109/WASPAA.2013.6701846","url":null,"abstract":"Pitch tracking has been used in many speech processing applications. Most present time domain techniques in pitch estimation mainly use autocorrelation methods and the average magnitude difference functions. This paper aims to track pitch period of speech using the particle filter approach. A simple model has been proposed to capture the pitch period variations of noisy speech during voiced periods. Performance of the proposed method is compared with standard pitch detection algorithms. Simulation results show that the proposed method can track the pitch period even if strong noise exists. It suggests that the particle filter approach could be an alternative way to address the pitch tracking problem.","PeriodicalId":341888,"journal":{"name":"2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127665749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Average output SNR of the multichannel Wiener filter using statistical room acoustics
Pub Date: 2013-10-01 | DOI: 10.1109/WASPAA.2013.6701813
Toby Christian Lawin-Ore, S. Doclo
The performance of the multichannel Wiener filter (MWF), which is often used for noise reduction in speech enhancement applications, depends on the noise field and on the acoustic transfer functions (ATFs) between the desired source and the microphone array. Recently, using statistical room acoustics, an analytical expression for the spatially averaged output SNR of the MWF in a diffuse noise field, given the relative distance between the source and the microphone array, has been derived; it requires only the room properties to be known. In this paper, we show that this analytical expression can be extended to compute the average output SNR of the MWF for a specific microphone configuration, enabling a comparison of the performance of different microphone configurations, e.g., in an acoustic sensor network. Simulation results show that the average output SNR obtained using the statistical properties of the ATFs is similar to the average output SNR obtained using simulated ATFs, thus providing an efficient way to compare different microphone configurations.
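For a rank-1 desired source, the narrowband quantity being averaged is the classical MWF output SNR φ_s · a^H Φ_v^{-1} a. The sketch below evaluates it for a hypothetical 3-microphone array in an ideal diffuse field; the all-ones anechoic ATF is a stand-in for the ATF statistics the paper derives:

```python
import numpy as np

def mwf_output_snr(a, Phi_v, phi_s=1.0):
    """Narrowband MWF output SNR for a rank-1 source: phi_s * a^H Phi_v^{-1} a."""
    return np.real(phi_s * a.conj() @ np.linalg.solve(Phi_v, a))

def diffuse_coherence(dists, freq, c=343.0):
    """Spatial coherence of an ideal diffuse field (np.sinc includes the pi)."""
    return np.sinc(2 * freq * dists / c)

# hypothetical 3-microphone line array with 5 cm spacing, one bin at 1 kHz
pos = np.array([0.0, 0.05, 0.10])
dists = np.abs(pos[:, None] - pos[None, :])
Phi_v = diffuse_coherence(dists, freq=1000.0)

# an all-ones ATF (anechoic, broadside source) is a stand-in here; the paper
# replaces simulated ATFs with their statistics from statistical room acoustics
a = np.ones(3, dtype=complex)
print("output SNR at 1 kHz:", mwf_output_snr(a, Phi_v))
```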
{"title":"Average output SNR of the multichannel Wiener filter using statistical room acoustics","authors":"Toby Christian Lawin-Ore, S. Doclo","doi":"10.1109/WASPAA.2013.6701813","DOIUrl":"https://doi.org/10.1109/WASPAA.2013.6701813","url":null,"abstract":"The performance of the multi-channel Wiener filter (MWF), which is often used for noise reduction in speech enhancement applications, depends on the noise field and on the acoustic transfer functions (ATFs) between the desired source and the microphone array. Recently, using statistical room acoustics an analytical expression for the spatially averaged output SNR, given the relative distance between the source and the microphone array, has been derived for the MWF in a diffuse noise field, requiring only the room properties to be known. In this paper, we show that this analytical expression can be extended to compute the average output SNR of the MWF for a specific microphone configuration, enabling to compare the performance of different microphone configurations, e.g. in an acoustic sensor network. Simulation results show that the average output SNR obtained using the statistical properties of ATFs is similar to the average output SNR obtained using simulated ATFs, therefore providing an efficient way to compare different microphone configurations.","PeriodicalId":341888,"journal":{"name":"2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126380269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tapping-noise suppression with magnitude-weighted phase-based detection
Pub Date: 2013-10-01 | DOI: 10.1109/WASPAA.2013.6701871
A. Sugiyama, Ryoji Miyahara
This paper proposes tapping-noise suppression with a new phase-based detection method. The phase slope of the noisy input signal is compared with an ideal phase slope obtained by averaging intra-frame slopes along the frequency axis. To cope with the heavily low-pass characteristic of the tapping-noise spectrum, phase values are weighted by the magnitude at each frequency point. The phase-unwrapping problem is alleviated by using a rotation vector of frequency-domain components. Comparison of the enhanced signal's spectrogram with that of clean speech demonstrates superior enhanced-signal quality.
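The rotation-vector trick can be sketched in a few lines: the angle of X[k+1]·conj(X[k]) is the inter-bin phase increment, its magnitude supplies the weighting, and no unwrapping is ever needed. This is our reading of the mechanism, not the authors' code:

```python
import numpy as np

def weighted_phase_slope(frame):
    """Magnitude-weighted average inter-bin phase increment, no unwrapping."""
    X = np.fft.rfft(frame)
    # each rotation vector's angle is the phase increment between adjacent
    # bins, and its magnitude |X[k+1]||X[k]| supplies the weighting for free
    rot = X[1:] * np.conj(X[:-1])
    return np.angle(rot.sum())

# sanity check: an impulsive "tap" d samples into an N-sample frame is a pure
# delay, whose phase slope per bin is -2*pi*d/N
N, d = 512, 37
frame = np.zeros(N)
frame[d] = 1.0
print(weighted_phase_slope(frame), -2 * np.pi * d / N)
```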
{"title":"Tapping-noise suppression with magnitude-weighted phase-based detection","authors":"A. Sugiyama, Ryoji Miyahara","doi":"10.1109/WASPAA.2013.6701871","DOIUrl":"https://doi.org/10.1109/WASPAA.2013.6701871","url":null,"abstract":"This paper proposes tapping noise suppression with a new phase-based detection. Phase slope of the input noisy signal is compared with an ideal phase slope obtained from an average of intra-frame slopes along the frequency axis. In order to cope with heavily low-pass characteristics of tapping noise spectrum, phase values are weighted with the magnitude at each frequency point. Phase unwrapping problem is alleviated by use of a rotation vector of frequency domain components. Comparison of enhanced signal spectrogram with that of clean speech demonstrates superior enhanced signal quality.","PeriodicalId":341888,"journal":{"name":"2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","volume":"72 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131998355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multichannel Wiener filter with partial equalization for distributed microphones
Pub Date: 2013-10-01 | DOI: 10.1109/WASPAA.2013.6701874
S. Stenzel, Toby Christian Lawin-Ore, J. Freudenberger, S. Doclo
In speech enhancement applications, the multichannel Wiener filter (MWF) is widely used to reduce noise and thus improve signal quality. The MWF performs noise reduction by estimating the desired signal component in one of the microphones, referred to as the reference microphone. For distributed microphones, however, the selection of the reference microphone has a significant impact on the broadband output SNR of the MWF, which depends largely on the acoustic transfer function (ATF) between the desired source and the reference microphone. In this paper, a multichannel Wiener filtering approach using a soft combined reference is presented. Simulation results show that the proposed scheme leads to a higher broadband output SNR than an arbitrarily selected reference microphone, while also achieving a partial equalization of the overall acoustic system.
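Our reading of the soft-reference idea, as a narrowband toy (the ATFs, noise covariance, and magnitude-based combination weights are all made up for illustration): the MWF estimates ref^H a · s, where ref is either a one-hot vector (hard reference) or a soft combination:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 4
a = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # made-up ATF vector
Phi_v = 0.1 * np.eye(M)                                   # toy noise covariance
phi_s = 1.0                                               # desired-source PSD

def mwf(ref):
    """MWF estimating ref^H a * s: w = (Phi_x + Phi_v)^{-1} Phi_x ref."""
    Phi_x = phi_s * np.outer(a, a.conj())
    return np.linalg.solve(Phi_x + Phi_v, Phi_x @ ref)

w_hard = mwf(np.eye(M)[0])                 # hard choice: microphone 0
w_soft = mwf(np.abs(a) / np.abs(a).sum())  # soft combination (illustrative weights)

# In a single bin, any reference only rescales the same underlying spatial
# filter; the broadband SNR gain reported above arises from how this per-bin
# scaling varies across frequency, which a soft reference can choose favourably.
print(np.round(w_hard, 3))
print(np.round(w_soft, 3))
```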
{"title":"A multichannel Wiener filter with partial equalization for distributed microphones","authors":"S. Stenzel, Toby Christian Lawin-Ore, J. Freudenberger, S. Doclo","doi":"10.1109/WASPAA.2013.6701874","DOIUrl":"https://doi.org/10.1109/WASPAA.2013.6701874","url":null,"abstract":"In speech enhancement applications, the multichannel Wiener filter (MWF) is widely used to reduce noise and thus improve signal quality. The MWF performs noise reduction by estimating the desired signal component in one of the microphones, referred to as the reference microphone. However, for distributed microphones, the selection of the reference microphone has a significant impact on the broadband output SNR of the MWF, largely depending on the acoustical transfer function (ATF) between the desired source and the reference microphone. In this paper, a multichannel Wiener filtering approach using a soft combined reference is presented. Simulation results show that the proposed scheme leads to a higher broadband output SNR compared to an arbitrarily selected reference microphone, moreover achieving a partial equalization of the overall acoustic system.","PeriodicalId":341888,"journal":{"name":"2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134244311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Keynote addresses: From auditory masking to binary classification: Machine learning for speech separation
Pub Date: 2013-10-01 | DOI: 10.1109/WASPAA.2013.6701900
Deliang Wang
Summary form only given. Speech separation, or the cocktail party problem, is a widely acknowledged challenge. Part of the challenge stems from confusion about what the computational goal should be. While the separation of every sound source in a mixture is considered the gold standard, I argue that such an objective is neither realistic nor what the human auditory system does. Motivated by the auditory masking phenomenon, we have suggested instead the ideal time-frequency binary mask as a main goal for computational auditory scene analysis. This leads to a new formulation of speech separation that classifies time-frequency units into two classes: those dominated by the target speech and the rest. In supervised learning, a paramount issue is generalization to conditions unseen during training. I describe novel methods to deal with the generalization issue where support vector machines (SVMs) are used to estimate the ideal binary mask. One method employs distribution fitting to adapt to unseen signal-to-noise ratios and iterative voice activity detection to adapt to unseen noises. Another method learns more linearly separable features using deep neural networks (DNNs) and then couples a DNN with a linear SVM for training on a variety of noisy conditions. Systematic evaluations show high-quality separation in new acoustic environments.
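The ideal binary mask at the heart of this formulation is simple to state in code. The sketch below uses a 0 dB local criterion and random placeholder STFTs (the threshold and the data are illustrative only):

```python
import numpy as np

def ideal_binary_mask(S_clean, S_noise, lc_db=0.0):
    """IBM: a T-F unit is target-dominated when its local SNR exceeds lc_db."""
    local_snr_db = 20.0 * np.log10(
        (np.abs(S_clean) + 1e-12) / (np.abs(S_noise) + 1e-12))
    return (local_snr_db > lc_db).astype(float)

# placeholder STFTs; in practice they come from the premixed speech and noise
rng = np.random.default_rng(0)
shape = (100, 257)
S_clean = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
S_noise = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
print("target-dominated fraction:", ideal_binary_mask(S_clean, S_noise).mean())
```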
{"title":"Keynote addresses: From auditory masking to binary classification: Machine learning for speech separation","authors":"Deliang Wang","doi":"10.1109/WASPAA.2013.6701900","DOIUrl":"https://doi.org/10.1109/WASPAA.2013.6701900","url":null,"abstract":"Summary form only given. Speech separation, or the cocktail party problem, is a widely acknowledged challenge. Part of the challenge stems from the confusion of what the computational goal should be. While the separation of every sound source in a mixture is considered the gold standard, I argue that such an objective is neither realistic nor what the human auditory system does. Motivated by the auditory masking phenomenon, we have suggested instead the ideal time-frequency binary mask as a main goal for computational auditory scene analysis. This leads to a new formulation to speech separation that classifies time-frequency units into two classes: those dominated by the target speech and the rest. In supervised learning, a paramount issue is generalization to conditions unseen during training. I describe novel methods to deal with the generalization issue where support vector machines (SVMs) are used to estimate the ideal binary mask. One method employs distribution fitting to adapt to unseen signal-to-noise ratios and iterative voice activity detection to adapt to unseen noises. Another method learns more linearly separable features using deep neural networks (DNNs) and then couples DNN and linear SVM for training on a variety of noisy conditions. Systematic evaluations show high quality separation in new acoustic environments.","PeriodicalId":341888,"journal":{"name":"2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131787070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identifying salient sounds using dual-task experiments
Pub Date: 2013-10-01 | DOI: 10.1109/WASPAA.2013.6701865
Varinthira Duangudom, David V. Anderson
Auditory saliency refers to the characteristics of a sound that cause it to attract the attention of a listener. Pre-attentive, or bottom-up, saliency involves automatic processing in the human auditory system that does not require, and often precedes, attention. Unlike visual saliency, where eye tracking is a commonly used evaluation method, auditory saliency has no easily trackable physical correlate that can be used for evaluation. Other auditory saliency models [1, 2] have been evaluated using tests that did not specifically target bottom-up saliency. In this paper, we present a method to conclusively isolate bottom-up auditory saliency. Bottom-up saliency also has several important applications in auditory scene analysis, auditory display design and analysis, and speech processing.
{"title":"Identifying salient sounds using dual-task experiments","authors":"Varinthira Duangudom, David V. Anderson","doi":"10.1109/WASPAA.2013.6701865","DOIUrl":"https://doi.org/10.1109/WASPAA.2013.6701865","url":null,"abstract":"Auditory saliency refers to the characteristics of a sound that cause it to attract the attention of a listener. Pre-attentive or bottom-up saliency has to do with automatic processing in the human auditory system that does not require and often precedes attention. Unlike visual saliency, where eye-tracking is a commonly used evaluation method, with auditory saliency, there is no easily trackable physical correlate that can be used for evaluation. Other auditory saliency models [1, 2] have been evaluated using tests that did not specifically target bottom-up saliency. In this paper, we present a method to conclusively isolate bottom-up auditory saliency. There are also several important applications to bottom-up saliency in auditory scene analysis, auditory display design and analysis, and speech processing.","PeriodicalId":341888,"journal":{"name":"2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133427678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Employing moments of multiple high orders for high-resolution underdetermined DOA estimation based on MUSIC
Pub Date: 2013-10-01 | DOI: 10.1109/WASPAA.2013.6701866
Yuya Sugimoto, S. Miyabe, Takeshi Yamada, S. Makino, B. Juang
Several extensions of the MUltiple SIgnal Classification (MUSIC) algorithm exploiting higher-order statistics have been proposed to estimate directions of arrival (DOAs) with high resolution in underdetermined conditions. However, these methods entail a trade-off between two performance goals, robustness and resolution, in the choice of order, because higher-order statistics increase not only the resolution but also the statistical bias. To overcome this problem, this paper proposes a new extension of MUSIC using a nonlinear high-dimensional map, which corresponds to a joint analysis of moments of multiple orders and realizes both the robustness of low-order statistics and the high resolution of high-order statistics. Experimental results show that the proposed method estimates DOAs more accurately than conventional MUSIC extensions that exploit moments of a single high order.
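For reference, the conventional second-order MUSIC baseline that the proposed multi-order extension builds on can be sketched as follows (uniform linear array with half-wavelength spacing; this is the standard algorithm, not the paper's nonlinear-map extension):

```python
import numpy as np

def music_spectrum(X, n_src, d=0.5):
    """Conventional MUSIC pseudospectrum for a ULA (spacing d in wavelengths)."""
    M, N = X.shape
    R = X @ X.conj().T / N                     # sample (second-order) covariance
    _, vecs = np.linalg.eigh(R)                # eigenvalues in ascending order
    En = vecs[:, : M - n_src]                  # noise-subspace eigenvectors
    grid = np.linspace(-90, 90, 361)
    steer = np.exp(-2j * np.pi * d * np.outer(np.arange(M), np.sin(np.deg2rad(grid))))
    proj = En.conj().T @ steer                 # projection onto the noise subspace
    return grid, 1.0 / np.sum(np.abs(proj) ** 2, axis=0)

# toy usage: two sources at -20 and +30 degrees, 4-sensor half-wavelength ULA
rng = np.random.default_rng(0)
M, N = 4, 2000
doas = np.deg2rad([-20.0, 30.0])
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(doas)))
X = A @ (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N)))
X += 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

grid, spec = music_spectrum(X, n_src=2)
peaks = [i for i in range(1, 360) if spec[i - 1] < spec[i] > spec[i + 1]]
top = sorted(sorted(peaks, key=lambda i: spec[i])[-2:])
print("estimated DOAs (deg):", grid[top])
```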
{"title":"Employing moments of multiple high orders for high-resolution underdetermined DOA estimation based on MUSIC","authors":"Yuya Sugimoto, S. Miyabe, Takeshi Yamada, S. Makino, B. Juang","doi":"10.1109/WASPAA.2013.6701866","DOIUrl":"https://doi.org/10.1109/WASPAA.2013.6701866","url":null,"abstract":"Several extensions of the MUltiple SIgnal Classification (MUSIC) algorithm exploiting high order statistics were proposed to estimate directions of arrival (DOAs) with high resolution in underdetermined conditions. However, these methods entail a trade-off between two performance goals, namely, robustness and resolution, in the choice of orders because use of high-ordered statistics increases not only the resolution but also the statistical bias. To overcome this problem, this paper proposes a new extension of MUSIC using a nonlinear high-dimensional map, which corresponds to the joint analysis of moments of multiple orders and helps to realize the both advantages of robustness and high resolution of low-ordered and high-ordered statistics. Experimental results show that the proposed method can estimate DOAs more accurately than the conventional MUSIC extensions exploiting moments of a single high order.","PeriodicalId":341888,"journal":{"name":"2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134310732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}