Non-negative matrix factorization for irregularly-spaced transforms
Pub Date: 2013-10-01 | DOI: 10.1109/WASPAA.2013.6701845
P. Smaragdis, Minje Kim
Non-negative factorizations of spectra have recently become a very popular tool for various audio tasks. A long-standing problem with these methods is that they cannot easily be applied to other kinds of spectral decompositions, such as sinusoidal models, constant-Q transforms, wavelets, and reassigned spectra. This is because, with these transforms, the frequency and/or time values are real-valued and not sampled on a regular grid, so the data cannot be represented as a matrix to be factorized. In this paper we present a formulation of non-negative matrix factorization that can be applied to data with real-valued indices, making this family of methods feasible for a broader family of time/frequency transforms.
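For context, the conventional formulation that this paper generalizes requires the data to form a non-negative matrix indexed by integer frequency and time bins. A minimal sketch of that grid-based baseline (standard KL-divergence NMF with multiplicative updates; variable names are ours, not from the paper):

```python
import numpy as np

def nmf_kl(V, rank, n_iter=200, eps=1e-9):
    """Standard grid-based NMF: V (freq bins x time frames, non-negative)
    is approximated by W @ H via multiplicative updates for the KL
    divergence. Irregularly-spaced transforms break the assumption
    that V's rows and columns lie on an integer grid."""
    F, T = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, T)) + eps
    ones = np.ones_like(V)
    for _ in range(n_iter):
        W *= ((V / (W @ H + eps)) @ H.T) / (ones @ H.T + eps)
        H *= (W.T @ (V / (W @ H + eps))) / (W.T @ ones + eps)
    return W, H
```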
{"title":"Non-negative matrix factorization for irregularly-spaced transforms","authors":"P. Smaragdis, Minje Kim","doi":"10.1109/WASPAA.2013.6701845","DOIUrl":"https://doi.org/10.1109/WASPAA.2013.6701845","url":null,"abstract":"Non-negative factorizations of spectra have been a very popular tool for various audio tasks recently. A long-standing problem with these methods methods is that they cannot be easily applied on other kinds of spectral decompositions such as sinusoidal models, constant-Q transforms, wavelets and reassigned spectra. This is because with these transforms the frequency and/or time values are real-valued and not sampled on a regular grid. We therefore cannot represent them as a matrix that we can later factorize. In this paper we present a formulation of non-negative matrix factorization that can be applied on data with real-valued indices, thereby making the application of this family of methods feasible on a broader family of time/frequency transforms.","PeriodicalId":341888,"journal":{"name":"2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","volume":"269 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124234781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient implementation of the spectral division method for arbitrary virtual sound fields
Pub Date: 2013-10-01 | DOI: 10.1109/WASPAA.2013.6701828
J. Ahrens, Mark R. P. Thomas, I. Tashev
The Spectral Division Method is an analytic approach to sound field synthesis that determines the loudspeaker driving function in the wavenumber domain. Compact expressions for the driving function in the time-frequency or time domain can be determined only for a small number of special cases; in general, the spatial Fourier transforms involved have to be evaluated numerically. We present a detailed description of the computational procedure and minimize the number of required computations by exploiting two aspects: 1) the interval for the spatial sampling of the virtual sound field can be selected for each time-frequency bin, whereby low-frequency bins can be sampled more coarsely, and 2) the driving function only needs to be evaluated at the locations of the loudspeakers of a given array. The inverse spatial Fourier transform therefore need not be evaluated at all initial spatial sampling points, but only at those locations that coincide with loudspeakers.
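The second point lends itself to a brief illustration: rather than an inverse FFT over a dense spatial grid, the inverse spatial Fourier transform can be evaluated as a direct sum only at the loudspeaker positions. A sketch under our own assumptions (uniform wavenumber sampling, rectangle-rule integration; all names are ours):

```python
import numpy as np

def driving_at_loudspeakers(D_k, k, x_ls):
    """Evaluate D(x) = 1/(2*pi) * integral D_k(k') exp(j k' x) dk'
    only at the loudspeaker positions x_ls, instead of over the full
    spatial sampling grid. D_k is the wavenumber-domain driving
    function sampled at uniformly spaced wavenumbers k."""
    dk = k[1] - k[0]
    phases = np.exp(1j * np.outer(x_ls, k))    # (n_loudspeakers, n_k)
    return (phases @ D_k) * dk / (2.0 * np.pi)
```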
{"title":"Efficient implementation of the spectral division method for arbitrary virtual sound fields","authors":"J. Ahrens, Mark R. P. Thomas, I. Tashev","doi":"10.1109/WASPAA.2013.6701828","DOIUrl":"https://doi.org/10.1109/WASPAA.2013.6701828","url":null,"abstract":"The Spectral Division Method is an analytic approach for sound field synthesis that determines the loudspeaker driving function in the wavenumber domain. Compact expressions for the driving function in time-frequency domain or in time domain can only be determined for a low number of special cases. Generally, the involved spatial Fourier transforms have to be evaluated numerically. We present a detailed description of the computational procedure and minimize the number of required computations by exploiting the following two aspects: 1) The interval for the spatial sampling of the virtual sound field can be selected for each time-frequency bin, whereby low time-frequency bins can be sampled more coarsely, and 2) the driving function only needs to be evaluated at the locations of the loudspeakers of a given array. The inverse spatial Fourier transform is therefore not required to be evaluated at all initial spatial sampling points but only at those locations that coincide with loudspeakers.","PeriodicalId":341888,"journal":{"name":"2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121589846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effect of higher-order ambisonics on evaluating beamformer benefit in realistic acoustic environments
Pub Date: 2013-10-01 | DOI: 10.1109/WASPAA.2013.6701882
Chris Oreinos, J. Buchholz, Jorge Mejia
Multi-channel loudspeaker systems have been proposed for assessing the real-life benefit of devices such as hearing aids, cochlear implants, or mobile phones. This paper investigates to what extent sound fields recreated by Higher-Order Ambisonics (HOA) can be used to evaluate the performance of spatially selective multi-microphone processing schemes (beamformers) in complex acoustic environments. Two example schemes are considered: an adaptive directional microphone (ADM) and a contralateral-suppression bilateral beamformer (BBF), both implemented in the context of a hearing aid. The acoustic scenarios consist of a single speech target (0°) competing against three speech jammers (±90° and 180°), placed either in an anechoic environment or in a simulated reverberant classroom (T30 = 0.6 s). The effect of HOA on directional-algorithm performance is quantified through: (a) the adaptive, frequency-dependent algorithm gains, (b) the SNR improvement calculated in one-third-octave bands, and (c) the processed target frequency response. The HOA reconstruction errors influence the beamformers in two main ways: first, by altering the spatial characteristics of the sound field, which in turn modifies the adaptation of the algorithms, and second, by affecting the spectral content of the sources. The results suggest that although HOA (here 7th order) does not degrade the broadband, long-term, intelligibility-weighted SNR improvement of the two beamformers, it imposes a low-pass effect on the processed target. This renders the HOA coding problematic above the system's cut-off frequency.
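As a pointer to how a broadband intelligibility-weighted SNR improvement of the kind reported here can be assembled from per-band values, a minimal sketch; the band weights (e.g., from a speech-intelligibility importance table) and all names are our assumptions, not the authors' exact metric:

```python
import numpy as np

def iw_snr_improvement(snr_in_db, snr_out_db, band_weights):
    """Combine per-band (e.g., one-third-octave) SNR gains into a
    single broadband figure, weighting each band by its assumed
    importance for speech intelligibility."""
    w = np.asarray(band_weights, dtype=float)
    w = w / w.sum()  # normalize the importance weights
    gains = np.asarray(snr_out_db, dtype=float) - np.asarray(snr_in_db, dtype=float)
    return float(np.sum(w * gains))
```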
{"title":"Effect of higher-order ambisonics on evaluating beamformer benefit in realistic acoustic environments","authors":"Chris Oreinos, J. Buchholz, Jorge Mejia","doi":"10.1109/WASPAA.2013.6701882","DOIUrl":"https://doi.org/10.1109/WASPAA.2013.6701882","url":null,"abstract":"Multi-channel loudspeaker systems have been proposed to assess the real-life benefit of devices such as hearing aids, cochlear implants, or mobile phones. This paper investigates to what extent sound fields recreated by Higher-Order Ambisonics (HOA) can be used to evaluate the performance of spatially selective multi-microphone processing schemes (beamformers) inside complex acoustic environments. Two example schemes are considered: an adaptive directional microphone (ADM) and a contralateral suppression bilateral beamformer (BBF), both implemented in the context of a hearing aid device. The acoustic scenarios consist of a single speech target (0°) competing against three speech jammers (±90° and 180°) set either in an anechoic or in a reverberant simulated classroom (T30 = 0.6s). The HOA effect on the directional algorithm performance is quantified through: (a) the adaptive, frequency-dependent, algorithm gains, (b) the SNR improvement calculated in one-third octave bands, and (c) the processed target frequency response. The HOA reconstruction errors influence the beamformers in mainly two ways; first, by altering the spatial characteristics of the sound field, which in turn modifies the adaptation of the algorithms, and second, by affecting the spectral content of the sources. The results suggest that although HOA (here 7th order) does not degrade the broadband, long-term, intelligibility-weighted SNR improvement of the two beamformers, it imposes a low-pass effect on the processed target. This renders the HOA coding problematic above the system's cut-off frequency.","PeriodicalId":341888,"journal":{"name":"2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127939975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of spectral transforms for music signal analysis
Pub Date: 2013-10-01 | DOI: 10.1109/WASPAA.2013.6701843
A. Nagathil, Rainer Martin
In this paper we present a study on the spectral analysis of music signals, comparing the time-domain representation, the short-time Fourier transform (STFT), and the constant-Q transform (CQT), each additionally combined with different signal-dependent transforms. The comparison is carried out with respect to spectral compactness, data compression ability, and temporal continuity of the transform coefficients, for which we propose measures in this paper. In addition, we investigate the performance of these transforms in a source separation task that aims to recover the main melody line from a mixed instrument recording. Our experiments reveal that a rank-reduced principal component analysis based on a CQT representation yields the best results in terms of instrumental source separation measures and listening impression, which points to the potential of the CQT for improving existing source separation methods, currently often based on the STFT.
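To make the best-performing configuration concrete, here is a sketch of a rank-reduced PCA applied to a CQT magnitude representation; the use of librosa and every parameter choice are our assumptions, not the authors' pipeline:

```python
import numpy as np
import librosa

def rank_reduced_cqt(y, sr, rank=8):
    """Compute a CQT magnitude spectrogram, project it onto its first
    `rank` principal components (via SVD of the mean-centered data),
    and reconstruct -- the kind of signal-dependent, rank-reduced
    transform the paper evaluates."""
    C = np.abs(librosa.cqt(y, sr=sr))  # (n_bins, n_frames)
    mu = C.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(C - mu, full_matrices=False)
    C_hat = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank] + mu
    return np.maximum(C_hat, 0.0)      # keep magnitudes non-negative
```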
{"title":"Evaluation of spectral transforms for music signal analysis","authors":"A. Nagathil, Rainer Martin","doi":"10.1109/WASPAA.2013.6701843","DOIUrl":"https://doi.org/10.1109/WASPAA.2013.6701843","url":null,"abstract":"In this paper we present a study on the spectral analysis of music signals comparing the time domain representation, the short-time Fourier transform (STFT) and the constant-Q transform (CQT) which are additionally combined with different signal-dependent transforms. The comparison is carried out with respect to the spectral compactness, the data compression ability and the temporal continuity of transform coefficients for which we propose measures in this paper. In addition, we investigate the performance of these transforms in a source separation task in which we strive for recovering the main melody line from a mixed instrument recording. Our experiments reveal that performing a rank-reduced principal component analysis based on a CQT representation exhibits the best results in terms of instrumental source separation measures and listening impression which points towards the potential of the CQT for improving existing source separation methods which are currently often based on the STFT.","PeriodicalId":341888,"journal":{"name":"2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","volume":"144 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124644858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Speech enhancement by sparse, low-rank, and dictionary spectrogram decomposition
Pub Date: 2013-10-01 | DOI: 10.1109/WASPAA.2013.6701883
Zhuo Chen, D. Ellis
Speech enhancement requires some principle by which to distinguish speech from noise, and the most successful separation requires strong models for both. If, however, the noise encountered differs significantly from the system's assumptions, performance will suffer. In this work, we propose a novel speech enhancement system based on decomposing the spectrogram into sparse activations of a dictionary of target speech templates plus a low-rank background model, which makes few assumptions about the noise other than its limited spectral variation. A variant of this model specifically designed to handle transient noise intrusions is also proposed. Evaluation via BSS EVAL and PESQ shows that the new approaches improve the signal-to-distortion ratio in most cases, and PESQ in high-noise conditions, when compared to several traditional speech enhancement algorithms including log-MMSE.
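A sketch of this model class (not the authors' exact algorithm): a fixed speech dictionary with L1-penalized activations plus a jointly estimated low-rank noise term, fitted with KL multiplicative updates; all names and the Wiener-style masking step are our assumptions:

```python
import numpy as np

def sparse_lowrank_enhance(V, W_s, noise_rank=4, sparsity=0.1,
                           n_iter=100, eps=1e-9):
    """Decompose magnitude spectrogram V into W_s @ H_s (speech: fixed
    dictionary, sparse activations) plus W_n @ H_n (low-rank noise),
    then mask V with the speech share of the model."""
    F, T = V.shape
    rng = np.random.default_rng(0)
    H_s = rng.random((W_s.shape[1], T)) + eps
    W_n = rng.random((F, noise_rank)) + eps
    H_n = rng.random((noise_rank, T)) + eps
    ones = np.ones_like(V)
    for _ in range(n_iter):
        R = V / (W_s @ H_s + W_n @ H_n + eps)
        H_s *= (W_s.T @ R) / (W_s.T @ ones + sparsity + eps)
        R = V / (W_s @ H_s + W_n @ H_n + eps)
        W_n *= (R @ H_n.T) / (ones @ H_n.T + eps)
        R = V / (W_s @ H_s + W_n @ H_n + eps)
        H_n *= (W_n.T @ R) / (W_n.T @ ones + eps)
    speech = W_s @ H_s
    mask = speech / (speech + W_n @ H_n + eps)  # Wiener-style mask
    return mask * V
```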
{"title":"Speech enhancement by sparse, low-rank, and dictionary spectrogram decomposition","authors":"Zhuo Chen, D. Ellis","doi":"10.1109/WASPAA.2013.6701883","DOIUrl":"https://doi.org/10.1109/WASPAA.2013.6701883","url":null,"abstract":"Speech enhancement requires some principle by which to distinguish speech and noise, and the most successful separation requires strong models for both speech and noise. If, however, the noise encountered differs significantly from the system's assumptions, performance will suffer. In this work, we propose a novel speech enhancement system based on decomposing the spectrogram into sparse activation of a dictionary of target speech templates, and a low-rank background model, which makes few assumptions about the noise other than its limited spectral variation. A variation of this model specifically designed to handle transient noise intrusions is also proposed. Evaluation via BSS EVAL and PESQ show that the new approaches improve signal-to-distortion ratio in most cases and PESQ in high-noise conditions when compared to several traditional speech enhancement algorithms including log-MMSE.","PeriodicalId":341888,"journal":{"name":"2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122445248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modulation filtering for structured time-frequency estimation of audio signals
Pub Date: 2013-10-01 | DOI: 10.1109/WASPAA.2013.6701863
Kai Siedenburg, P. Depalle
This paper considers the estimation of time-frequency coefficients of audio signals from the viewpoint of spectro-temporal modulation analysis. It is shown that estimators employing neighborhood-smoothed shrinkage masks are closely related to modulation filters. The usefulness of this perspective is demonstrated first by separating an artificial mixture of components with different orientations in the time-frequency plane. It is then shown that modulation filters can be learned directly from audio, and that their use improves the state of the art in noise reduction by a small margin, as measured by signal-to-noise ratio.
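The link between neighborhood smoothing and modulation filtering can be sketched directly: smoothing the local SNR with a 2D time-frequency kernel before forming the shrinkage gain is itself a modulation filter. A sketch with our own names and a generic Wiener-type gain, not the paper's specific estimator:

```python
import numpy as np
from scipy.signal import fftconvolve

def neighborhood_shrinkage(X, noise_psd, kernel, gain_floor=0.05):
    """Shrink STFT coefficients X using a local SNR estimate that is
    smoothed over a time-frequency neighborhood. The 2D convolution
    with `kernel` is what acts as a modulation filter."""
    snr = np.maximum(np.abs(X) ** 2 / (noise_psd + 1e-12) - 1.0, 0.0)
    snr_smooth = fftconvolve(snr, kernel, mode="same")
    gain = snr_smooth / (snr_smooth + 1.0)  # Wiener-type shrinkage
    return np.maximum(gain, gain_floor) * X
```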
{"title":"Modulation filtering for structured time-frequency estimation of audio signals","authors":"Kai Siedenburg, P. Depalle","doi":"10.1109/WASPAA.2013.6701863","DOIUrl":"https://doi.org/10.1109/WASPAA.2013.6701863","url":null,"abstract":"This paper considers the estimation of time-frequency coefficients of audio signals from the viewpoint of spectro-temporal modulation analysis. It is shown that estimators employing neighborhood-smoothed shrinkage masks are closely related to modulation filters. The usefulness of this perspective is first demonstrated by separating an artificial mixture of components with different orientation in time-frequency. It is secondly shown that modulation filters can be learned directly from audio and that their usage improves the state of the art in noise-reduction by a small margin when measured by signal to noise ratio.","PeriodicalId":341888,"journal":{"name":"2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126540605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The REVERB challenge: A common evaluation framework for dereverberation and recognition of reverberant speech
Pub Date: 2013-10-01 | DOI: 10.1109/WASPAA.2013.6701894
K. Kinoshita, Marc Delcroix, Takuya Yoshioka, T. Nakatani, A. Sehr, Walter Kellermann, R. Maas
Recently, substantial progress has been made in the field of reverberant speech signal processing, including both single- and multichannel dereverberation techniques, and automatic speech recognition (ASR) techniques robust to reverberation. To evaluate state-of-the-art algorithms and obtain new insights regarding potential future research directions, we propose a common evaluation framework including datasets, tasks, and evaluation metrics for both speech enhancement and ASR techniques. The proposed framework will be used as a common basis for the REVERB (REverberant Voice Enhancement and Recognition Benchmark) challenge. This paper describes the rationale behind the challenge, and provides a detailed description of the evaluation framework and benchmark results.
{"title":"The reverb challenge: A common evaluation framework for dereverberation and recognition of reverberant speech","authors":"K. Kinoshita, Marc Delcroix, Takuya Yoshioka, T. Nakatani, A. Sehr, Walter Kellermann, R. Maas","doi":"10.1109/WASPAA.2013.6701894","DOIUrl":"https://doi.org/10.1109/WASPAA.2013.6701894","url":null,"abstract":"Recently, substantial progress has been made in the field of reverberant speech signal processing, including both single- and multichannel dereverberation techniques, and automatic speech recognition (ASR) techniques robust to reverberation. To evaluate state-of-the-art algorithms and obtain new insights regarding potential future research directions, we propose a common evaluation framework including datasets, tasks, and evaluation metrics for both speech enhancement and ASR techniques. The proposed framework will be used as a common basis for the REVERB (REverberant Voice Enhancement and Recognition Benchmark) challenge. This paper describes the rationale behind the challenge, and provides a detailed description of the evaluation framework and benchmark results.","PeriodicalId":341888,"journal":{"name":"2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130394285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A hybrid LF-Rosenberg frequency-domain model of the glottal pulse
Pub Date: 2013-10-01 | DOI: 10.1109/WASPAA.2013.6701892
Sandra Dias, Aníbal J. S. Ferreira
In this paper we describe advances in the design of a new frequency-domain algorithm for glottal source estimation whose conceptual approach we reported recently [1]. These advances result from accurate sinusoidal/harmonic analysis and synthesis of two concomitant acoustic signals: the glottal source signal captured near the vocal folds, and the corresponding voiced signal captured outside the mouth. We describe the experimental procedure, which was performed by an ORL specialist using a rigid video-laryngoscope and two small, high-quality microphones. Six subjects participated in the tests, and recordings were made for the vowels /a/ and /i/. The data analysis allowed us to draw conclusions about the magnitude features and the phase-related NRD features of the glottal source signal. In addition, we devised a new frequency-domain glottal pulse model, combining features of the Liljencrants-Fant and Rosenberg models, that better matches the observed data. The derivatives of the three models are obtained using accurate frequency-domain processing. The paper concludes with the next research steps.
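For reference, a sketch of the classical time-domain Rosenberg (trigonometric) pulse, one of the two models whose features the hybrid combines; the Liljencrants-Fant model and the authors' frequency-domain formulation are more involved, and this parameterization is our own:

```python
import numpy as np

def rosenberg_pulse(fs, T0, open_quotient=0.6, speed_quotient=2.0):
    """One period of the Rosenberg glottal pulse: a raised-cosine
    opening phase followed by a quarter-cosine closing phase, zero
    during the closed phase. fs in Hz, T0 (pitch period) in seconds."""
    n = int(round(fs * T0))
    t = np.arange(n) / fs
    Te = open_quotient * T0                            # open phase
    Tp = Te * speed_quotient / (1.0 + speed_quotient)  # opening part
    Tn = Te - Tp                                       # closing part
    g = np.zeros(n)
    opening = t <= Tp
    closing = (t > Tp) & (t <= Te)
    g[opening] = 0.5 * (1.0 - np.cos(np.pi * t[opening] / Tp))
    g[closing] = np.cos(np.pi * (t[closing] - Tp) / (2.0 * Tn))
    return g
```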
{"title":"A hybrid LF-Rosenberg frequency-domain model of the glottal pulse","authors":"Sandra Dias, Aníbal J. S. Ferreira","doi":"10.1109/WASPAA.2013.6701892","DOIUrl":"https://doi.org/10.1109/WASPAA.2013.6701892","url":null,"abstract":"In this paper we describe innovative advances to the design of a new frequency-domain algorithm to glottal source estimation whose conceptual approach we have reported recently [1]. Those advances result from accurate sinusoidal/harmonic analysis and synthesis of two concomitant acoustic signals: the glottal source signal captured near the vocal folds, and the corresponding voiced signal captured outside the mouth. We describe the experimental procedure which was performed by an ORL specialist using a rigid video-laryngoscope and two tiny and high-quality microphones. Six subjects have participated in the tests and records were made for vowels /a/ and /i/. The data analysis allowed us to conclude on the magnitude and on the phase-related NRD features of the glottal source signal. In addition, a new frequency-domain glottal pulse model combining features of the Liljencrants-Fant and Rosenberg models has been devised that is a better match to the observed data. The derivatives of the three models are obtained using accurate frequency-domain processing. The paper concludes with next research steps.","PeriodicalId":341888,"journal":{"name":"2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122278111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deconvolution of plenacoustic images
Pub Date: 2013-10-01 | DOI: 10.1109/WASPAA.2013.6701850
Lucio Bianchi, Dejan Markovic, F. Antonacci, A. Sarti, S. Tubaro
In this paper we propose a methodology, based on deconvolution techniques borrowed from aerospace acoustic imaging, aimed at improving the resolution of plenacoustic imaging. To reduce the computational burden, we also propose a modification of the minimization problem that exploits the highly structured information contained in the plenacoustic image. Experiments and simulations show the accuracy improvement gained by applying the deconvolution operator.
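The abstract does not name the deconvolution operator, so as a generic stand-in for this class of methods (non-negative iterative deconvolution of a beamformed map by the array's point-spread function), here is a Richardson-Lucy sketch; the PSF is assumed known and normalized, and all names are ours:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
    """Iteratively deconvolve a non-negative acoustic image by the
    point-spread function `psf` (assumed to sum to 1). Each step
    re-blurs the current estimate and corrects it by the ratio of
    the observed image to the re-blurred one."""
    est = np.full_like(image, image.mean())
    psf_flipped = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(est, psf, mode="same") + eps
        est = est * fftconvolve(image / blurred, psf_flipped, mode="same")
    return est
```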
{"title":"Deconvolution of plenacoustic images","authors":"Lucio Bianchi, Dejan Markovic, F. Antonacci, A. Sarti, S. Tubaro","doi":"10.1109/WASPAA.2013.6701850","DOIUrl":"https://doi.org/10.1109/WASPAA.2013.6701850","url":null,"abstract":"In this paper we propose a methodology aimed at improving the resolution capabilities of plenacoustic imaging, which is based on deconvolution techniques mutuated from aerospace acoustic imaging. In order to reduce the computational burden, we also propose a modification of the minimization problem that exploits the highly structured information contained in the plenacoustic image. Experiments and simulations show the improvement of the accuracy gained by applying the deconvolution operator.","PeriodicalId":341888,"journal":{"name":"2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","volume":"395 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122722365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling early reflections of room impulse responses using a radiance transfer method
Pub Date: 2013-10-01 | DOI: 10.1109/WASPAA.2013.6701841
Hequn Bai, G. Richard, L. Daudet
In this paper we propose an extension of the Acoustic Radiance Transfer (ART) method for modeling room acoustics. The original ART method is very efficient at modeling diffuse reflections and late reverberation, but it does not represent early echoes well. We propose an extension that models the early part of the response while keeping the advantages of the original method for late-reverberation simulation. Experimental results confirm that the proposed method reconstructs early reflections more accurately, on average, than the traditional ART method, and that comparable accuracy can be obtained with lower complexity and memory requirements.
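To make the diffuse-reflection machinery that ART builds on concrete, here is a schematic radiosity-style energy exchange between surface patches; the propagation delays that ART tracks to assemble an echogram are omitted, and all names are ours:

```python
import numpy as np

def diffuse_bounces(B0, F, absorption, n_bounces=50):
    """Track per-bounce patch energy: at each bounce, radiance is
    redistributed through the form-factor matrix F (patch-to-patch
    visibility/geometry) and attenuated by surface absorption.
    Returns an array of shape (n_bounces + 1, n_patches)."""
    energies = [np.asarray(B0, dtype=float)]
    for _ in range(n_bounces):
        energies.append((1.0 - absorption) * (F @ energies[-1]))
    return np.stack(energies)
```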
{"title":"Modeling early reflections of room impulse responses using a radiance transfer method","authors":"Hequn Bai, G. Richard, L. Daudet","doi":"10.1109/WASPAA.2013.6701841","DOIUrl":"https://doi.org/10.1109/WASPAA.2013.6701841","url":null,"abstract":"In this paper we propose an extension for the Acoustic Radiance Transfer (ART) method for the modeling of room acoustics. The original ART method is very efficient for modeling diffuse reflections and the late reverberation but does not well represent the early echoes. We then propose, in this paper, an extension of the ART method which allows to model the early part while keeping the advantages of the original method for the late reverberation simulation. The experimental results confirm that the proposed method gives more accurate reconstruction of the early reflections than the traditional ART method in average and that comparable accuracy can be obtained at lower complexity and memory requirements than the traditional ART method.","PeriodicalId":341888,"journal":{"name":"2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127474697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}